Mortgage Compliance: How Your Favorite AI Tool Could Cost You Everything
AI is amazing. It’s also one of the fastest ways to create compliance violations, data breaches, and lawsuits…if you’re not careful.
Artificial Intelligence is quickly becoming one of the most powerful tools in the mortgage industry.
It can write emails, analyze borrower scenarios, automate workflows, and even assist with underwriting decisions. For many loan officers and broker owners, it feels like a massive competitive advantage.
But here’s the reality not enough people are talking about:
AI is also one of the fastest ways to create compliance violations, data breaches, and lawsuits…if you’re not careful.
Right now, we’re in what can only be described as the “wild west” of AI compliance in mortgage lending. And if history tells us anything, that window doesn’t stay open for long.
I recently sat down with compliance expert Jim Bell of MSource, and we broke down what’s happening, where the real risks are, and what you need to do right now to stay on the right side of regulators and avoid becoming the example everyone learns from.
The Biggest Misconception: “AI Did It” Is Not a Defense
Let’s get this out of the way immediately:
You are 100% responsible for anything AI produces or touches in your business.
It doesn’t matter if:
A vendor built the tool
Your loan officer found it online
A chatbot generated the response
Or AI made the underwriting recommendation
From a compliance standpoint, there is no separation between you and the technology: its output is your output.
Expert Insight:
“‘AI did it’ is not a defense. Regulators will apply the same laws we’ve always had; you’re responsible for the outputs, period.”
That means:
Fair lending laws still apply
Privacy laws still apply
Data security requirements still apply
Marketing compliance rules still apply
AI inherits all of it.
The Real Risk: Non-Public Personal Information (NPI)
If there’s one area that should immediately concern you, it’s this:
Loan officers uploading borrower data into AI tools.
We’re talking about:
Tax returns
Bank statements
Social Security numbers
Loan applications
Email conversations
This is Non-Public Personal Information (NPI), and mishandling it is one of the fastest ways to trigger:
Regulatory action
Lawsuits
Data breach notifications
Loss of consumer trust
Why This Is So Dangerous
Most AI tools:
Store data externally
May use inputs to train models
Are not designed for mortgage compliance
Lack proper security certifications
If you don’t know:
Where the data is going
How it’s stored
Who has access
Whether it’s being reused
Then you’re operating in a high-risk environment.
Regulators Are Already Moving (Whether You Realize It or Not)
Even though formal AI regulation is still evolving, guidance is already coming from:
Federal housing entities
State regulators
Investors and secondary market players
And the message is consistent:
1. You Must Have Policies and Procedures for AI
This is no longer optional.
You need documented:
Acceptable use policies
Data handling standards
Tool approval processes
Risk classification frameworks
2. You Must Inventory Your AI Tools
Yes, even the ones you’re not actively using.
That includes:
Chatbots
Email assistants
CRM automations
Marketing generators
Underwriting support tools
Each tool should be:
Identified
Categorized
Risk-rated (high, medium, low)
3. You Must Document Oversight
Doing the work is not enough.
You must be able to prove:
You reviewed outputs
You tested tools
You evaluated risk
You monitored usage
If it’s not documented, it didn’t happen.
The Overlooked Threat: Lawsuits (Not Just Regulators)
Most people assume regulators are the biggest threat.
They’re not.
Plaintiff attorneys are.
We’re already seeing:
TCPA lawsuits
Website compliance lawsuits
Data privacy claims
AI is the next wave.
Here’s How It Plays Out
A loan officer uploads borrower data into an AI tool
That data is exposed, reused, or breached
A law firm finds evidence (social posts, screenshots, etc.)
Lawsuit filed
And here’s the scary part:
There are already attorneys actively looking for this.
What feels like “harmless experimentation” today could become:
Exhibit A in a lawsuit
A class action trigger
A reputational nightmare
Vendor Risk: Just Because It Exists Doesn’t Mean It’s Safe
One of the most dangerous trends right now:
Loan officers or small teams building or adopting AI tools and sharing them internally.
Sounds innovative.
But here’s the problem:
Most of these tools were not built with compliance in mind.
Questions You MUST Ask Any AI Vendor
Before using any tool, you need answers to:
Where is the data stored? (U.S. vs. international)
Is the platform SOC 2 compliant?
Does it meet GLBA (Gramm-Leach-Bliley Act) requirements?
How is data encrypted (in transit and at rest)?
Is data used to train models?
What happens in the event of a breach?
Do they have breach notification protocols?
If you can’t answer these questions confidently:
Do not use the tool with borrower data.
AI in Underwriting: Efficiency vs. Liability
AI-assisted underwriting is gaining traction fast.
But it introduces a critical issue:
Algorithmic Bias & Disparate Impact
Regulators are increasingly focused on:
How decisions are made
Whether outcomes are fair
Whether certain groups are negatively impacted
If AI contributes to a decision, you must be able to explain:
Why the decision was made
What data was used
How the outcome was determined
If you can’t?
You’re exposed to:
Fair lending violations
Discrimination claims
Investor buyback demands (unsellable loans)
The Illusion of Speed: Why “Faster” Can Cost You Everything
Yes, AI can:
Speed up processes
Reduce manual work
Increase output
But here’s the tradeoff:
Speed without oversight equals risk.
We’re already seeing cases where:
AI-driven processes created errors
Loans became unsellable
Institutions had to hold bad paper
That’s not innovation, that’s liability.
What Smart Mortgage Professionals Are Doing Right Now
The best operators aren’t avoiding AI.
They’re using it strategically and responsibly.
Here’s what that looks like:
1. Creating AI Policies Immediately
Even simple policies are better than none:
What tools are allowed
What data can be used
What’s strictly prohibited
2. Training Their Teams
Just like:
AML
RESPA
Fair lending
AI compliance training is becoming essential.
3. Vetting Every Tool
No exceptions.
Every tool goes through:
Security review
Compliance review
Risk assessment
4. Avoiding NPI in Public AI Tools
This is non-negotiable.
If the tool is not approved and secured, do not input borrower data.
5. Documenting Everything
Policies
Reviews
Approvals
Monitoring
Because when something goes wrong, documentation is your defense.
The Bigger Picture: Regulate Yourself or Be Regulated
The mortgage industry has seen this before.
Licensing requirements
Dodd-Frank
MLO compensation rules
All came from one place:
Failure to self-regulate.
AI will follow the same path.
“If the industry doesn’t get ahead of this, regulators will step in, and the result won’t be flexible or forgiving.”
The Wrap: Protect the Borrower First
Before compliance…
Before regulators…
Before lawsuits…
There’s one principle that should guide everything:
Protect your client’s data like it’s your own.
Because at the end of the day:
Trust is your business
Data is your responsibility
And AI is just a tool
How you use it determines whether it becomes your biggest advantage or your biggest liability.
You can connect with Jim Bell at MSource24.com
And if you have a story, system, or strategy that could help other brokers grow, you may even be a future guest on The Broker Journey.
Because the best lessons in this industry come from the people in the trenches doing the work every day. If that’s you, send me a message!
Author: Jason Frazier

