The question isn't whether AI is coming to your financial institution. It's already here — in your fraud detection system, your loan origination software, your customer service chatbot, and quite possibly in the browser your loan officer has open right now. The real question — the one examiners are already asking — is whether you're governing it or it's governing you.
According to a 2025 report, 63% of breached organizations lacked AI governance policies. The good news? Building an AI governance framework doesn't mean starting from scratch. You can extend your existing risk management program in a few key steps.
Related: Watch Rafael’s full breakdown in our latest on-demand webinar.
There's no single AI rulebook for FIs, but regulatory pressure is coming from multiple directions at once, and your examiners will still have expectations.
At the federal level, existing frameworks already apply to AI use, even when the technology isn't explicitly mentioned.
On the state level, Colorado, California, Utah, and Texas have all passed AI-related laws. If you operate across state lines, your governance framework needs to meet the most restrictive requirements you face.
And just this month, the U.S. Department of the Treasury announced a major public-private initiative to strengthen cybersecurity and AI risk management across the financial sector — a signal that AI governance is now a priority at the highest levels of government.
For anyone in the mortgage space, the deadline is even more immediate. Freddie Mac has added a formal AI and machine learning governance requirement to its Single-Family Seller/Servicer Guide, effective March 3, 2026. If you're selling loans to Freddie Mac and AI touches any part of that process — including vendor-embedded AI you may not have thought about — you need documented governance now.
Waiting for perfect regulations before acting isn't a strategy — it’s a risk.
Related: How to Manage Third-Party AI Risk: 10 Tips for Financial Institutions
Building a responsible AI risk management program comes down to four things: knowing what you have, understanding your risks, governing your vendors, and documenting everything.
| Step | Focus | What It Means |
| --- | --- | --- |
| 1 | Build your AI inventory | Identify every AI tool in use across your institution — including shadow AI and vendor-embedded AI that may not be formally sanctioned. |
| 2 | Understand your risk | Classify each tool by risk level so you know where to focus oversight — from high-impact uses like credit underwriting to low-risk tools like grammar checkers. |
| 3 | Govern your vendors | Extend your third-party risk management program with AI-specific due diligence, contract provisions, and ongoing monitoring. |
| 4 | Document everything | Build a clear paper trail that demonstrates your AI governance program is real, operational, and examiner-ready. |
Before you can govern AI, you have to know how your staff and vendors are using it, and that use isn't always in the most obvious places.
Shadow AI refers to AI tools employees use without official approval or oversight from IT, risk, or compliance: a loan officer asking ChatGPT to draft a customer letter, or a marketing team using AI image generators.
The starting point for any AI governance program is a complete inventory. That means surveying all departments, reviewing existing vendor contracts for AI-related language — search for terms like "machine learning," "artificial intelligence," and "automated decisioning" — and getting business, risk, compliance, and IT in the same room. After all, no single function has full visibility into AI use.
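If your vendor contracts are available as text exports, even a short script can speed up that keyword review. The sketch below is illustrative only: it assumes a folder of plain-text contract files (the folder name and term list are placeholders to adapt) and simply flags any document that mentions an AI-related term.

```python
from pathlib import Path

# Terms worth flagging during a contract review (extend as needed).
AI_TERMS = ["machine learning", "artificial intelligence", "automated decisioning"]

def flag_contracts(folder: str) -> dict:
    """Return {filename: [terms found]} for every contract that mentions an AI term."""
    hits = {}
    for path in Path(folder).glob("*.txt"):  # assumes contracts exported as plain text
        text = path.read_text(errors="ignore").lower()
        found = [term for term in AI_TERMS if term in text]
        if found:
            hits[path.name] = found
    return hits

if __name__ == "__main__":
    for name, terms in sorted(flag_contracts("vendor_contracts").items()):
        print(f"{name}: {', '.join(terms)}")
```

A flagged contract isn't proof of AI use; it's a prompt to pull the agreement and ask the vendor directly.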
For every AI tool you identify, ask: What decisions does it influence? What customer or member data does it access? Does it interact directly with customers or members? Can its outputs be explained? If an examiner asked you tomorrow to demonstrate how you identify, assess, and govern all AI use across your organization — including embedded vendor tools — what concrete evidence could you produce? Asking the right questions is key to building a strong AI governance foundation.
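One way to keep those answers consistent across departments is to capture each tool as a structured inventory record. Here's a minimal sketch, assuming a Python-based inventory; every field name is illustrative rather than a required standard.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row in an AI inventory -- field names are illustrative, not prescriptive."""
    name: str
    owner_department: str
    decisions_influenced: list[str]  # e.g., ["credit underwriting"]
    data_accessed: list[str]         # e.g., ["member PII", "transaction history"]
    customer_facing: bool            # interacts directly with customers or members?
    outputs_explainable: bool        # can staff explain how it reached its output?
    vendor_embedded: bool = False    # AI baked into a vendor product?
    sanctioned: bool = True          # False = shadow AI surfaced by the survey

inventory = [
    AIToolRecord(
        name="ChatGPT (ad hoc drafting)",
        owner_department="Lending",
        decisions_influenced=["customer correspondence"],
        data_accessed=["customer names"],
        customer_facing=False,
        outputs_explainable=False,
        sanctioned=False,
    ),
]

shadow_ai = [tool.name for tool in inventory if not tool.sanctioned]
print(shadow_ai)  # ['ChatGPT (ad hoc drafting)']
```

Whether the record lives in code, a spreadsheet, or your GRC platform matters less than answering the same questions for every tool.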
Related: What is AI Auditing and Why Does It Matter?
Just as with your vendors, not all AI needs the same level of governance. A tiered approach lets you focus resources where the risk is highest.
The key classification questions: What data does the tool touch? Whom does it impact, and how significantly? What does the exposure look like if something goes wrong?
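Those questions translate naturally into a simple scoring rubric. The sketch below shows one possible approach; the weights, thresholds, and tier names are placeholder assumptions a risk team would calibrate, not an established methodology.

```python
def classify_risk_tier(touches_sensitive_data: bool,
                       drives_credit_or_account_decisions: bool,
                       customer_facing: bool) -> str:
    """Assign an illustrative oversight tier from the classification questions."""
    score = (
        (2 if touches_sensitive_data else 0)
        + (3 if drives_credit_or_account_decisions else 0)
        + (1 if customer_facing else 0)
    )
    if score >= 4:
        return "high"      # e.g., credit underwriting models
    if score >= 2:
        return "moderate"  # e.g., a chatbot answering account questions
    return "low"           # e.g., a grammar checker

print(classify_risk_tier(True, True, False))   # high
print(classify_risk_tier(False, False, True))  # low
```

The point isn't the arithmetic; it's that every tool gets tiered the same way, so oversight effort follows risk.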
When a vendor uses AI, it becomes your risk. You can't outsource accountability — regulators will hold you responsible for vendor AI failures just as they would your own.
A challenge FIs often face is that with conventional software, you get documentation, specs, maybe even source code. AI models don't work that way. You may never see how a model makes a decision, and models change over time: they learn, they drift, and they get retrained on data you can't see. That means a model may be trained on data with built-in biases you'd never knowingly accept.
Your third-party risk management due diligence needs to reflect this reality. For any vendor using AI, ask: How was this model developed and validated? What data was it trained on? How are model updates handled, and will you notify us before material changes? What bias testing do you perform? What audit rights do we have?
Some vendors will push back, saying their systems are proprietary. Remember, you're in the driver's seat: this is your institution. Build a standardized AI due diligence questionnaire and use it consistently. It doesn't have to be perfect on day one; start with the questions you know to ask and refine it as you learn more.
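As a starting point, that questionnaire can be as simple as a shared list of questions plus a way to spot unanswered ones. This sketch is an assumption about structure, not a regulatory template; the question text mirrors the items above.

```python
# Illustrative due diligence questionnaire -- the structure and helper are assumptions.
AI_DUE_DILIGENCE_QUESTIONS = [
    "How was this model developed and validated?",
    "What data was it trained on?",
    "How are model updates handled, and will you notify us before material changes?",
    "What bias testing do you perform?",
    "What audit rights do we have?",
]

def unanswered(responses: dict) -> list:
    """List questions a vendor left blank so follow-up items are easy to track."""
    return [q for q in AI_DUE_DILIGENCE_QUESTIONS if not responses.get(q, "").strip()]

vendor_responses = {
    "How was this model developed and validated?": "Independent validation report available under NDA.",
    "What bias testing do you perform?": "",
}
print(unanswered(vendor_responses))  # the open items to chase down
```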
One critical contract provision that's often missing: advance notice of material model changes. Vendors frequently deploy updates without telling customers. Require notification before changes go live so you can assess the impact before it affects your institution.
Related: How Generative AI Impacts Your FI’s Risk Management Program
When an examiner asks about AI governance, they're assessing whether you've thought about the risk systematically or whether you're letting it slide.
If it isn't documented, it didn't happen. A mature program includes:
- An AI inventory with risk classification
- A board-approved risk appetite statement
- Committee meeting minutes documenting ongoing oversight
- Vendor due diligence files with AI-specific elements
- Bias testing results and methodology
- Model validation reports
- Staff training records
Staff training is especially critical and often overlooked. Who in your FI has been trained on AI risks and policies, and on what specifically? The answer needs to be documented.
Related: How to Effectively Communicate Policies at Your Financial Institution
We started with a simple question: Are you governing AI, or is it governing you? The decisions you make now will compound, and you don't want to be playing catch-up as this technology keeps moving.
Start with your inventory. Extend your existing frameworks. Document everything. Hold your vendors accountable. That's what good looks like — and it's something you can start implementing today.
Want to go deeper? Watch the full on-demand webinar and download our free checklist for a practical, step-by-step framework to assess and govern vendor AI use.