
Using AI in Financial Services: Best Practices and Red Flags

7 min read
May 14, 2026

Financial organizations are adopting AI faster than most governance frameworks can keep up. Some of that adoption is intentional, but a lot of it isn't. According to the Ncontracts 2026 State of TPRM Survey, only 9% of financial organizations have fully identified and documented which of their vendors use AI — and that's before accounting for the tools their employees are using. 

The challenge isn't just keeping pace with what's new. It's getting a handle on what's already there. Whether you're evaluating a new tool or revisiting your AI governance strategy, this post covers how to identify AI use across your organization, what to look for when evaluating tools, and the red flags worth knowing before you commit.

Related: Ncontracts Introduces Nquiry: AI-Powered Regulatory Intelligence 


How to Identify AI in Your Organization and Vendor Network

Before you can govern AI, you have to find it — and that's harder than it sounds. The starting point is an AI inventory.  

Think of it like a Where's Waldo problem. AI shows up where you'd probably expect it: fraud detection, loan origination, and anti-money laundering (AML) monitoring. But it also shows up where you might not think to look.

Nearly three-fourths of financial organizations are only partially aware of which vendors use AI. The document management system your team relies on daily may have added AI features in the last update. Your core banking platform, HR technology, and compliance monitoring tools are probably using AI in ways that don’t get flagged during procurement. That’s not just a governance gap. It's a visibility gap that makes governance impossible. 

Shadow AI — AI used without approval or oversight from your IT, risk, or compliance teams — adds another risk layer. Employees often reach for general-purpose AI tools to summarize files, draft communications, or research regulatory questions — not out of recklessness, but because they're trying to get their work done efficiently. But when data flows through a consumer-grade platform with no contractual protections and no audit trail, you have exposure that doesn't show up on a risk assessment until something goes wrong. 

Once your inventory is established — including tools used directly by your organization and those deployed by your vendors — risk-tiering is essential. Customer-facing AI that influences lending decisions, generates adverse action notices, or informs credit decisions carries a fundamentally different risk profile than an internal productivity tool. Your governance framework should reflect that distinction, and your policy needs to document the logic behind it. 
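
To make risk-tiering concrete, here is a minimal sketch of how an inventory entry might be tagged and tiered. The field names, tier labels, and tiering criteria are illustrative assumptions, not a prescribed standard; your policy should document its own logic.

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    """One row in an AI inventory (fields are illustrative assumptions)."""
    name: str
    vendor: str
    use_case: str
    customer_facing: bool
    influences_credit_decisions: bool
    it_approved: bool  # False flags potential shadow AI

def assign_risk_tier(entry: AIInventoryEntry) -> str:
    """Hypothetical tiering logic; document your own criteria in policy."""
    if entry.influences_credit_decisions:
        return "high"    # lending or adverse-action impact: full review
    if entry.customer_facing or not entry.it_approved:
        return "medium"  # customer exposure or shadow AI: due diligence
    return "low"         # internal productivity tooling: lighter review

# Example: an unapproved document summarizer lands in the medium tier
tool = AIInventoryEntry("DocSummarizer", "Acme SaaS", "summarize files",
                        customer_facing=False,
                        influences_credit_decisions=False,
                        it_approved=False)
print(assign_risk_tier(tool))  # -> "medium"
```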

Related: 7 Fair Lending Risks You Need to Know 

What to Look For in AI Tools

Not all AI tools are built the same way, and the differences matter in financial services. The most important question to ask any vendor is simple: can your system show its work? 

A general-purpose AI tool is designed to produce fluent, confident output, not verify it. When your compliance team uses one to research a regulatory question, the answer may look authoritative. It may even cite guidance, but does that guidance still exist? Does it reflect recent rule changes? In a regulated environment, that distinction is crucial.  

Purpose-built AI for regulated environments works differently. Here's how the two compare: 

| Category | General-Purpose AI | Purpose-Built AI |
| --- | --- | --- |
| How It Works | Predicts the most statistically likely answer based on broad training data | Draws from curated, current regulatory content specific to your environment |
| Output Verification | Not designed to verify; answers may be inaccurate, outdated, or fabricated | Sources surface with every response so outputs can be evaluated and confirmed |
| Audit Trail | No retrievable record of how an answer was reached | Creates a traceable path from question to answer to human decision |
| Examiner Defensibility | Difficult to explain, verify, or connect to human judgment | Designed to be reviewed, challenged, corrected, and documented |
| Data Handling | Inputs may be used for model training; limited contractual protections | Enterprise-grade data protections, contractual controls, data residency |

The Regulatory Landscape for AI in Financial Services

Right now, there's no single AI rulebook for financial organizations, and that’s part of what makes governance so challenging. But regulatory pressure is coming from many directions, and examiners will have questions regardless of whether a unified standard exists.

Federal Model Risk Guidance

The April 2026 interagency model risk management guidance from the OCC, Federal Reserve, and FDIC replaced SR 11-7, the longstanding 2011 guidance that established the foundational framework for how financial organizations manage model risk. Generative AI and agentic AI are explicitly out of scope, but that doesn’t mean regulators are saying GenAI is safe. They're signaling that they don't yet have a framework, and that financial organizations are expected to maintain their own risk management and governance practices in the interim.  

Supervisory action can still result from unsafe or unsound practices. The absence of specific guidance does not reduce your risk. 

Fair Lending Obligations

Fair lending obligations under ECOA and Reg B apply to credit decisions, and AI doesn't change that. Bias can be introduced at the training data level without anyone intending it, and when an AI model influences a lending decision, adverse action notices must still state the specific reasons for the decision in terms a human can understand and defend. That accountability sits with the organization, regardless of whether a human or an algorithm made the call. 

Related: AI and Regulatory Risks: What FIs Need to Know 

State-Level Requirements

State-level requirements add another layer. Colorado, California, New York, Texas, and Illinois have all passed or proposed AI-specific laws, several of which require impact assessments, bias testing, and consumer disclosures for AI systems, particularly those used in automated decision-making. If your organization operates across state lines, you'll need to navigate this patchwork of requirements.

There's no single rulebook yet, but the expectations are consistent across every framework examiners are referencing: explainability, auditability, and human oversight. Waiting for perfect regulations before building governance isn't a strategy. It's a risk. 

Related: How to Keep Up with State Regulations 

AI Red Flags to Watch

Even well-intentioned AI adoption can go sideways when the wrong signals are ignored. Here’s what to watch for. 

  • Vague answers about data use. Ask vendors whether your organization’s data is used to train their model, who has access to it, and where it's stored. If the answers are unclear, that's not a paperwork gap. It's a due diligence failure. Consumer-grade platforms often retain inputs for model improvement, which becomes a serious problem the moment sensitive regulatory or customer data flows through them. Your vendor's compliance risk is your organization's risk.
  • No audit trail. A polished interface isn't the same as a defensible output. If a tool can't produce a traceable record from question to answer to human decision, it's operating as a black box, regardless of how good the answers look. In a compliance context, if it isn't documented, it didn't happen (a sketch of what such a record can look like follows this list).
  • Overpromising on accuracy. No AI is 100% accurate, and vendors that claim otherwise are worth scrutinizing. Pay attention to the gap between what a vendor's marketing promises and what the technology delivers. That gap has a name: AI washing. Ask how the tool handles uncertainty and whether it flags low-confidence outputs. Hallucinations — confidently stated wrong answers — are a known risk of generative AI.
  • No human oversight built in. The compliance or risk professional is still accountable for the output regardless of how it was generated. A tool designed to eliminate the human review step rather than support it wasn't built for a regulated environment. Ask specifically how the tool keeps humans in the loop — before deployment, during use, and when reviewing output in ongoing monitoring. 
  • No incident response plan. What happens when the model produces a bad output at scale? If a vendor can't point to documented escalation paths, SLAs, and remediation processes, that operational risk transfers to you the moment you deploy it. 
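
To illustrate what a traceable record from question to answer to human decision can look like in practice, here is a minimal sketch of an audit-trail entry. The schema and field names are assumptions for illustration, not any vendor's actual format.

```python
import json
from datetime import datetime, timezone

def log_ai_interaction(question: str, answer: str, sources: list[str],
                       reviewer: str, decision: str) -> str:
    """Hypothetical audit record: question -> answer -> sources -> human decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "answer": answer,
        "sources": sources,          # citations surfaced with the response
        "reviewed_by": reviewer,     # the accountable human stays in the loop
        "human_decision": decision,  # e.g., accepted / corrected / escalated
    }
    return json.dumps(record)  # in practice, persist to an append-only store

entry = log_ai_interaction(
    question="Does Reg B require specific reasons in adverse action notices?",
    answer="Yes; notices must state the specific principal reasons.",
    sources=["12 CFR 1002.9(b)(2)"],
    reviewer="compliance.analyst@example.com",
    decision="accepted",
)
print(entry)
```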

Your AI Action Plan: Where to Start

You don't have to have everything figured out before you begin. The organizations navigating AI well right now started with a clear sequence, not a perfect framework. 

  • This month: Build your inventory. Document every AI tool in use across your organization, including what's embedded in vendor platforms. Survey departments, pull vendor contracts, and search for terms like "machine learning," "automated decisioning," and "artificial intelligence" (a simple keyword-scan sketch follows this list). You can't govern what you haven't found. 
  • This quarter: Get your policy in place. An AI governance policy doesn't need to be lengthy. It needs to be clear about which tools are approved, what data can flow through them, what the review process looks like, and who is accountable. A policy without an enforcement mechanism is just a suggestion. 
  • This quarter: Address your vendor relationships. Update your due diligence process to include AI-specific questions. Build AI clauses into new and renewing contracts, and require vendors to notify you when their AI practices change.  
  • Within 60 days: Assess where you stand. Use the Cyber Risk Institute's free Adoption State Questionnaire to place your organization on the AI maturity spectrum: Initial, Minimal, Evolving, or Embedded. It takes about 30 minutes and maps directly to the frameworks examiners are referencing. 
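
As a starting point for the inventory step above, a simple keyword scan over exported vendor contracts or documentation can surface candidates for human review. This is a minimal sketch that assumes contracts have been exported as plain-text files; the folder path and keyword list are illustrative, and a scan like this supplements, rather than replaces, department surveys and vendor questionnaires.

```python
from pathlib import Path

# Illustrative terms; expand with product names your vendors actually use
AI_KEYWORDS = ["machine learning", "automated decisioning",
               "artificial intelligence", "generative ai",
               "large language model"]

def scan_contracts(folder: str) -> dict[str, list[str]]:
    """Flag text files mentioning AI-related terms for human review."""
    hits: dict[str, list[str]] = {}
    for path in Path(folder).glob("*.txt"):  # assumes .txt exports
        text = path.read_text(errors="ignore").lower()
        matches = [kw for kw in AI_KEYWORDS if kw in text]
        if matches:
            hits[path.name] = matches
    return hits

# Example: flagged files are a starting point, not a complete inventory
for name, terms in scan_contracts("./vendor_contracts").items():
    print(f"{name}: {', '.join(terms)}")
```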

You don't have to be an AI expert. You just need to start asking the right questions. 

Listen to the Ncast: An Executive's Take: The Future of AI in Risk and Compliance 

Not All AI Is Created Equal

Purpose-built AI for compliance works differently from a general-purpose tool. It draws from curated, current regulatory content, and it shows its work. Every response surfaces the sources behind it, creating a retrievable record that a compliance professional can review, challenge, and document. That audit trail is what separates a tool that accelerates research from one that creates new liability every time someone relies on it.

Nquiry gives compliance and risk teams what general-purpose AI can't: fast answers to regulatory questions that are accurate, sourced, and fully auditable. Built specifically for financial organizations, it’s your compliance research partner — so your team spends less time researching and more time acting on what it finds.

Learn more

