
AI Is Already Costing Financial Institutions Millions: Here's How to Manage the Risks

4 min read
May 30, 2024

You may think artificial intelligence (AI) is a development your financial institution can worry about another day, but that’s a mistake. AI risk has already cost financial institutions millions – even institutions that haven’t directly implemented AI technology.

From AI-enabled fraud perpetrated using deepfakes to the regulatory consequences of not adequately overseeing vendors using AI, the risk is real. Fortunately, there are strategies for managing AI risk.

AI is rewriting the bank fraud playbook

As if protecting against check fraud and illegal wire transfers weren’t enough, financial institutions must now worry about AI impersonation scams. AI has given a boost to synthetic fraud, which uses a mix of real and fake credentials. Meanwhile, deepfake technology lets fraudsters manipulate audio and video to imitate other people.

While the code behind deepfake AI is complex, deepfake technology is surprisingly easy to use. Websites allow unsophisticated scammers to impersonate celebrities, heads of state, and business leaders. 

In Hong Kong, an employee transferred over $25 million to fraudsters after attending a video “meeting” with the company’s CFO and colleagues. The problem? The entire meeting was a hoax built from realistic deepfake video and audio. The company didn’t discover the scam until after sending the payment.

A survey by behavioral biometrics company BioCatch found that 51% of financial institutions lost between $5 million and $25 million to AI threats in 2023, and only 3% reported losing nothing. Bankers expect those losses to grow.

Considering that many people still fall for email phishing scams, deepfakes demand our attention – they're phishing emails on overdrive. 

What would happen if one of your employees received an urgent call or request for a video conference from a scammer pretending to be the president of a nearby bank? Do you have employee training and cybersecurity awareness programs to deal with the threat of AI fraud and its potential impact on your bottom line?

Financial institutions need controls to manage this risk. 

Related: Employee Security Awareness Training Best Practices for Financial Institutions

Wells Fargo under fire for AI underwriting discrimination

A recent class-action lawsuit against Wells Fargo contends (among other things) that the bank’s AI-based underwriting system wrongly denied mortgage applications from Black, Hispanic, and Asian borrowers or offered them higher rates than comparable white applicants.

Wells Fargo already paid $3.7 billion in 2022 to settle a consumer compliance suit, on top of a $250 million civil penalty from the Office of the Comptroller of the Currency (OCC) in 2021 for abusive mortgage practices. Now the bank is paying lawyers to defend against a class-action suit that could result in a very expensive settlement.

For years, the Consumer Financial Protection Bureau (CFPB) has cautioned financial institutions against making lending decisions with black-box algorithms that can’t provide a specific justification for denials. AI has only amplified this risk.

Equal Credit Opportunity Act (ECOA) violations expose financial institutions to expensive lawsuits, especially when regulators uncover institution-wide fair lending deficiencies. Relying on AI and other models for credit decisions raises the odds that examiners will discover widespread fair lending violations.

Discrimination isn’t the only risk. As the CFPB points out, ECOA requires creditors to provide Adverse Action Notices (AANs) with specific and accurate reasons for loan denials – something that might be beyond some AI systems’ capabilities.
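To make that requirement concrete, here’s a minimal sketch of one way a lender could pull specific denial reasons out of a simple, interpretable scoring model. The feature names, reason statements, and toy data are all hypothetical, and nothing here substitutes for legal and compliance review – but it shows why interpretable models make AANs tractable in a way black-box systems often don’t.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features and ECOA-style reason statements -- illustrative only.
FEATURES = ["credit_utilization", "delinquencies_24mo", "income_to_debt", "credit_age_years"]
REASONS = {
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "delinquencies_24mo": "Number of recent delinquencies",
    "income_to_debt": "Income insufficient for amount of credit requested",
    "credit_age_years": "Length of credit history",
}

def adverse_action_reasons(model, applicant, top_n=2):
    """Top reasons pushing this applicant's approval odds down.

    For a linear model, each standardized feature contributes
    coefficient * value to the log-odds, so the most negative
    contributions are the most defensible "specific reasons."
    """
    contributions = model.coef_[0] * applicant
    worst = np.argsort(contributions)[:top_n]  # most negative first
    return [REASONS[FEATURES[i]] for i in worst]

# Toy fit on synthetic data so the sketch runs end to end.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(FEATURES)))                    # pretend-standardized features
y = (X @ np.array([-1.5, -1.0, 2.0, 0.5]) > 0).astype(int)   # 1 = approved
model = LogisticRegression().fit(X, y)

denied = X[y == 0][0]
print(adverse_action_reasons(model, denied))
```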

One thing is clear: artificial intelligence will increase fair lending risk. The alibi of “the computers made us do it” will satisfy neither regulators nor lawyers.   

If your institution is using AI underwriting models (or any algorithm), make sure you’re checking your work.
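What does checking your work look like? One common first-pass screen (a red flag, not a legal safe harbor) is the four-fifths rule: compare each group’s approval rate to the most-favored group’s and flag ratios below 0.8. Here’s a minimal sketch with made-up column names and data:

```python
import pandas as pd

def adverse_impact_ratios(decisions, group_col="group", approved_col="approved"):
    """Each group's approval rate divided by the highest group's rate.

    Ratios below 0.8 (the "four-fifths rule") are a common signal to
    dig deeper with formal statistical testing -- not proof of
    discrimination on their own.
    """
    rates = decisions.groupby(group_col)[approved_col].mean()
    return rates / rates.max()

# Made-up decision data for illustration.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})
print(adverse_impact_ratios(df))  # B's ratio: 0.25 / 0.667 = 0.375 -> flag
```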

AI risk lurks in third-party relationships

When the FDIC entered into a consent order with a New Jersey bank last year over fair lending compliance violations, commentators interpreted it as a “shot across the bow” aimed at AI-centered partner banking. While the consent order didn’t come with a financial penalty, it requires the bank to get regulatory approval before offering new fintech products to consumers.

This requirement comes with a big cost. Regulatory approval can take a long time and puts pressure on compliance resources that may already be stretched thin. It could significantly limit the bank’s growth and how it serves its customers, narrowing its strategic options and taking away tools for responding to market needs.

BaaS banks must be diligent about AI risk from fintech partners as those partners incorporate AI features into their products and services. It’s a matter of gaining (and maintaining) a competitive advantage in an increasingly crowded marketplace. Ask if and how your vendors are using AI so your institution can make informed decisions about the risk each partnership presents.

Managing AI risk 

Financial institutions are still grappling with how to manage AI risk effectively. At a minimum, they’ll need the following: 

AI Risk Policies: Financial institutions need AI risk policies covering information security, privacy, and third-party risk management. That includes AI risk assessments – a challenging task because AI risk permeates an institution, touching everything from vendor management and lending compliance to IT security. Make sure you have the tools to assess AI risk, evaluate the control environment, and confirm risk is effectively mitigated. A spreadsheet won’t cut it (see the sketch after this list for the kind of structured record an assessment calls for).

Reinforced Employee Security Training: Financial institutions should invest in training materials and cybersecurity awareness programs, including guidance on identifying phishing attempts, deepfakes, and other AI-related fraud (and spotting customers who may be deepfake victims). Frontline workers are often the first to identify and report scams, so give them a clear channel for reporting potential fraud and an easy place to look up relevant policies and procedures. A dedicated employee engagement platform is one way to do this.

Third-Party Risk Management: Make sure your third-party risk management (TPRM) program assesses AI risk when evaluating potential vendors and fintech partners. You need to understand the risk and ensure there are sufficient controls.

Model Risk Management: Model risk management lets financial institutions implement processes and controls specific to AI risk. Integrate these into your existing enterprise risk management and third-party risk management systems and risk models.

Compliance Management: How does your institution stay informed regarding new regulations? The agencies have demonstrated their commitment to containing the AI threat. FIs will benefit from a solution that helps them navigate AI regulatory guidance. 
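As a concrete illustration of the first item above, here’s a minimal sketch of the kind of structured risk-register record an AI risk assessment might produce. Every field name and value is hypothetical, and a real program would tie entries into your existing enterprise risk taxonomy:

```python
from dataclasses import dataclass, field
from enum import Enum

class Risk(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

@dataclass
class AIRiskEntry:
    """One AI risk register entry -- hypothetical fields for illustration.

    Tracking inherent risk, controls, and residual risk separately is what
    lets an institution show that risk is actually mitigated, not merely
    identified. A spreadsheet can hold these fields; it can't enforce them.
    """
    use_case: str                    # e.g., "fintech partner's underwriting model"
    owner: str                       # accountable business unit
    inherent_risk: Risk              # exposure before controls
    controls: list[str] = field(default_factory=list)
    residual_risk: Risk = Risk.HIGH  # stays HIGH until controls are validated

entry = AIRiskEntry(
    use_case="Deepfake-enabled wire fraud",
    owner="Fraud / BSA team",
    inherent_risk=Risk.HIGH,
    controls=["Callback verification for wires", "Employee deepfake training"],
    residual_risk=Risk.MODERATE,
)
print(entry.residual_risk)
```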

AI risk is not a problem for the future. Financial institutions are already facing monetary repercussions. As artificial intelligence continues to evolve, FIs will need policies, processes, and solutions to confront the challenges (and seize the opportunities) this technological marvel presents.

Want to know more about managing AI risk?

Check out our webinar: “Managing AI Risk: A Primer for Financial Institutions.” 


