
The Risks of AI in Banking

6 min read
Oct 17, 2023

From chatbots assisting banking consumers to machines preventing fraudulent transactions, AI appears poised to transform the financial industry. Using large and complex data sets, financial institutions (FIs) can leverage AI to develop more refined risk analyses and selectively target consumers with unique product and service offerings.

But with the potential benefits of artificial intelligence, financial institutions cannot afford to ignore the risks of AI in banking.

According to a recent report from CCG Catalyst Consulting Group, banking leaders rank AI, banking-as-a-service (BaaS), and digital currencies as the top three technological innovations that will reshape the landscape for financial institutions in the coming decade. However, implementing AI in the banking sector requires human oversight and a thoughtful risk-based approach.

While community banks and credit unions will not halt the pace of progress, they must fully understand the risks of AI in banking and address them with eyes open.

This article will identify AI risks for financial institutions, discuss the regulatory expectations for AI’s safe and sound use, and offer practical suggestions for navigating our brave new world of machine learning.

Related: AI Is Already Costing Financial Institutions Millions: Here's How to Manage the Risk

Best practices for an AI risk analysis

What should your financial institution consider in crafting an AI risk analysis?

Your AI risk analysis should begin by thoroughly appreciating and addressing existing data and privacy laws. Enacted in 1999, the Gramm-Leach-Bliley Act (GLBA) mandated the protection of nonpublic personal information, including consumers' Social Security numbers, credit and income history, addresses, and other potentially sensitive data.

Subsequent data privacy laws, such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), have further expanded the scope of data privacy, giving consumers the right to opt out of companies selling their private information and empowering them to request the deletion of personal data.

AI use cases for financial institutions run into problems regarding data protection and privacy. To understand these privacy risks, let’s examine Neural Language Models (NLMs), which include the popular AI tool ChatGPT.

NLMs offer FIs a potential efficiency: automating manual processes and enhancing the consumer banking experience. NLMs take many inputs – for instance, an institution’s entire loan portfolio – and generate outputs based on them.

If an institution entered the information from its loan portfolio into an NLM, including personal information about applicants, AI-powered chatbots could answer consumers' queries regarding their loans with greater precision and accuracy.

Consumers would be less likely to speak with a representative about the status of their loan, making employees more productive. AI-powered chatbots could also offer more personalized service.

However, financial institutions must be cautious when uploading information onto an AI platform. Compliance with consumer privacy and data protection laws is a significant risk FIs face in using AI.

Privacy risks and considerations for financial institutions using AI

Before uploading consumer or proprietary data into an NLM, financial institutions must consider the following:

  • Data Storage: How is your sensitive data being stored? Is your NLM secure? Is your data protected behind a firewall? 
  • Model Learning: An AI system’s outputs are only as good as its inputs. Garbage in, garbage out is the saying in artificial intelligence. NLMs “learn” based on the inputs people feed them. (Despite what some conspiracy theorists may think, AI is not self-aware. We’re still a very long way from Skynet.) Your financial institution requires processes and internal controls designed and implemented by humans to ensure that your inputs do not generate false and misleading outputs. 
  • Data Leakage: Uploading private and sensitive data into an NLM carries the real risk that this data will leak to the public. We have already seen examples of this, which we address below. Your financial institution must ensure that legally protected consumer information and other private data do not become public. 
  • Misuse of Data: Preventing data leakage ensures that private information doesn’t fall into the hands of bad actors. Just how secure are current AI platforms? ChatGPT asserts it doesn’t store private human-machine conversations, but we must take these claims with a grain of salt. The value of financial data makes it an enticing target for criminals – and the platforms themselves – to misuse.

For now, financial institutions' best bet is to avoid uploading sensitive or legally protected data onto NLM platforms. AI tools are still in their infancy, and the risk of data falling into the wrong hands is too high.
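
As a concrete illustration, here is a minimal sketch of the kind of pre-processing control an institution might place in front of any NLM integration: scrubbing obvious personal identifiers before a prompt ever leaves the institution’s environment. The patterns and function names below are hypothetical, and a production deployment would rely on a vetted data loss prevention tool rather than hand-rolled regular expressions.

```python
import re

# Hypothetical patterns for illustration only; a real control would use a
# vetted data loss prevention (DLP) tool and a much broader set of rules.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT_NUMBER": re.compile(r"\b\d{10,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace recognizable identifiers with placeholder tokens before the
    text is sent to any external NLM platform."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Borrower SSN 123-45-6789 asked about the status of loan 1234567890123."
print(redact(prompt))
# Borrower SSN [SSN REDACTED] asked about the status of loan [ACCOUNT_NUMBER REDACTED].
```

Even with a scrub like this in place, the safest posture today remains the one described above: keep legally protected data off shared NLM platforms entirely.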

Samsung learned this lesson the hard way. Bloomberg reported that the company banned employees from using ChatGPT after an engineer accidentally leaked internal source code by uploading it to the platform, where it could have become publicly available.

While we don’t know when enterprise-grade artificial intelligence tools will be available to financial institutions, best practices in managing AI risk suggest that FIs will require on-premise solutions or platforms that rely on private, not shared, clouds. Open API platforms such as ChatGPT pose an excessive risk of data leakage and misuse.

The regulations and laws governing data privacy protections and security will play a prominent role in AI adoption for financial institutions, as will fair lending compliance laws and regulations.

Regulatory expectations for fair lending compliance with AI

AI models will force compliance officers to rethink their approach to fair lending. For years, regulators have cautioned that black-box, machine-generated algorithms cannot be the sole basis financial institutions use to justify lending decisions.

Compliance with HMDA, CRA, and, soon, Section 1071 requires that financial institutions rigorously evaluate their lending processes and outcomes for disparate impact, regardless of what their automated valuation models (AVMs) or artificial intelligence tools tell them.
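
To make that expectation concrete, the sketch below shows one simple outcome test – the familiar four-fifths (80%) rule applied to approval rates – in Python. The field names, groups, and threshold are illustrative, and a genuine fair lending analysis would be far more rigorous, typically regression-based and controlling for legitimate credit factors.

```python
def approval_rate(decisions: list[dict], group: str) -> float:
    """Share of applications from a group that were approved."""
    apps = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in apps) / len(apps)

def disparate_impact_ratio(decisions: list[dict], protected: str, control: str) -> float:
    """Protected group's approval rate divided by the control group's.
    A ratio below 0.80 is a common, though not dispositive, red flag."""
    return approval_rate(decisions, protected) / approval_rate(decisions, control)

# Illustrative data only – a real analysis would pull from HMDA/LOS records.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

ratio = disparate_impact_ratio(decisions, protected="B", control="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 here – worth investigating
```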

Related: Artificial Intelligence and Risk Management Controls: How to Protect Your Financial Institution

We’ve already had a preview of the compliance headaches that personalized and targeted algorithms can cause, in the form of financial institutions' marketing across social media channels. FIs have known about the risks of redlining in marketing for some time.

“Banks benefit by reviewing their marketing materials to understand if certain populations or geographies in a market area may potentially be excluded,” according to the FDIC.

With print marketing within a financial institution’s reasonably expected market area (REMA), it’s easy enough for FIs to review materials and outreach to address possible fair lending compliance issues. Digital marketing on platforms such as Facebook and Google changed the game because institutions have little control over which consumers the platforms’ algorithms target.

AI tools will only accelerate fair lending compliance risks because, as it turns out, machines can be every bit as biased as the humans and data that train them. Most lending software for home mortgages heavily weights an applicant’s FICO score. However, FICO scores often do not account for on-time rent, utility, and cellphone bill payments, which can adversely impact prohibited basis groups.

Researchers have identified even stronger bias across many newer AI applications. As agencies such as the CFPB look to a future of more advanced credit algorithms powered by AI, they want financial institutions to understand that a “the machine made us do it” excuse does not relieve them of the responsibility of adhering to applicable fair lending laws.

Podcast: Financial Inclusion Isn’t Just Checking a Box

Further thoughts on AI risks in banking

While data security and compliance risk may be the most important considerations for financial institutions, the risks of AI in banking do not stop there.

Like other professions, many employees at banks and credit unions fear that AI and automation will take their jobs. We understand and empathize with these fears. At the same time, it’s essential to understand the limitations of artificial intelligence.

Implementing AI solutions at financial institutions demands robust controls and human oversight. Community banks and credit unions considering using AI cannot expect these tools to manage themselves. Employees will play a decisive role in directing them.

Other risks of AI include:

Interoperability: FIs must ensure that future AI tools integrate with existing systems and processes. Avoiding technological silos should be top of mind for financial institutions considering AI implementation.

Intellectual Property Laws: NLMs face a significant hurdle regarding the use of material protected under U.S. copyright law. The New York Times and OpenAI may be headed to court to resolve their dispute over whether the tech company needs to pay the newspaper for “incorporating” NYT stories into its generated outputs. Understanding that AI outputs may incorporate copyrighted material helps financial institutions avoid potentially costly legal action.

Scalability: FIs must be aware that the AI tools they adopt today may not keep up as their data volumes grow.

Reputational Risks: We cautioned financial institutions to avoid uploading sensitive and legally protected data to open API platforms for the time being due to the risk of that data falling into the wrong hands. One misstep with AI tools could cause a PR nightmare for an FI.

Overly Trusting the Machines: Regulators will not excuse compliance violations simply because an algorithm directed a financial institution to underwrite loans for certain populations. Humans will always be necessary for validating AI outputs.

Compliance and risk officers, in particular, will remain integral to validating and managing AI outputs.
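
One way to make that oversight concrete is to treat every model output as a recommendation that must pass through a human gate before anything is booked or communicated. The sketch below is a hypothetical routing rule, not a prescribed control; the threshold and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ModelRecommendation:
    applicant_id: str
    decision: str      # "approve" or "deny"
    confidence: float  # model-reported score between 0.0 and 1.0

# Illustrative threshold; in practice it would be set through model validation
# and the institution's risk appetite.
REVIEW_THRESHOLD = 0.90

def route(rec: ModelRecommendation) -> str:
    """Never let the model act alone: every denial and every low-confidence
    recommendation goes to a human underwriter or compliance reviewer."""
    if rec.decision == "deny" or rec.confidence < REVIEW_THRESHOLD:
        return "human_review"
    return "human_spot_check"  # even high-confidence approvals get sampled

print(route(ModelRecommendation("app-1042", "deny", 0.97)))     # human_review
print(route(ModelRecommendation("app-1043", "approve", 0.95)))  # human_spot_check
```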

It is difficult to predict how quickly AI will advance and its future value for financial institutions. However, FIs need to understand the risks of AI before adopting these tools. Rushing to upload sensitive consumer data into ChatGPT or any other NLM tool without implementing the proper controls is a terrible idea. Understanding and managing the risks of this technological marvel enables financial institutions to take full advantage of its rewards.

 

Want a primer on artificial intelligence in banking?
Check out our webinar on Managing AI Risk.


