
Artificial Intelligence (AI) and Risk Management Controls: How to Protect Your Financial Institution

5 min read
May 11, 2023

Whether it’s a chatbot responding to a customer, a social media post generator, or an advanced algorithm determining the creditworthiness of loan applicants, artificial intelligence (AI) is transforming financial institutions – but it’s also introducing risk.

What began as AI models used for Bank Secrecy Act (BSA) compliance and fraud detection has become an integral tool at many financial institutions. But using AI effectively requires robust risk management controls.

The challenges of AI-based models 

AI can reduce an institution’s workload, but it can also contribute to discrimination.  

For years, regulatory agencies have warned financial institutions about black-box algorithms that make lending decisions no one can clearly justify. They’ve also warned that models based on previous loan decisions may inadvertently promote discrimination if the data used to develop the model was the product of biased decision making.

Today AI’s influence permeates many other areas of financial institutions, including marketing. In some instances, AI decides which consumers are served ads for which products and services, and on which platforms – through everything from AI-operated ad platforms and phone systems to chatbots. When AI decides which consumers are offered which products and services, it affects the likelihood of those people choosing them. (For example: If you only see ads for high-cost credit cards, you might never think to apply for a card with lower rates and fees.)

Related: Fed Strategies for Managing Fair Lending Risks of Digital Redlining 

If AI serves up product and service recommendations that target different protected classes, even inadvertently, the result can be disparate impact. Disparate impact occurs when a seemingly neutral policy or procedure has a disproportionately adverse effect on a protected class of people. Discriminatory intent doesn’t matter with disparate impact: discrimination is discrimination no matter the cause. Regulatory agencies will cite your institution for consumer harm. They don’t care if the robot made you do it.
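One common screen regulators and fair lending analysts use to quantify disparate impact is the “four-fifths” (80%) rule: if one group’s selection rate falls below 80% of the most-favored group’s rate, the outcome deserves scrutiny. Here’s a minimal sketch in Python; the group names and counts are hypothetical illustrations, not real data:

```python
# Minimal sketch of the four-fifths (80%) rule, a common screen for
# disparate impact. Group names and counts are hypothetical examples.

def selection_rate(favorable: int, total: int) -> float:
    """Share of a group that received the favorable outcome (e.g., an offer)."""
    return favorable / total

# Hypothetical outcomes by demographic group.
rates = {
    "group_a": selection_rate(favorable=450, total=500),  # 0.90
    "group_b": selection_rate(favorable=300, total=500),  # 0.60
}

best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "potential disparate impact -- investigate" if ratio < 0.8 else "within threshold"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} ({flag})")
```

A screen like this doesn’t prove discrimination, but it tells you where to look before an examiner does.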

That’s why it's crucial for financial institutions to establish AI risk management controls to ensure compliance and avoid legal and reputational risks. 

Building an AI control environment  

Introducing AI creates a whole new risk environment that needs its own set of controls. The good news is that these efforts don’t have to be led by the IT department. They simply require a risk management mindset.  

Here are the steps to building an effective control environment for AI at a financial institution: 

Determine your institution’s appetite for AI 

How comfortable is your financial institution with AI? It’s the same type of question you’d ask for any risk. Once that question is answered, your financial institution can evaluate its strategic goals and align them with AI's potential benefits while considering the costs and resources required for implementation.  

Assess the risks associated with AI  

Evaluate the risks of each AI application, such as chatbots, credit decisioning engines, and social media management tools. This includes evaluating risks related to data privacy and security, potential biases and ethical concerns, regulatory compliance, and the impact on employees and customers. Institutions should also consider how AI can be integrated into their existing risk management frameworks to ensure a comprehensive approach. Remember when assessing risks that not all AI applications pose the same level of risk to the institution. It’s important at this stage to take a granular approach to AI risk at the application level so that the proper control environment can be established.
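To make that granular, application-level approach concrete, here’s a minimal sketch of how an institution might record risk ratings per AI application. The applications, risk categories, and ratings are hypothetical examples, not a prescribed taxonomy:

```python
# Minimal sketch: rating each AI application across risk categories so
# controls can be tailored per application. All names and ratings are
# hypothetical examples, not a prescribed taxonomy.
from dataclasses import dataclass, field

RATINGS = {"low": 1, "moderate": 2, "high": 3}

@dataclass
class AIApplicationRisk:
    name: str
    ratings: dict = field(default_factory=dict)  # category -> "low"/"moderate"/"high"

    def overall(self) -> str:
        """Overall inherent risk = the worst individual category rating."""
        worst = max(RATINGS[r] for r in self.ratings.values())
        return next(k for k, v in RATINGS.items() if v == worst)

chatbot = AIApplicationRisk("customer service chatbot", {
    "data privacy": "moderate",
    "bias/fair lending": "low",
    "regulatory compliance": "moderate",
})
credit_engine = AIApplicationRisk("credit decisioning engine", {
    "data privacy": "high",
    "bias/fair lending": "high",
    "regulatory compliance": "high",
})

for app in (chatbot, credit_engine):
    print(f"{app.name}: overall inherent risk = {app.overall()}")
```

Rolling the overall rating up from the worst single category is a conservative choice; an institution could instead weight categories to reflect its own risk appetite.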

Establish a tailored control environment  

Create a control environment that is specifically designed for AI applications by developing and implementing AI governance policies and procedures covering areas such as data management, algorithmic transparency, and performance monitoring. Develop controls that match the risk level of each AI application. For example, an AI chatbot for customer queries may require fewer controls than an AI decision-making engine for residential mortgage loans due to the greater potential risk in mortgage lending.
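One simple way to operationalize “controls that match the risk level” is a tier-to-controls mapping. The tiers and control sets below are hypothetical illustrations, not an exhaustive standard:

```python
# Minimal sketch: mapping an application's risk tier to a baseline control
# set. Tiers and controls are hypothetical illustrations, not a standard.
BASELINE_CONTROLS = {
    "low": [
        "documented use case and owner",
        "annual performance review",
    ],
    "moderate": [
        "documented use case and owner",
        "quarterly performance review",
        "human review of outputs on a sample basis",
    ],
    "high": [
        "documented use case and owner",
        "ongoing model validation",
        "human review of every adverse decision",
        "fair lending / disparate impact testing",
        "independent audit",
    ],
}

def required_controls(risk_tier: str) -> list[str]:
    """Return the baseline controls an application of this tier must have."""
    return BASELINE_CONTROLS[risk_tier]

print(required_controls("high"))  # e.g., a mortgage decisioning engine
```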

Monitor and review AI performance 

AI requires human oversight. For example, a person might review all AI-generated social media posts before they can be published.  

Implement regular reviews and audits of AI applications to ensure they are functioning as intended and not creating regulatory or reputational risks. This can include performance tracking, ongoing validation of AI models, and periodic audits to ensure compliance with internal policies and regulatory requirements. Institutions should also establish mechanisms to address any potential issues or concerns arising from AI applications, such as unintended biases or unexpected outcomes. There must be input and output validation to understand how the models behave.
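As one illustration of input and output validation, an institution might check that live inputs stay within the ranges the model was validated on and that outcome rates don’t drift from the validated baseline. The field names, ranges, and thresholds below are hypothetical:

```python
# Minimal sketch of ongoing input/output validation for a deployed model.
# Field names, ranges, baseline, and tolerance are hypothetical examples.

VALIDATED_INPUT_RANGES = {"credit_score": (300, 850), "dti_ratio": (0.0, 0.65)}
BASELINE_APPROVAL_RATE = 0.72   # rate observed during model validation
DRIFT_TOLERANCE = 0.05          # alert if the live rate drifts beyond this

def validate_inputs(record: dict) -> list[str]:
    """Flag input fields outside the ranges the model was validated on."""
    issues = []
    for field_name, (low, high) in VALIDATED_INPUT_RANGES.items():
        value = record.get(field_name)
        if value is None or not (low <= value <= high):
            issues.append(f"{field_name}={value} outside validated range [{low}, {high}]")
    return issues

def check_output_drift(decisions: list[bool]) -> str | None:
    """Compare the live approval rate against the validated baseline."""
    live_rate = sum(decisions) / len(decisions)
    if abs(live_rate - BASELINE_APPROVAL_RATE) > DRIFT_TOLERANCE:
        return f"approval rate {live_rate:.2f} drifted from baseline {BASELINE_APPROVAL_RATE:.2f}"
    return None

print(validate_inputs({"credit_score": 870, "dti_ratio": 0.40}))
print(check_output_drift([True] * 55 + [False] * 45))
```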

Manage third-party AI vendors  

If a financial institution is using AI from a third-party vendor, it's essential to establish robust controls at the vendor level and maintain solid contract management to protect the organization from potential liabilities. This includes conducting thorough due diligence on vendors, negotiating contracts that clearly outline expectations and responsibilities, and implementing ongoing monitoring of vendor performance. The contract should also guarantee access to test results and other model validation documents. 

Related: Regulatory Insight: Artificial Intelligence & Third-Party Risk 

Think about how to increase vendor management controls to proactively identify any anomalies in the AI. How will you know the AI is functioning properly? How will the vendor respond if its AI does something illegal, doesn’t pass regulatory muster, or causes reputational harm? It’s all about having controls that make the vendor responsible for correcting issues and giving you sufficient access to the AI’s decision-making process.  
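One way to know whether a vendor’s AI is functioning properly is to log every decision it returns and flag anomalies, such as adverse decisions with no stated reasons. The response fields in this sketch (“decision”, “reason_codes”) are hypothetical, not a real vendor API:

```python
# Minimal sketch: logging vendor AI decisions and flagging anomalies such
# as unexplained adverse decisions. The response fields ("decision",
# "reason_codes") are hypothetical, not a real vendor API.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("vendor_ai_monitor")

def record_vendor_decision(response: dict) -> None:
    """Persist each vendor decision so it can be audited and explained later."""
    decision = response.get("decision")
    reasons = response.get("reason_codes", [])
    if decision is None:
        log.error("vendor returned no decision: %s", response)
    elif decision == "deny" and not reasons:
        # An unexplained adverse decision is an anomaly worth escalating:
        # adverse action notices require specific reasons.
        log.warning("deny with no reason codes -- escalate to vendor: %s", response)
    else:
        log.info("decision=%s reasons=%s", decision, reasons)

record_vendor_decision({"decision": "deny", "reason_codes": []})
```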

Train compliance and audit teams

Equip compliance monitoring and audit teams with the necessary knowledge and skills to evaluate AI applications effectively. That includes training on AI technologies and their associated risks, as well as fostering a deep understanding of the unique challenges posed by AI in the context of financial services. Institutions should also support ongoing professional development to ensure that compliance and audit teams remain up to date on emerging trends and best practices in AI governance. 

Implement audit programs 

Develop audit programs to assess the output of AI applications and ensure compliance with policies and procedures. These programs should include testing the accuracy and effectiveness of AI models, evaluating the adequacy of controls, and ensuring that AI applications align with the organization's overall risk management strategy. 
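As one example of testing accuracy within an audit program, an auditor might replay a labeled sample of cases through the model and compare the results against a tolerance set in policy. The threshold, toy model, and sample data below are hypothetical:

```python
# Minimal sketch: an audit test replaying labeled cases through a model
# and checking accuracy against a policy threshold. The threshold, toy
# model, and sample are hypothetical examples.
POLICY_ACCURACY_THRESHOLD = 0.95

def audit_model_accuracy(model, labeled_cases: list[tuple[dict, str]]) -> bool:
    """Return True if the model meets the accuracy threshold on the sample."""
    correct = sum(1 for features, expected in labeled_cases
                  if model(features) == expected)
    accuracy = correct / len(labeled_cases)
    print(f"audit sample accuracy: {accuracy:.2%}")
    return accuracy >= POLICY_ACCURACY_THRESHOLD

# Hypothetical stand-in for the AI application under audit.
def toy_model(features: dict) -> str:
    return "approve" if features.get("credit_score", 0) >= 680 else "deny"

sample = [({"credit_score": 720}, "approve"), ({"credit_score": 600}, "deny")]
assert audit_model_accuracy(toy_model, sample)
```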

Keep updating your AI risk management controls 

AI and machine learning are all about continual improvement. Just look at ChatGPT. Since its release in November 2022, it’s seen dramatic improvements in speed, reasoning, and conciseness. With AI rapidly evolving, your institution needs to regularly assess its control environment.  

Once a year is not enough. The closer to real time your assessments are, the better. 

That means you may need to reassess your controls every time your AI vendor sends you a notification about new capabilities. They may represent new or increased risk.

Harnessing the power of AI 

As AI technology continues to evolve and become more prevalent in the financial services industry, it's essential for financial institutions to establish a robust control environment to manage the risks associated with AI applications.  

By implementing tailored controls, monitoring AI performance, and working closely with third-party vendors, financial institutions can mitigate potential regulatory and reputational risks while harnessing the power of AI to improve their operations and customer experiences. 

Want to learn more about risk assessments? Download our free whitepaper, Creating Reliable Risk Assessments. 


 

