
Don’t Fear Artificial Intelligence: A Primer for AI in Risk & Compliance Management (Part 1)

5 min read
Nov 21, 2019

Does the phrase “AI in risk and compliance management” conjure up images of robots taking over the world—or worse yet, replacing you at work?

From The Terminator to HAL in 2001: A Space Odyssey (and a spate of others like Blade Runner and The Matrix), countless films have imagined a world where artificial intelligence becomes so advanced that it threatens mankind’s very existence.

The good news is that the AI Hollywood has dreamed up is just that—imaginary. Comparing the AI capabilities of The Terminator to what AI can do today is like comparing the work of Indiana Jones with that of real-life archeologists. It’s all fantasy.

Debunking AI Myths

Artificial intelligence intimidates a lot of bankers, and not just because they watched I, Robot. They don’t entirely understand what AI is, what it could mean for their institutions, or how it will impact their jobs.

In this first part of our new, occasional series on artificial intelligence, we’re going to tackle some common AI misperceptions. We’ll look at AI in risk and compliance management, what AI can do, what it can’t, and why it isn’t going to replace you at work.

The Terminator Isn’t Going to Happen

Artificial intelligence, or the development of computers able to perform tasks that typically require human intelligence, has been around since the 1950s. It experienced a heyday between the 1950s and 1970s as computers became faster and less expensive, allowing researchers to explore the potential for machine learning. It’s now enjoying another renaissance thanks to Big Data.

Even though we’re 70 years into exploring artificial intelligence, AI is really still in its infancy. One of the most common tropes in sci-fi about AI is “the singularity,” the idea that AI will improve so much, learning at an exponential rate, that it eventually exceeds human cognition. The notion is that this super-smart AI will be able to improve its own algorithms, eclipsing humanity in milliseconds. The truth is that we are nowhere close to a singularity-type event. It’s at least 30 to 50 years away, and it may never happen. The idea that we can create a machine intelligent enough to recreate itself is akin to us creating life. Forget how smart machines are; humanity may never be that smart.

Another common misperception is that there are general-purpose AI machines that think like you or me and are capable of solving any problem. Today’s AI machines have a narrow purpose: they are designed for specific tasks. For example, you could design a task-specific AI that classifies a contract based on the amount of risk it presents or determines whether a control applies to a specific regulation. It’s the same with Watson, the question-answering computer that beat Jeopardy! champion Ken Jennings. While its capabilities have since been expanded, Watson was originally designed to parse natural language and learn to accurately answer questions on the trivia show. It’s going to be a long time before a machine can tackle broad problems.

As for AIs that can think and feel like a human, they are even further off. An AI can only do what it’s programmed to do, and if we don’t know how something works, we can’t program a machine to do it. The truth is, we don’t really understand how human consciousness works, so we have no idea how to program a machine with consciousness.

Related: 5 factors to consider when evaluating AI/machine learning

What Can AI Do?

The term “machine learning” is often paired with AI. What is machine learning? It doesn’t mean the machine learns to feel, like Janet on The Good Place after multiple reboots. Machine learning means a machine looks for patterns in data and uses them to make better decisions.

For example, a machine can be told that a consumer is a credit risk. It then collects and tags data to find similar consumers, on the assumption that those who share similar patterns or characteristics are also likely to be credit risks. It builds an algorithm based on that data, and the more examples of “risky” consumers it’s given, the more sophisticated that algorithm becomes.

This is also known as “supervised learning.” The machine is given data that’s tagged or labeled (in this case “risky” consumers are identified) and uses this experience to uncover patterns and aid in decision making.
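
To make that concrete, here is a minimal sketch of supervised learning in Python, using scikit-learn. The consumer features, records, and labels are hypothetical and purely illustrative, not an actual credit-risk model:

```python
# A minimal sketch of supervised learning: a classifier trained on consumer
# records that have already been labeled "risky" or "not risky."
# All features and data below are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Each row: [debt_to_income_ratio, missed_payments_last_year, years_of_credit_history]
labeled_consumers = [
    [0.45, 3, 2],    # labeled "risky"
    [0.60, 5, 1],    # labeled "risky"
    [0.10, 0, 15],   # labeled "not risky"
    [0.20, 1, 8],    # labeled "not risky"
]
labels = ["risky", "risky", "not risky", "not risky"]

# The model searches for patterns that separate the two labels.
model = DecisionTreeClassifier()
model.fit(labeled_consumers, labels)

# Classify a new, unlabeled consumer based on the learned patterns.
new_consumer = [[0.50, 4, 3]]
print(model.predict(new_consumer))  # expected: ['risky'], given the patterns above
```

The same idea scales up: the more labeled examples and the richer the features, the more refined the resulting model becomes.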

AI Won’t Make Your Job Obsolete

Another thing AI in risk and compliance management won’t do: Replace you at work.

There are things that humans are really good at. There are things that artificial intelligence is really good at. These are not necessarily the same things.

Computers are amazing when it comes to crunching massive amounts of data. They can find patterns and trends that a human with finite time is unlikely to find. For example, NASA used a network of 80 personal computers equipped with AI to develop a small yet highly effective antenna for space satellites. Researchers fed it a few basic antenna designs and gave it performance parameters for the antenna. In just 10 hours, the computers assessed and adapted millions of iterations to find a design with peak performance. The design it settled on wasn’t sleek and streamlined; it looked like a couple of untwisted paper clips. It’s a random-looking yet highly specific design that a human would have taken much longer to develop—if we’d thought of it at all.

But computers don’t win at everything.

Humans are superior when it comes to interpreting data to understand the why behind the trends. Even a 5-year-old can make connections that a machine can’t. Consider this passage:

Sally got a new cat. The cat was outside. Sally put on her shoes.

Why did Sally put on her shoes? You probably inferred that Sally wanted to go outside to be with her cat. That’s a basic assumption, yet one that a machine can’t easily make. It’s a job for humans.

Related: Artificial Intelligence (AI) and Risk Management Controls: How to Protect Your Financial Institution

AI & You: Perfect Together

The future of banking will require some combination of humans and machines working together. A smart machine can only get you so far, but a smart machine working in tandem with a smart human becomes smarter than either of them working alone.

Consider AI in risk management. A machine might tell you it is 80 percent confident in a credit score. A human risk manager may decide they’re only comfortable relying on the AI’s assessment when the confidence score is 92 percent or higher. If the score falls between 50 percent and 92 percent, the decision goes to a human for review. Scores below 50 percent are denied.
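
Here is a minimal sketch, in Python, of the routing rule that scenario describes. The thresholds mirror the hypothetical numbers above, and the function name is purely illustrative:

```python
# A minimal sketch of confidence-based routing, using the hypothetical
# thresholds from the scenario above: auto-accept at 92% confidence or more,
# human review between 50% and 92%, and denial below 50%.
def route_credit_decision(model_confidence: float) -> str:
    """Decide how to handle an AI credit assessment based on its confidence score."""
    if model_confidence >= 0.92:
        return "auto-accept"    # slam-dunk decision: trust the machine
    elif model_confidence >= 0.50:
        return "human review"   # ambiguous: send to a risk manager
    else:
        return "deny"           # low confidence: decline

print(route_credit_decision(0.95))  # auto-accept
print(route_credit_decision(0.80))  # human review (the 80 percent example above)
print(route_credit_decision(0.30))  # deny
```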

In this scenario, the slam-dunk decisions are made by the machine, freeing up employees to focus on decisions that require greater analysis and understanding. Both the machine and the human are deployed in a way that makes the most of their innate talents.

Now you know why you have nothing to fear when it comes to AI. You can even look forward to a future where it makes your job easier, reducing grunt work and allowing you and your staff to focus on more complicated decisions.

In our next installment, we’ll talk about the way AI impacts your life today, how financial institutions are using AI, and what we can expect in the near future.

 

Related: Creating Reliable Risk Assessments

