
Is Artificial Intelligence the Right Technology for Risk Management?


In the quest to minimize threats and maximize rewards, risk managers have become more reliant on artificial intelligence. AI is increasingly being used to spot patterns and behaviors that could indicate fraud or money laundering and, more controversially, to recognize faces to verify customer identities. Until recently, however, its wider use in managing risk within institutions has been limited.

Now, though, the release of AI chatbots such as ChatGPT – which use “natural language processing” to understand user input and generate text or computer code – looks set to transform risk management functions in financial services firms.

Some experts believe that, in the next decade, artificial intelligence will be used across most areas of risk management in finance: assessing new types of risk, working out how to mitigate them, and automating and accelerating the work of risk managers.

“The genie is out of the bottle,” says Andrew Schwartz, an analyst at Celent, a research and consulting group specializing in financial services technology. More than half of large financial institutions are currently using AI to manage risk, he estimates.

Growing market

Conversational or “generative” AI technologies, such as OpenAI’s ChatGPT or Google’s Bard, can already analyze vast amounts of data in corporate documents, regulatory filings, stock prices, news and social media.

This can help, for example, to improve current credit risk assessment methods or to create more intricate and realistic “stress testing” exercises that simulate how a financial firm might handle adverse market or economic conditions, says Schwartz. “You just have more information, and with more information, there could be a deeper and theoretically better understanding of risk.”
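Stress testing of this kind is, at its core, scenario simulation. The toy sketch below (plain Python with NumPy, no AI involved) shows the mechanical skeleton such exercises build on: draw thousands of hypothetical market shocks, apply them to a portfolio, and read off the tail losses. Every figure and sensitivity here is invented for illustration; what Schwartz describes is generative AI making the scenarios themselves richer and more realistic.

```python
# Toy illustration of a "stress testing" exercise: simulate thousands of
# adverse market scenarios and measure the portfolio losses they produce.
# The shock distributions, exposures and portfolio size are all made up.
import numpy as np

rng = np.random.default_rng(seed=42)

portfolio_value = 100_000_000  # hypothetical $100m book
n_scenarios = 50_000

# Draw equity-return and interest-rate shocks for each scenario (assumed parameters).
equity_shock = rng.normal(loc=-0.02, scale=0.15, size=n_scenarios)  # equity returns
rate_shock = rng.normal(loc=0.005, scale=0.01, size=n_scenarios)    # rate moves

# Simple linear exposure model: 70% equity beta, plus sensitivity to rate rises.
pnl = portfolio_value * (0.7 * equity_shock - 5.0 * rate_shock)

# 99% value-at-risk: the loss exceeded in only 1% of simulated scenarios.
var_99 = -np.percentile(pnl, 1)
print(f"99% one-period VaR: ${var_99:,.0f}")
```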

Sudhir Pai, chief technology and innovation officer for financial services at consulting firm Capgemini, says some financial institutions are in the early stages of using generative AI as a virtual assistant for risk managers.

These assistants collect information on financial markets and investments and can offer advice on strategies to mitigate risk. “[An] AI assistant for a risk manager would allow them to gain new risk insights in a fraction of the time,” he explains.

Financial institutions are typically reluctant to talk about any early use of generative AI for risk management, but Schwartz suggests they may face the critical problem of quality-controlling data to feed into an AI system and removing any false data.

Initially, larger companies could focus on testing generative AI in those areas of risk management where conventional AI is already widely used, such as crime detection, says Maria Teresa Tejada, a partner specializing in risk, regulation and finance at Bain & Co, the global consulting firm.

Generative AI is a “game changer” for financial institutions, she says, because it allows them to capture and analyze large volumes not only of structured data, such as spreadsheets, but also of unstructured data, such as legal contracts and call transcripts.

“Now banks can better manage risk in real time,” says Tejada.

SteelEye, a maker of compliance software for financial institutions, has already tested ChatGPT with five of its customers. It created nine “prompts” for ChatGPT to use when analyzing customers’ text communications for regulatory compliance purposes.

SteelEye copied and pasted the text of customer communications, such as email threads, WhatsApp messages and Bloomberg chats, to see if ChatGPT would identify suspicious communications and flag them for further investigation. For example, it was asked to look for any signs of possible insider trading.

Matt Smith, managing director of SteelEye, says ChatGPT has proven effective in analyzing and identifying suspicious communications for further scrutiny by compliance and risk specialists.

“Something that could take compliance professionals hours to sift through could take [ChatGPT] minutes or seconds,” he notes.
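SteelEye has not published its prompts, but the workflow it describes can be sketched in a few lines: send each communication to a chat-model API with a surveillance instruction and escalate whatever comes back flagged. The Python sketch below uses OpenAI’s chat completions API; the prompt wording, model name and flagging logic are assumptions for illustration, not SteelEye’s actual implementation.

```python
# Minimal sketch of LLM-based communications surveillance: send one message
# to a chat model with a compliance prompt and flag it for human review.
# Prompt text, model choice and output convention are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SURVEILLANCE_PROMPT = (
    "You are a compliance assistant. Review the following communication and "
    "state whether it shows possible signs of insider trading, such as sharing "
    "material non-public information or trading ahead of announcements. "
    "Answer 'FLAG' or 'CLEAR' on the first line, then give a one-sentence reason."
)

def screen_message(text: str) -> tuple[bool, str]:
    """Return (flagged, rationale) for a single communication."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model fits this pattern
        messages=[
            {"role": "system", "content": SURVEILLANCE_PROMPT},
            {"role": "user", "content": text},
        ],
        temperature=0,  # deterministic output suits screening better than creative text
    )
    verdict = response.choices[0].message.content.strip()
    return verdict.upper().startswith("FLAG"), verdict

if __name__ == "__main__":
    flagged, rationale = screen_message(
        "Heads up - buy before Thursday, the results announcement will surprise everyone."
    )
    print("Escalate to compliance:" if flagged else "No action:", rationale)
```

In a real deployment the model would only shortlist messages for human review; that is, in fact, how Smith says SteelEye positions it.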

Accuracy and bias

However, some have expressed concern that ChatGPT, which mines data from sources including Twitter and Reddit, could produce false information and could breach people’s privacy.

Smith counters that ChatGPT is used purely as a tool, and that compliance officers make the final decision on whether to act on the information.

However, there are questions as to whether generative AI is the right technology for highly regulated and inherently cautious risk management departments in financial institutions, where data and complex statistical models need to be carefully validated.

“ChatGPT is not the answer for risk management,” says Moutusi Sau, a financial services analyst at Gartner, a research firm.

One problem, reported by the European Risk Management Council, is that the complexity of ChatGPT and similar AI technologies can make it difficult for financial services firms to explain their systems’ decisions. Systems whose outputs cannot be explained are known as “black boxes” in AI parlance.

Developers and users of risk management AI need to be very clear about the assumptions, pain points and limitations of the data, the council suggests.

Regulatory issues

An additional problem is that regulatory approaches to artificial intelligence differ around the world. In the United States, the White House recently met the heads of technology companies to discuss the use of artificial intelligence before formulating guidelines. The EU and China, however, already have draft measures to regulate artificial intelligence applications. In the UK, meanwhile, the competition watchdog has begun a review of the AI market.

So far, discussion of AI regulation has focused on individuals’ rights to privacy and protection from discrimination. However, a different approach may be needed to regulate AI in risk management: one that translates general principles into detailed guidance for risk managers.

“My feeling is that regulators are going to work with what they have,” says Zayed Al Jamil, technology group partner at law firm Clifford Chance.

“They won’t say [AI] is forbidden [for risk management] or be extraordinarily prescriptive . . . I think they will update existing regulations to take AI into account,” he says.

Despite these regulatory issues and concerns about the reliability of generative AI in risk management in financial services, many in the industry believe it will become much more common. Some suggest it has the potential to improve many aspects of risk management simply by automating data analysis.

Celent’s Schwartz remains “bullish” on the potential for AI in financial institutions. “Mid-term, I think we’re going to see tremendous growth in what [AI tools] can do,” he says.

