Digital & Resilience

BIS warns supervisors to wake up to AI risks

By Farah Khalique

Artificial intelligence (AI) is rapidly transforming the global financial services industry. According to the Alan Turing Institute, the UK’s national hub for data science and artificial intelligence, AI, a group of related technologies that includes machine learning (ML) and deep learning, has the potential to disrupt and refine the existing financial services industry.

Management consultancy firm McKinsey goes so far as to say that “to remain competitive, incumbent banks must become ‘AI-first’ in vision and execution”. Banks are doing just that. A survey conducted by the Economist Intelligence Unit, which analyses the sentiment of business executives in financial institutions across the globe, found that 86% of respondents plan to increase AI-related technology investment by 2025.

Awareness of AI risks

However, this shift poses challenges. The Financial Stability Institute (FSI), a body within the Bank for International Settlements (BIS) that helps supervisors strengthen their financial systems, describes AI as having the potential to significantly improve delivery of financial services, but warns that it also brings risks that financial sector supervisors must grapple with. The FSI is now pushing to develop international standards. 

It put out a paper in August, ‘Humans keeping AI in check – emerging regulatory expectations in the financial sector’. The paper says: “Given emerging common themes on AI governance in the financial sector, there seems to be scope for financial standard-setting bodies to develop international guidance or standards in this area.”

The FSI uses a commonly quoted 2017 definition of AI by the Financial Stability Board (FSB), which states: “The application of computational tools to address tasks traditionally requiring human sophistication is broadly termed ‘artificial intelligence’”. Machine learning can be defined as “a method of designing a sequence of actions to solve a problem, known as algorithms, which optimise automatically through experience and with limited or no human intervention”.

The skyrocketing volume of digital financial transactions around the world means that AI and ML are, by necessity, playing a greater role in enabling and managing them. From trading stocks to assessing mortgage applications, these technologies increasingly underpin providers’ core operations. They are being applied in tandem with human expertise – with AI doing the heavy lifting and deferring to humans when the algorithms are less confident – but this is a fine balance to strike.
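That division of labour can be pictured with a simple sketch. The short Python example below is a hypothetical illustration of confidence-based routing, in which the model acts alone when it is confident and hands the case to a person when it is not; the threshold, field names and toy scoring function are assumptions made for illustration, not details of any system mentioned in this article.

```python
# Hypothetical sketch of confidence-based routing between a model and a
# human reviewer. The 0.90 threshold and all names are illustrative.
from dataclasses import dataclass
from typing import Callable

CONFIDENCE_THRESHOLD = 0.90  # below this, the case is escalated to a person

@dataclass
class Decision:
    approved: bool
    decided_by: str   # "model" or "human"
    confidence: float

def score_application(features: dict) -> tuple[bool, float]:
    """Stand-in for a trained model; returns (approve?, confidence)."""
    # A real system would call a fitted classifier here.
    risk = features.get("risk_score", 0.5)
    return risk < 0.4, abs(risk - 0.5) * 2  # toy confidence measure

def decide(features: dict, human_review: Callable[[dict], bool]) -> Decision:
    approve, confidence = score_application(features)
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(approve, "model", confidence)
    # Low-confidence cases are deferred to a human, as described above.
    return Decision(human_review(features), "human", confidence)
```

Where the threshold sits is exactly the balance the paragraph above describes: set it too low and the model decides cases it should not; set it too high and humans are swamped with routine sign-offs.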

In the financial sector, some degree of human intervention will often be necessary to ensure that applicable regulatory requirements are met, argue lawyers at Linklaters. The law firm put out a report on AI in financial services in September, ‘Artificial Intelligence in Financial Services: Managing machines in an evolving legal landscape 2.0’.

It found that regulators across regions and sectors – including finance – are taking different approaches, and that certain areas of law, such as data protection law and competition law, already take AI into account. The paper concluded: “AI is a constantly evolving disruptive technology posing novel ethical challenges and as a result, law and regulation have struggled to keep up with it.”

There are currently no international regulatory standards or guidance specifically on AI for the financial sector. Therefore, the FSI advises supervisors that existing international standards, guidance and national laws can be applied or used as starting points in dealing with governance issues associated with AI models. 

How AI is used

Risk managers were among the earliest adopters of AI, using tools that monitor, detect and manage risks including operational, market, credit and regulatory risk. Insurers use AI to improve the risk sensitivity of pricing as well as claims management, for example by streamlining payouts triggered by real-world events. Customer onboarding and engagement providers increasingly use AI to verify customers and to talk to them via chatbots. AI has also been a game-changer for fraud detection.

With incidents of identity fraud rising significantly, AI-based technologies can spot attempts to defraud businesses at pace and scale and adapt to ever more complex attempts. Mohan Mahadevan, senior vice president in Applied Science at ID verification platform Onfido, says: “AI is also being used offensively by fraudsters via techniques such as deepfakes, which adds to the problem and means AI is increasingly necessary to detect these complex attacks.” 

False positive alarms are a recognised problem for fraud detectors, with an industry average rate of 95%, says Imam Hoque, chief product officer at London-based start-up Quantexa. AI-driven fraud detection can reduce false positives by more than 75%, he says, generating more meaningful alerts for financial investigations and cutting investigation times by up to 80%.
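Mr Hoque’s percentages can be put into rough absolute terms. The back-of-the-envelope calculation below assumes a volume of 10,000 alerts purely for illustration; only the 95% false-positive rate and the 75% reduction come from the figures quoted above.

```python
# Illustrative arithmetic only: the 10,000-alert volume is an assumption.
alerts = 10_000
false_positive_rate = 0.95   # industry average cited above
reduction = 0.75             # improvement claimed for AI-driven detection

false_positives_before = alerts * false_positive_rate              # 9,500
false_positives_after = false_positives_before * (1 - reduction)   # 2,375
genuine_alerts = alerts - false_positives_before                   # 500

print(f"Before: {false_positives_before:,.0f} false alerts around "
      f"{genuine_alerts:,.0f} genuine ones")
print(f"After:  {false_positives_after:,.0f} false alerts to investigate")
```

On those assumed numbers, investigators would face roughly 2,400 dead ends instead of 9,500, which is what turns an alert queue into something a human team can realistically work through.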

“Existing traditional monitoring systems are internal looking, which only look at transactions, behaviors and individuals/businesses in isolation, resulting in missed risk,” he says. “Enhanced AI and machine learning in the form of contextual decision intelligence software is a developing area, proven to have the most potential for growth in the financial services.”

Human interaction is still essential when using AI, however. “Given the importance of bank accounts and other financial services to our everyday lives, the human role is vital in complementing AI by either following up with a customer to ascertain whether a transaction is legitimate or providing a second opinion if an algorithm has suspicions about fraudulent activity but is not 100% certain,” says Mr Mahadevan. 

Mechanisms for human intervention can take a number of forms (the first two are sketched in the example after this list): 

• human-in-the-loop: provides for human sign-off on every decision; 

• human-on-the-loop: provides for human intervention during the design phase and in the monitoring of the system’s operation; and

• human-in-command: provides for a human to oversee the overall activity of the system and decide if and how to use the system for a particular set of decisions.
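The first two of these mechanisms can be sketched in a few lines of code. The example below is hypothetical and is not drawn from the FSI paper or any vendor system: human-in-the-loop wraps each individual decision in a sign-off, while human-on-the-loop monitors aggregate behaviour and flags when people need to step in.

```python
# Hypothetical sketch of human-in-the-loop vs human-on-the-loop oversight.
# All names and thresholds are illustrative assumptions.
from typing import Callable

def human_in_the_loop(model_decision: bool,
                      reviewer: Callable[[bool], bool]) -> bool:
    """Every individual decision requires explicit human sign-off."""
    return reviewer(model_decision)

class HumanOnTheLoop:
    """Humans shape the design and monitor aggregate behaviour, stepping in
    when the system drifts rather than approving each decision."""

    def __init__(self, alert_threshold: float = 0.10):
        self.alert_threshold = alert_threshold
        self.decisions = 0
        self.overturned = 0

    def record(self, model_decision: bool, later_overturned: bool) -> None:
        self.decisions += 1
        self.overturned += int(later_overturned)

    def needs_intervention(self) -> bool:
        # Flag for review (a human-in-command call on whether and how to
        # keep using the system) if too many decisions are later overturned.
        if self.decisions == 0:
            return False
        return self.overturned / self.decisions > self.alert_threshold
```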

Traders have long used rules-based algorithms and newer techniques including ‘algowheels’ that select between alternative trading strategies. Robo-advisers use rules-based algorithms but often create outputs that inform decisions ultimately made by humans. Asset managers also use AI techniques to support portfolio management, historically analysing past performance data but increasingly using other data sources and techniques. 
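An algo wheel, in its simplest form, routes order flow across competing strategies in proportion to how well each has performed. The sketch below is a hypothetical illustration; the strategy names and the weighting rule are assumptions, not a description of any production system.

```python
# Hypothetical 'algo wheel': allocate orders across execution strategies
# according to measured performance. Names and weights are illustrative.
import random

class AlgoWheel:
    def __init__(self, strategies):
        # Start with equal weights; better-performing strategies earn more flow.
        self.weights = {name: 1.0 for name in strategies}

    def pick(self) -> str:
        names = list(self.weights)
        return random.choices(names, weights=[self.weights[n] for n in names])[0]

    def record_result(self, name: str, slippage_bps: float) -> None:
        # Lower slippage is better; nudge the strategy's weight accordingly.
        self.weights[name] = max(0.1, self.weights[name] * (1.0 - 0.01 * slippage_bps))

wheel = AlgoWheel(["vwap", "twap", "liquidity_seeking"])
chosen = wheel.pick()
wheel.record_result(chosen, slippage_bps=2.5)
```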

Asset manager Clare Flynn Levy founded Essentia Analytics as a way to apply next-generation technology and behavioural science to improving investor performance, after markets tanked in the wake of 9/11 and the dotcom crash of 2000-01. “The obvious use of AI is to scan the infinite amount of information out there and make sense of it,” she says. “But AI can also be used to glean insight from smaller datasets – in this case, every trade that a fund manager does – which will be thousands or maybe hundreds of thousands, but not billions.

“That’s still enough. If somebody can give us at least 2000 trades we can find statistically significant patterns using machine learning techniques. ML cuts to the chase – it [crunches] all that data and lets us say to the investor: ‘These are the three things that matter in this data,’ and tells them what to look for.”

Potential AI risks

The FSI acknowledges that AI and ML have significant potential to improve the delivery of financial services and operational and risk management processes. It says: “Technology is now part and parcel of financial services and there is no question that it will continue to drive profound changes for consumers and financial institutions.”

Nevertheless, it warns that the technology can also introduce or worsen risk exposures. Unintended bias or discrimination against certain groups of consumers can creep in. This is a problem for prudential supervisors when such risks translate into financial exposures for firms or if they give rise to large-scale operational risks, including cyber risk and reputational risk, cautions the FSI. AI and ML experts say that firms must protect their models because these are often trained on sensitive or regulated data, and any vulnerability of the model itself is a potential liability. 

Ellison Anne Williams, founder and chief executive officer at cryptography firm, Enveil, says: “Models can be reverse engineered to extract information about the organisation, including the data on which the model was trained, which may contain personally identifiable information, intellectual property or other sensitive material that could damage an organisation if exposed.”

Other risks identified by the FSI include prudential risks that can arise from large-scale underpricing of financial products or systematic errors in underwriting new financial consumers. It says: “Ultimately, safeguards must be in place to protect consumers’ interests and to maintain the safety and soundness of financial institutions.”

Financial regulators are starting to put in place or update specific regulatory frameworks on AI governance, as more financial institutions use AI. 

The EU has put forward a legislative proposal to harmonise AI rules. It identifies prohibited practices that contravene EU values, such as AI systems that manipulate people through subliminal techniques beyond their consciousness, that enable general-purpose social scoring by public authorities, or that use remote biometric identification systems in publicly accessible spaces for law enforcement – except in certain limited situations. 

More recently, several financial authorities have initiated development of similar frameworks for the financial sector. Some common themes emerge: general guiding principles cover reliability, accountability, transparency, fairness and ethics, while others relate to data privacy, third-party dependency and operational resilience.

Yet most regulatory frameworks are still in the early stages of development, and range from application of existing principles-based corporate governance requirements in an AI context to practical non-binding supervisory guides on how to manage AI governance risks. The FSI says that, while such high-level principles are useful in providing a broad indication of what firms should consider when using AI technologies, there are growing calls for financial regulators to provide more concrete practical guidance. Regulators could compile emerging industry best practices on AI governance, where available, for each of these generally accepted principles, suggests the FSI.

The BIS body delivers a simple message to supervisors: the scale and speed of AI adoption warrants special regulatory attention. “Sound AI governance frameworks are increasingly important and financial innovation should not compromise the core mandates of financial sector supervisors,” it says.

More guidance needed

AI practitioners broadly agree that clearer guidelines are needed to keep up with the pace of change. Yin Lu is global head of product in the AI research and development lab at regtech firm Cube. She concurs that regulators should provide more detail about best practices as they emerge. 

“Given how complex the high-level principles are – reliability, accountability, transparency, fairness and ethics – there is certainly no ‘out of the box’ solution,” she says. “We need to share examples of how the principles are being met in specific contexts, for specific use cases in specific domains, from mortgages to chatbots. And it’s very important to rank these use cases according to their impact, as not all require the same level of governance.”

AI and ML have helped fuel the switch to mobile banking, but this move from face-to-face to faceless banking brings responsibilities too, believes Michael Magrath, director of global regulations and standards at cyber security firm OneSpan. 

“It’s important that we continuously review and improve AI-driven decision-making to protect individuals from being treated in unfair ways,” he says. “This comes at a time when the UK government is looking to relax human checks on AI to help boost innovation. 

“However, we must continue to combat bias across its uses and that can only be done through human interaction, as the FSI highlights, because bias present in data ‘could be amplified by the AI algorithms’. Only with human interactions will we be able to identify and resolve forms of AI discrimination in finance and provide better services for customers.”

Guidelines are long overdue, say observers like Akbar Datoo, founder and chief executive officer at D2 Legal Technology. The company advises banks, asset managers and trade associations on the use of AI technologies. It helped Barclays use AI to review a large portfolio of more than 50,000 documents in 2018 and 2019, to identify and deal with contractual obligations that would be impacted by Brexit. 

“There are cases where the lack of explainability has meant the use of AI has resulted in significant mis-steps in regulatory reporting, difficulties in ensuring operational resilience through the manner in which vendors have been engaged in this area and ambiguity regarding regulatory expectations of controls and oversight of such technology,” says Mr Datoo.

These mis-steps look like minor infractions, however, when compared with the wider dangers of AI. Scientists have discovered that AI can behave in a manner eerily similar to that of humans, requiring motivation and rewards to undertake lengthy tasks and finding ways to get rewards without completing them. Jonathan Este is associate editor at The Conversation, which publishes news authored by academic experts and researchers. He writes: “As we develop ever smarter AI, we need to ensure that we understand enough about its motivations and rewards to ensure it doesn’t opt for a simpler way of living that may not necessarily include us humans.”
