AI update – regulatory approach to AI in financial services

In response to the government's White Paper on its approach to regulating artificial intelligence and machine learning ("AI"), and its call for action to the regulators, the Financial Conduct Authority ("FCA") published its AI Update on 22 April 2024. On the same date, the Bank of England ("BoE") and the Prudential Regulation Authority ("PRA") issued a joint letter setting out their approach.

Overview

In February 2024, the government issued a response to its consultation on its AI White Paper, setting out a "pro-innovation approach to AI regulation" and a light-touch regulatory framework based on five cross-sectoral principles ("Principles" – see our 2023 insight on the White Paper for details). This framework stands (at least for now) in lieu of a standalone piece of cross-sectoral AI regulation, and the government tasked key sectoral regulators with publishing updates by the end of April 2024 setting out their approach, capability and proposed actions to ensure they can effectively implement the Principles and regulate the use of AI in their domains.

The FCA is clear that it is a "technology-agnostic, principles-based and outcomes-focused regulator", and is focused on how firms can safely and responsibly adopt AI technology. Whilst recognising the need to monitor and adapt their approaches as needed, none of the FCA, the BoE or the PRA believes that further regulation is needed at this stage.

Alignment with the five AI governing principles

The FCA AI Update, and the letter from the BoE and PRA, welcome the government's principles-based, sector-led approach, and set out (albeit at a high level) how they map the Principles to their current regulatory controls, codes of practice/principles, and governance requirements.

The regulators take the view that their frameworks are already appropriate to support the adoption, use and further innovation of AI, in line with the White Paper Principles. In practice, however, their vagueness and lack of granularity on what firms need to do to comply are likely to cause headaches for compliance professionals (internal and external), Senior Managers and others directly under the regulatory spotlight (more on this in our takeaway thoughts below).

In their respective responses, the FCA and PRA summarised how, in their view, their existing regulatory frameworks address the Principles. A high-level summary of their views is set out below.

Safety, security, robustness

FCA framework:

  • Principles for Businesses: e.g. firms must conduct their business with due skill, care and diligence (Principle 2) and take reasonable care to organise and control their affairs responsibly and effectively, with adequate risk management systems (Principle 3).
  • Threshold Conditions: e.g. the requirement that a firm's business model must be suitable (including in relation to being conducted in a sound and prudent manner).
  • SYSC sourcebook: e.g. provisions on risk controls under SYSC 7, general organisational requirements under SYSC 4, and specific rules and guidance on outsourcing in SYSC 8 and 13.

PRA framework:

  • SS2/21 (Outsourcing and third party risk management): e.g. managing risks from suppliers where third parties support important business services, and controlling specific risks arising from cloud computing.

Fairness

FCA framework:

  • Consumer Duty: e.g. delivering good outcomes for retail customers, acting in good faith and avoiding causing foreseeable harm.
  • Principles for Businesses: e.g. paying due regard to the interests of customers and treating them fairly, and communicating information in a way that is clear, fair and not misleading.

PRA framework:

  • Equality Act 2010: e.g. which applies to the BoE when carrying out a public function (including monetary policy, prudential regulation, policymaking and supervision).

Appropriate transparency and explainability

FCA framework:

  • Consumer Duty: e.g. honesty, ensuring fair and open dealings with retail consumers.
  • Principles for Businesses: e.g. paying due regard to the information needs of clients and communicating with them in a way that is clear, fair and not misleading.

PRA framework:

  • SS1/23 (Model Risk Management principles): e.g. considering the explainability and transparency of an AI model.

Accountability and governance

FCA framework:

  • Senior management arrangements: e.g. ensuring the Chief Risk function has responsibility for the overall management of a firm's risk controls, including the setting and managing of its risk exposures.
  • SYSC sourcebook: e.g. having robust governance arrangements, including a clear organisational structure with well-defined, transparent and consistent lines of responsibility.
  • Consumer Duty: e.g. ensuring that the obligation to deliver good outcomes for retail customers is reflected in firms' strategies, governance and leadership.

PRA framework:

  • Senior Managers and Certification Regime (SM&CR): e.g. ensuring that one or more Senior Managers has overall responsibility for the main activities, business areas and management functions of a firm.
  • PRA Rulebook: e.g. high-level requirements on governance, and rules regarding risk management and controls.
  • SS1/23: e.g. strong governance oversight, with a board that promotes a model risk management culture from the top by setting a clear model risk appetite.

Contestability and redress

FCA framework:

  • Complaints sourcebook (DISP): e.g. rules and guidance detailing how firms should deal with complaints, and redress schemes.

PRA framework:

  • The PRA sees this principle as sitting largely with consumer-facing regulators, such as the FCA and the ICO.

Earlier publications by the UK financial sector regulators are, in fact, more enlightening on their views of the specific risks of AI in this sector, such as the discussion paper DP5/22 – Artificial Intelligence and Machine Learning, published by the BoE, PRA and FCA on 11 October 2022 (which, for example, goes into useful detail on, among other concerns, the interaction of AI and machine learning with the FCA's Principles for Businesses, the Consumer Duty, the Consumer Protection from Unfair Trading Regulations 2008, and the Equality Act 2010).

Next steps over the coming twelve months

The regulators will continue to monitor the adoption and impact of AI across UK financial markets, using various sources of data and intelligence, such as the AI Public-Private Forum, the Digital Regulation Cooperation Forum ("DRCF") and the Emerging Technology Research Hub. They will, no doubt, be proactive in identifying and mitigating any potential harms or risks to consumers and markets arising from the use of AI, as well as in understanding the opportunities and challenges for beneficial innovation.

Focusing on the FCA's proposed actions over the next twelve months, its plans include:

  • Continuing to further its understanding of AI deployment in UK financial markets by conducting diagnostic work, re-running a machine learning survey (jointly with the BoE), and collaborating with the Payment Systems Regulator across payment systems.
  • Building on the existing foundations of its regulatory framework and considering future adaptations if needed, especially in relation to resilience, outsourcing and critical third parties.
  • Collaborating with other regulators, both domestically and internationally, to share insights and best practices on AI regulation.
  • Testing for beneficial AI by working with DRCF member regulators to deliver the pilot AI and Digital Hub, exploring changes to its innovation services, and assessing opportunities for its AI Sandbox.
  • Using AI in its regulatory activities by investing in advanced models to detect fraud, scams and market abuse, and exploring potential use cases involving natural language processing, synthetic data and large language models.
  • Looking towards the future by conducting research on emerging technologies, such as deepfakes and quantum computing, and responding to the data asymmetry between Big Tech and traditional financial services firms.

It is therefore imperative that firms keep abreast of developments and standards in AI regulation, and collaborate with other stakeholders, such as industry peers and professional advisors, to share best practices and promote beneficial and responsible innovation in AI. It is also worth tracking, for their likely direction of travel, informal pronouncements made by senior staff at the regulators (for example, the speech that Nikhil Rathi, FCA Chief Executive, gave at The Economist in July 2023, Our emerging regulatory approach to Big Tech and Artificial Intelligence).

Our takeaway thoughts

The first task for firms is to identify the AI systems and processes being used either internally (e.g. in HR functions) or externally (e.g. in customer-facing operations and decision-making), especially those AI systems and tools that are embedded in legacy systems – in other words, where they are less visible but may still cause firms to breach existing laws and regulations. In our experience, many firms have failed, and continue to fail, to do this.

Whether for legacy AI tools or new systems, firms should ensure that their use of AI is consistent with the FCA's guiding principle of consumer protection, and with the FCA/PRA objectives of market integrity and effective competition. This requires firms to ensure that they comply with the existing regulatory frameworks (including those mentioned above), and also to consider how their use of AI aligns with the government's five AI Principles. That, to us, seems easier said than done.

Whilst many businesses have supported the "principles-led" approach across sectors, anticipating that it will be flexible, non-prescriptive and fit for an evolving AI landscape, other businesses and stakeholders will be less optimistic that they understand, and can define, the regulatory goalposts. In particular, given the diverging approach under the EU Artificial Intelligence Act (which sets out detailed, prescriptive requirements) and potentially competing requirements from UK regulators, businesses will need to map out clearly the complex regulatory matrix as it applies to them in practical and operational terms, including how it impacts their supply chains.

The cross-border and cross-sectoral nature of AI, and its interactions with other emerging technologies, may also raise issues of coordination and cooperation with other domestic and international regulators and stakeholders, as well as potential regulatory arbitrage or fragmentation.

With AI developing at pace across the world, and a potential change in UK government on the horizon, the direction of travel may yet change, as may the UK's current AI strategy. Watch this space.

Key contacts

Our experts at Stephenson Harwood are well placed to assist regulated firms in understanding how to meet their obligations when developing and deploying AI in their operations and supply chains.