The UK's White Paper on AI regulation: a pro-innovation approach

Is the balance right with the UK's pro-innovation AI White Paper?

Recently, discussions on the race to develop Artificial Intelligence ("AI") – ranging from Elon Musk and Steve Wozniak's proposed six-month moratorium on training AI systems to Bill Gates' call for establishing "rules of the road" so that the benefits of AI outweigh its risks – have dominated international debate. Against this backdrop, the UK Government ("Government") set out its current position in the race by publishing a white paper titled "A pro-innovation approach to AI regulation" ("White Paper"), detailing a proposed 'light touch' framework which seeks to balance regulation with spurring responsible AI innovation. However, more recent announcements by the Prime Minister, Rishi Sunak, may result in a re-assessment of this strategy, largely driven by the accelerated growth of generative AI and an awakening to the significant impact it may have on our lives and the economy.

This insight will provide an overview of the features of the proposed framework set out in the White Paper (the "Framework").

Background

The White Paper is based on the premise that a principles-based framework will ensure that the "…UK [is] on course to be the best place in the world to build, test and use AI technology…" (Rt Hon Michelle Donelan MP, Secretary of State for Science, Innovation and Technology). Fundamentally, it builds on existing regimes – it is proposed in lieu of new legislation, and it does not amend the scope of existing legislation which relates to AI (e.g. data protection laws). According to the Government, the use of existing legislative regimes (coupled with proportionate regulatory intervention) will result in a future-proof framework that can be adapted according to AI trends, opportunities and risks. Whether this approach is retained after the current consultation remains to be seen.

Diverging approach – UK vs EU

In contrast to the light-touch, principles-based approach in the White Paper, the EU is over two years into the process of agreeing a detailed and prescriptive AI regulation (the "AI Act"). The AI Act is designed to regulate AI systems based on their level of risk to humans, prohibiting the use of particularly harmful AI systems, introducing stringent controls for "high-risk" AI, and imposing moderate transparency requirements for "low-risk" AI. Its granularity and detail are comparable to those of the General Data Protection Regulation.

Irrespective of the path ultimately taken by the UK, separate regimes are on the horizon in the UK, the EU and the rest of the world. Therefore, it is critical that businesses developing and offering AI products and systems, and those implementing AI into their operations, are aware of how the proposed rules will apply. Understanding the diverging regimes will be important in developing commercial approaches and preparing for compliance.

Deep dive

What does the Framework aim to achieve?

In light of the UK's ambition to become a science and technology superpower, the Framework aims to be an instrumental tool in "…getting regulation right…" so that international businesses feel confident in investing, and retaining their investment, in the UK. It aims to increase prosperity and growth in AI markets by removing barriers to innovation, augment public trust in AI systems to drive up AI adoption, and reinforce the UK's position as a global leader in AI.

The White Paper also notes that the Government wants to ensure that UK businesses benefit from global AI opportunities by managing cross-border risks in AI supply chains.

What is the territorial scope and ambition of the Framework?

The Framework will apply to those developing, deploying and using AI systems across the UK, irrespective of the sector in which they operate.

What doesn't the Framework cover, and why?

The Framework does not address all societal and global challenges associated with using and developing AI systems, such as data access and sustainability.

The Framework does not cover the allocation of liability during the AI life cycle. According to the White Paper, it would be premature at this stage to reach conclusions on liability "…as it's a complex, rapidly evolving issue…", which requires careful manoeuvring so as not to disrupt the UK's AI ecosystem.

What AI does it apply to?

The Framework does not provide an oven-ready definition of AI. Instead, it describes AI by reference to "adaptivity" and "autonomy", features that are baked into the functionality of AI systems. It is designed to regulate the use – or, in other words, the outcomes – of AI systems, as opposed to regulating the technology itself. In summary:

  • Adaptivity refers to AI systems' ability to perform new forms of inference that are not predicted by their human programmers, in turn making it challenging to explain the logic of the AI systems' output.
  • Autonomy refers to AI systems' ability to make decisions without the direct intent or control of a human, leading to difficulties in allocating responsibility for AI systems' resulting outcomes.

Whilst this approach avoids the rigidity of a fixed definition, such a broad and flexible approach may lead to inconsistencies between regulators who could, in isolation, interpret the features according to the specificities of their respective industries. The Government recognises this potential pitfall and has put forward measures to support inter-regulator coordination and centralised monitoring.

A principles-based approach

The Framework is based on a set of cross-sectoral principles ("Principles") that aim to encourage responsible AI design, development and use. The Principles are intended to deliver a consistent and proportionate application of the Framework, whilst affording regulators a degree of interpretive flexibility.

Since our insight on the DCMS AI Policy Statement, the Principles embedded in the Framework have been "…updated and strengthened…" and now consist of:

  • Safety, security and robustness: AI systems should operate "…in a robust, secure and safe way throughout the AI life cycle…", especially in light of the autonomous nature of AI decision-making. Risks present at each stage of the AI life cycle should be identified, assessed and managed.
  • Appropriate transparency and explainability: Transparency is defined as the provision of appropriate information on AI systems (e.g. the purpose of the AI system, how and when it will be used) to the relevant parties. Explainability relates to a relevant party's ability to access and understand the decision-making rationale of an AI system.
  • Fairness: Fairness pertains to protecting the legal rights of individuals and organisations. AI systems should not weaken these legal rights, nor should they result in discriminatory market outcomes. For example, errors in an AI-generated credit score can negatively affect an individual's livelihood.
  • Accountability and governance: Governance measures should be implemented to oversee the supply and use of AI systems, and lines of accountability should be clearly demarcated throughout the AI life cycle.
  • Contestability and redress: Affected third parties and actors within the AI life cycle should be able to make a complaint about or contest AI that creates harm or a material risk of harm.

Who will regulate compliance?

Instead of establishing a standalone body for AI regulation, the White Paper proposes to enhance the remit and capacity of existing regulators to develop a sector-specific, principles-centred approach. Authorities such as the ICO, CMA, FCA, Ofcom, the Health and Safety Executive, and the Equality and Human Rights Commission will need to adhere to the Principles to foster trust and clarify guidelines for innovation.

Regulatory coordination, collaboration and centralisation

As the Framework is proposed in lieu of a standalone piece of cross-sectoral AI regulation, there is uncertainty as to how it will work effectively within "…a complex patchwork of legal requirements…". Without coordination, regulatory burdens on businesses could grow, smaller players could struggle to compete, market and public confidence in AI could deteriorate, and innovation could be stunted. To overcome this, the Government has proposed greater coordination at the central and regulator level to ensure that the Framework functions in a "cross-cutting, principles-based" manner.

The Government plans to offer centralised support for monitoring and evaluating the new AI regime, identifying barriers and inconsistencies, predicting emerging AI risks, fostering AI-focused sandboxes, promoting AI education for businesses and consumers, and maintaining compatibility with international frameworks. Although it is unclear which entity will fulfil this role, initial indications suggest responsibility will sit within government, potentially evolving into an independent regulator dedicated to AI in the future.

Adaptability

In practice, not all of the Principles will be relevant to a particular context and in some instances the Principles may come into conflict. In instances of conflict, regulators will be able to prioritise certain Principles in line with the White Paper's context-driven approach. The Government may adapt the Framework in the future should regulators find certain Principles irrelevant.

The Government may also adapt the Framework in light of the fact that some sectors, such as the AI-enabled military sector, already have their own principles which go beyond the scope of the Principles (e.g. The Ministry of Defence published its own AI strategy in June 2022).

Complementary tools

The Framework will not operate in isolation; the White Paper puts forward a range of complementary tools:

  • Regulatory sandbox for AI: a one-stop shop for regulators to test the Framework, identify technology and market trends that may change the Framework, and assist innovators in getting their products to market more quickly.
  • Assurance techniques: processes such as impact assessments, audits and performance testing will be used to assess and determine AI systems' trustworthiness.
  • Technical standards: standards that can be applied uniformly across sectors – addressing areas such as safety and robustness, bias, and risk management – will be used. Regulators may also use these standards as a benchmark, and integrate them into sector-specific guidance.

Next steps in the consultation  

The consultation under the White Paper closed on 21 June 2023, and within the next six months we should expect a more detailed response and potential guidance on the implementation of the White Paper principles and proposed framework. However, there is significant scope for the approach to change, and plenty of political talk of more internationally coordinated approaches.

Other regulatory developments to be aware of

A number of other regulatory developments in the UK, the EU and other jurisdictions will also affect the development, use and rollout of AI systems. Impacted businesses must therefore understand how these may affect their current and future AI endeavours. These developments include (but are not limited to):

  • The EU's AI Act – setting out the bloc's current detailed framework for the regulation of AI.
  • The EU's Data Governance Act – which will provide more opportunities and structure in regard to data sharing – and relevant parts of the Digital Services Act, Data Act and Cyber Resilience Act.
  • The EU's Artificial Intelligence Liability Directive – an EU proposal that will provide uniformity in rules for non-contractual civil liability for damage caused with AI involvement. 
  • The UK's Data Protection and Digital Information Bill – set to change (among other things) the rules on how personal data is processed by automated systems. Our data protection bulletin (published regularly on our hub) is tracking the progress of this new legislation.
  • Canada's proposed Artificial Intelligence and Data Act – which would establish common requirements for the design, development, and use of artificial intelligence systems, including measures to mitigate risks of harm and biased output. It would also prohibit specific use of AI systems that may result in serious harm to individuals or their interests.
  • US State Bills and Federal AI laws – there are a number of laws and regulatory orders on the cards in the US, including the Federal Trade Commission's expansion of its rulemaking into AI enforcement. To date there have been over 40 bills introduced across US States that would regulate in some way the use or deployment of AI. In June 2023, U.S. senators introduced two separate bipartisan artificial intelligence bills amid growing interest in addressing issues surrounding AI technology.

AI providers must also consider non-legal forms of regulation that may influence AI systems, such as AI assurance frameworks and regulatory guidance.

What you should be doing now in preparation

Whilst the regulatory frameworks in the UK, the EU and around the world are yet to be finalised, there is sufficient information, and there are sufficient common themes, for organisations to start taking steps to prepare for the new requirements that lie ahead. In fact, making headway on these now will almost certainly ease the compliance burden down the line. Actions to consider include:

  • develop an AI asset register – knowing where and how AI is used in your organisation will enable you to assess risks and implement policies and compliance requirements;
  • implement policies that govern how AI should be developed or implemented. These should address how to manage risks such as bias, dataset integrity, and transparency;
  • carry out and document risk assessments, in particular where the AI could present higher risks. Start building these into standard operating procedures (in the same way data protection impact assessments are implemented);
  • establish an AI governance body – if AI forms, or will form, a key part of your product portfolio or internal operations, establish a governance structure to guide the business and act as gatekeeper; and
  • track legislative progress so you keep up to date with developments. As AI becomes deeply embedded in your organisation, you will need to plan well ahead.

Our concluding thoughts

Whilst the Framework's pro-innovation stance is intended to provide flexibility in how the use of AI is controlled in the UK, the fact that the Framework has to operate within a patchwork of regulations may create gaps that are, in practice, too burdensome and complicated to fill effectively. The White Paper's inclusion of complementary tools and centralised functions appears helpful in addressing this concern (if implemented properly by regulators); however, such a wide range of regulatory tools and partly centralised mechanisms might lead to confusion if coordination and collaboration are not encouraged to the extent required to effect any meaningful change on the UK's AI regulatory landscape.

We eagerly await the outcome from the consultation on the White Paper and look forward to providing an update on the UK's approach in due course.