The EU AI Act: what we know so far and key takeaways

On 8 December 2023, after 38 hours of intense final negotiations, the Council of the European Union and the European Parliament reached an historic provisional agreement on laws to regulate the use of artificial intelligence in the EU (the "AI Act"). The AI Act marks the world's first comprehensive legal framework for AI, aiming to ensure that "AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values". This landmark deal signals the EU's commitment to AI safety and puts it ahead of other jurisdictions such as the US and UK, which have yet to publish comprehensive legislation of their own; China, meanwhile, has developed its own approach to regulating AI.

The definitive text of the AI Act remains to be agreed. Work will continue at a technical level to finalise the details, and the resulting text will then need to be confirmed by the Council and the Parliament, which is expected in early 2024. Until then, these are the key takeaways from the provisional agreement.

  • Overall: the AI Act takes a "risk-based" approach - the higher the risk, the stricter the rules. AI systems that pose an "unacceptable risk" will be prohibited, while stringent regulatory requirements will be imposed on "high-risk" AI systems. Conversely, "limited risk" and "minimal risk" AI systems are subject to simpler transparency obligations.
     
  • Definition of AI: the definition of AI to be incorporated in the AI Act tracks the OECD's definition: '… a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.' This definition is intended to exclude "simpler software systems" from the scope of the AI Act.
     
  • Scope: the AI Act will not apply to areas outside the scope of EU law and will not, for example, restrict Member States' competence in national security. Systems used exclusively for military or defence purposes, solely for research and innovation, or by individuals for non-professional purposes will also be excluded.
     
  • Territoriality: the AI Act will apply extraterritorially, so organisations based outside the EU will be subject to its restrictions where they market or deploy AI in the EU. The EU intends the AI Act to set a global standard for AI regulation, as the GDPR has done for data protection, thereby advancing the EU's approach to regulating technology.
     
  • Prohibitions: some forms of AI will be banned entirely:
     
    • AI biometric categorisation systems that exploit sensitive characteristics (e.g., political or religious beliefs, race, or sexual orientation),
    • "untargeted" scraping of facial images from online resources or CCTV to compile facial recognition databases,
    • AI used to recognise emotions in the workplace and in educational institutions,
    • social scoring systems based on social conduct or personal characteristics,
    • AI systems that manipulate human behaviour to circumvent people's free will, and
    • AI deployed to exploit human vulnerabilities (e.g., age, disability, economic situation),
       

    with certain exemptions for law enforcement agreed only after contested negotiations.

  • High-risk systems: various AI systems are identified as high-risk, namely those with significant potential to harm health, safety, fundamental rights, the environment, democracy and the rule of law. They include AI systems used to influence the outcome of elections and voter behaviour – a topical concern, with examples of such uses now becoming more visible around the world. High-risk AI will be subject to extra protections, including compliance certification, and citizens will have the right to raise complaints about AI systems and to receive explanations of decisions affecting their rights that are made using high-risk AI. Before a high-risk AI system is launched, a "fundamental rights impact assessment" will have to be undertaken; the provisional agreement extends this assessment to the banking and insurance sectors. Another change from the original AI Act proposal is an exemption for AI that falls within the high-risk categories but does not, in practice, present a significant risk to safety or fundamental rights.
     
  • General purpose AI ("GPAI"): a suitable approach to regulating GPAI (which we assume includes the foundation models underpinning generative AI tools such as ChatGPT) was a point of contention throughout the negotiations. Agreed only in the concluding hours, the AI Act will impose additional obligations on providers of such models, at two levels of regulation.
     
    • All GPAI systems, including the models on which they are based, will need to comply with transparency requirements, including drawing up and making available the underlying technical documentation, complying with EU copyright law, and publishing detailed summaries of the data used to train the models.
    • For powerful, "high-impact" GPAI that poses "systemic risks along the value chain", further obligations apply, including model evaluation, systemic risk assessment and management, adversarial testing, adequate cybersecurity for the software and hardware components of such GPAI systems, reporting serious incidents to the Commission, and monitoring energy efficiency. The tests for whether GPAI is "high-impact" are likely to include the total compute power used to train the model, the size of the training dataset, and the number of users (a screening check along these lines is sketched below).
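
To make the "high-impact" screening concrete, the following is a minimal Python sketch of how such a test could be applied. It is illustrative only: the 10^25 FLOPs compute threshold reflects the figure widely reported around the provisional agreement, while the dataset and user thresholds, and all names in the code, are hypothetical placeholders.

```python
from dataclasses import dataclass

# Hypothetical thresholds for a "high-impact" GPAI screening test.
# The compute figure mirrors the widely reported 10^25 FLOPs level;
# the dataset and user figures are invented placeholders.
COMPUTE_THRESHOLD_FLOPS = 1e25
TRAINING_TOKENS_THRESHOLD = 1e12
USER_THRESHOLD = 10_000_000

@dataclass
class GPAIModel:
    training_compute_flops: float
    training_tokens: float
    registered_users: int

def is_high_impact(model: GPAIModel) -> bool:
    """Flag a model as potentially "high-impact" if any screening
    criterion meets or exceeds its threshold."""
    return (
        model.training_compute_flops >= COMPUTE_THRESHOLD_FLOPS
        or model.training_tokens >= TRAINING_TOKENS_THRESHOLD
        or model.registered_users >= USER_THRESHOLD
    )

# Example: a model trained with 3e25 FLOPs would be flagged.
print(is_high_impact(GPAIModel(3e25, 5e11, 1_000_000)))  # True
```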
       
  • Governance: the AI Act will be enforced by authorities designated within Member States. At an EU level, a new AI Office will be formed to coordinate and supervise governance and enforcement across the EU, including oversight and enforcement of the agreed GPAI regulation.
     
  • Enforcement: tiered fines will apply for violations of the AI Act, set in each case at the higher of a percentage of annual global turnover or a fixed amount (a worked sketch follows this list):
     
    • 7% of annual global turnover or €35 million for prohibited AI infringements,
    • 3% of annual global turnover or €15 million for other violations (including those relating to high-risk systems), and
    • 1.5% of annual global turnover or €7 million for supplying inaccurate information.
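
To illustrate the "higher of" mechanics of these fine tiers, here is a minimal Python sketch. The percentages and fixed amounts come from the provisional agreement as summarised above; the function name, tier keys and the example turnover figure are hypothetical.

```python
# Illustrative sketch of the AI Act's "higher of" fine mechanics.
# Tier values follow the provisional agreement as summarised above;
# the example turnover figure is hypothetical.
FINE_TIERS = {
    "prohibited_ai": (0.07, 35_000_000),    # 7% or €35m
    "other_violation": (0.03, 15_000_000),  # 3% or €15m
    "inaccurate_info": (0.015, 7_000_000),  # 1.5% or €7m
}

def max_fine(violation: str, annual_global_turnover_eur: float) -> float:
    """Return the higher of the turnover percentage and the fixed
    amount for the given violation tier."""
    pct, fixed = FINE_TIERS[violation]
    return max(pct * annual_global_turnover_eur, fixed)

# A company with €2bn annual global turnover committing a prohibited-AI
# infringement faces up to max(0.07 * 2e9, 35e6) = €140m.
print(f"€{max_fine('prohibited_ai', 2_000_000_000):,.0f}")  # €140,000,000
```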

Many of the provisions in the legislation are not due to come into force until 2026 at the earliest. Before then, the provisions regulating prohibited AI uses and high-impact GPAI are expected to come into force in late 2024 and mid-2025 respectively, and the provisions concerning governance and conformity bodies are likely to follow in mid-2025. In the meantime, the European Commission plans to launch a voluntary "AI Pact", encouraging developers to commit to "key obligations of the AI Act ahead of legal deadlines".

Authors

Katie Hewson
Jenna Franklin
Boriana Guimberteau
Simon Bollans
Isabella Clark