EU Artificial Intelligence Liability Directive

Exploring the legislative landscape for the proposed EU Artificial Intelligence Liability Directive

As artificial intelligence ("AI") increasingly permeates the way we live, work and interact, the likelihood of the use of AI causing loss and damage to individuals and organisations will also increase. Those suffering loss and damage will likely seek compensation, and there are concerns that the current liability rules of many jurisdictions are ill-equipped to deal with such claims.

This insight considers the proposed new liability regime in the EU and how this seeks to address fault-based liability in relation to the use of AI.

Background

On 28 September 2022, the European Commission (the "Commission") published its proposal for an Artificial Intelligence Liability Directive ("AI Liability Directive"). The proposal aims to "adapt private law to the needs of the transition to the digital economy" and to make it easier to bring claims for harm caused by AI systems and the use of AI. It addresses the specific difficulties of establishing causation and fault in relation to AI systems, and seeks to ensure that claimants suffering loss in fault-based scenarios have recourse to damages or other appropriate remedies.

Effective date

Currently unknown: The draft AI Liability Directive still needs to be considered by the European Parliament and Council of the European Union. Once negotiated and adopted, EU Member States will be required to transpose the terms of the AI Liability Directive into national law, likely within two years.

Interaction with other EU proposals

The AI Liability Directive is intended to complement the European Commission’s proposed Regulation on Artificial Intelligence (the "AI Act"), which will classify AI systems by risk and regulate them accordingly. It will also sit alongside proposals for a new Directive on Liability for Defective Products ("Revised PLD"), which will update the EU's product liability framework to better reflect the digital economy and will explicitly include AI products within its scope.

Summary overview

At a fundamental level, the AI Liability Directive is intended to make it easier to bring claims for harm caused by AI:

  • courts will be able to compel providers of high-risk AI systems to disclose relevant evidence to claimants about systems that are alleged to have caused damage; and
  • if certain conditions are met, there will be a rebuttable presumption of a causal link between the defendant's fault and the output of the AI system (or its failure to produce an output) that caused the damage.

Deep dive

What is its primary purpose?

The proposed AI Liability Directive is part of a broader package of EU legal reforms aimed at regulating AI and other emerging technologies. As currently drafted, the AI Liability Directive seeks to achieve three principal things:

  1. reducing legal uncertainty surrounding liability claims and AI-related damages;
  2. ensuring that victims can seek effective redress for AI-related damages; and
  3. harmonising certain rules across Member States and bringing national liability rules up-to-date.

Who does it apply to?

The AI Liability Directive will apply to providers, operators and users of AI systems, with these terms having the same meanings as in the draft AI Act.

What is its territorial scope?

The AI Liability Directive has extraterritorial effect, broadly capturing providers and users of AI systems that are made available or operate within the EU.

Key provisions / requirements

The key provisions of the AI Liability Directive are:

  1. Lowering the evidentiary hurdles for victims injured by AI-related products or services, making it easier for them to establish claims against AI providers, operators or users.
  2. Introducing measures to empower courts in EU Member States to compel the disclosure of evidence relating to AI systems in certain situations. The AI Liability Directive would allow national courts to compel providers of high-risk AI systems (as defined under the AI Act) to disclose relevant evidence to potential claimants about a specific system alleged to have caused damage. This rule will apply if the claimant: (i) presents sufficient facts and evidence to support the claim for damages; and (ii) shows that they have exhausted all proportionate attempts to gather the relevant evidence from the defendant.
  3. Allowing claims to be brought by a subrogated party or a representative of a claimant, including by class action.
  4. Introducing a rebuttable presumption of a causal link between the defendant's fault and the output produced by the AI system (or the AI system's failure to produce an output) that gave rise to the damage. This presumption would apply if all three of the following conditions are met:
    • the claimant has shown that the defendant failed to comply with a duty of care intended to protect against the damage that occurred, including a failure to comply with relevant obligations under the AI Act;
    • it can be considered reasonably likely, based on the circumstances of the case, that the fault influenced the output produced by the AI system, or the AI system’s failure to produce an output; and
    • the claimant has shown that the output of the AI system, or the AI system’s failure to produce an output, gave rise to the damage.

The AI Liability Directive does not, as currently drafted, address situations where an AI system causes damage but there is no obvious defective product or fault by either the provider or user. However, the European Commission will assess the need for no-fault strict liability rules five years after the entry into force of the AI Liability Directive.

Regulatory supervision

As the AI Liability Directive proposes civil liability rules, there are currently no apparent plans for any regulatory supervision, other than those already set out in the draft AI Act.

Enforcement

The AI Liability Directive will not apply retrospectively and will only apply to causes of action arising after its (likely two-year) implementation period has expired.

What you should be doing now in preparation

The AI Liability Directive underscores the vital importance of compliance with the AI Act.

In preparation, organisations are advised to conduct thorough risk assessments, examining whether their AI use cases are likely to fall within the scope of the proposed AI Act and, if so, whether they might be categorised as 'high-risk' AI systems. Once these risk assessments are complete, appropriate governance and policies should be put in place to limit the risk of damage being caused by AI systems through incorrect use, or inaction, on the part of the organisation and its employees.

Businesses should also consider how they would comply with potential disclosure requests, and what the disclosed information would look like. For complex AI systems, this should be factored into the development roadmap and governance structure well ahead of the AI Act or AI Liability Directive taking effect.

Outside of compliance with the AI Act and AI Liability Directive, businesses should also ensure that they have appropriate contractual protections in place in relation to the use of AI systems (in particular appropriate warranties and indemnities to cover potential risks when procuring AI systems).

Our concluding thoughts

The AI Liability Directive seemingly equates the responsibilities of AI system users and providers to those typically associated with more tangible technologies like industrial machinery. In other words, it proposes that an AI system user will be considered at fault if they misuse the system or disregard instructions. Similarly, a provider would be at fault if the AI system was improperly designed or developed, or if corrective measures were not taken to address identified defects.

However, given the intrinsic complexity and opacity of AI systems—some of which are colloquially termed "black boxes"—the efficacy of this approach in addressing an ever-evolving technological landscape remains uncertain.

Importantly, the AI Liability Directive does aim to establish a clear cause-and-effect relationship between the actions of an individual or organisation and the damage attributable to the AI system in fault-based scenarios. This means that, despite its potential shortcomings in addressing complex liability questions, the introduction of these rules should ensure that victims suffering loss or damage can obtain redress in the most straightforward cases.

Nonetheless, the AI Liability Directive may leave a gap in scenarios where a claimant cannot establish a clear link between the damage caused by the AI system and the defendant's fault, especially where the AI system operates as designed but still results in damage. This exclusion appears deliberate, as the Directive's draft proposals reference the European Commission's plan to reassess the need for no-fault liability rules five years after the AI Liability Directive's entry into force.