In our previous post, we provided a brief recap of the status of the proposed AI Act and the AI Liability Directive. As promised, below is our list of the three things every business should know about how the EU is regulating AI.
Our list of the three reasons why it is crucial for businesses to know how the EU is regulating AI will follow shortly.
Follow us to stay updated, and don’t hesitate to contact us with any questions!
- What is the relationship between the AI Act and the AI Liability Directive?
The AI Act and the AI Liability Directive support, complement and reinforce each other.
As clarified in the explanatory memorandum to the AI Liability Directive, “safety and liability are two sides of the same coin.”
Indeed, the AI Act covers the safety side: it lays down safety-oriented rules aimed at reducing risks and preventing damage, and it requires, among other things, the establishment of a quality management system, the preparation of documentation, a conformity assessment procedure, registration, cooperation and information duties, human oversight mechanisms and a post-market monitoring system.
As risks cannot be eliminated entirely, liability rules are needed to ensure that individuals can obtain effective compensation for damage caused by AI systems. This is where the AI Liability Directive comes in: it aims to provide claimants with tools to overcome the difficulty of proving a causal link between the fault of the defendant and the output of the AI system that caused the damage.
Difficulties in establishing liability may arise from the complexity, autonomy and opacity of certain AI systems: these features can make it very difficult in practice to explain the inner functioning of an AI system (the “black box effect”).
- What is the approach taken by the EU to regulate AI?
The AI Act follows a risk-based approach, which now classifies AI systems into five categories:
- Prohibited AI systems
- High-risk AI systems
- Low-risk AI systems
- Minimal-risk AI systems
- General-purpose AI systems (newly added).
As the risks increase, so do the measures to be taken: the highest level of risk triggers an outright ban on the AI system, while for less risky AI systems the focus is on transparency obligations aimed at ensuring that users are aware they are interacting with an AI system (and not with a human being).
Most obligations of the AI Act apply to high-risk AI systems. According to the general approach of the Council, high-risk AI systems are products or safety components of products that are subject to a third-party conformity assessment before being placed on the market or put into service, or they are AI systems intended to be used for certain purposes identified by the AI Act.
Moreover, general-purpose AI systems that may be used as high-risk AI systems, or as components of high-risk AI systems, are subject to obligations similar to those provided for high-risk AI systems.
The AI Liability Directive, for its part, does not lay down substantive liability rules, nor does it aim to harmonize general aspects of civil liability (e.g. the definitions of fault and causation); those matters will remain regulated, in different ways, by national laws.
The proposed AI Liability Directive instead introduces a rebuttable presumption of a causal link between fault and damage, subject to certain conditions. In very brief terms, the national court can presume that the fault caused the damage if the claimant can prove that (i) the defendant (or a person for whose behaviour the defendant is responsible) was at fault for not complying with a duty of care relevant to the damage, (ii) the output of the AI system, or the failure of the AI system to produce an output, gave rise to the damage, and (iii) it is reasonably likely, based on the circumstances of the case, that the fault influenced the output or the failure to produce it.
The AI Liability Directive also empowers national courts to order the disclosure of evidence about high-risk AI systems where the claimant has undertaken all proportionate attempts at gathering the relevant evidence to support the claim for compensation.
- When will the AI Act be applicable?
Adoption of the general approach allows the Council—once the European Parliament adopts its own position—to enter into negotiations with the European Parliament with a view to reaching an agreement on the proposed regulation. This will still take some time.
Once published in the Official Journal, the final version of the AI Act will apply in all EU member states 36 months after its entry into force, except for (i) the rules on AI governance systems at the European and national levels and (ii) the rules on penalties for breaches of the AI Act, both of which will apply 12 months after the AI Act’s entry into force. Specific provisions set out the cases in which the AI Act will or will not apply to AI systems already placed on the market or put into service before its date of application.
***
This piece is based on the general approach on the AI Act adopted by the Council of the European Union on 6 December 2022, and on the proposed AI Liability Directive as published on 28 September 2022.