In our previous posts, we provided a brief recap of the status of the proposed AI Act and the AI Liability Directive and our list of the three things that every business should know about how the EU is regulating AI.
As promised, we are now providing our list of the three reasons why it is crucial for businesses to know how the EU is regulating AI.
SPOILER ALERT: Even if your business falls outside the scope of the AI Act, the European regulation of AI will still be relevant for you because, much as happened with the GDPR, it is likely to become a model for legislation in other jurisdictions. This is in fact the very first reason for monitoring developments in the EU regulation of AI, which may also impact countries outside the EU's borders.
Stay tuned if you want to stay in the loop on AI in the EU (and on many other things!), and don’t hesitate to contact us with any questions!
- It’s more likely than not that your business will fall within the scope of application of the AI Act.
The AI Act will apply to AI system providers (and their authorized representatives), importers, distributors and users, as well as product manufacturers that place an AI system on the market or put one into service together with their product and under their own name or trademark.
Consequently, the AI Act will affect almost everyone that places on the market or puts into service in the EU an AI system (whether they are physically present or established in the EU or in a third country), or that uses in the EU the output of an AI system.
The obligations that providers must comply with will vary depending on the level of risk posed by the AI system. Consistent with the risk-based approach embraced by the AI Act, the higher the risk, the stricter the requirements.
Most rules will apply to high-risk AI systems and their providers, but there are also provisions that will apply to providers and users of low- or minimal-risk AI systems. Moreover, the criteria for determining when an AI system is high-risk may change over time, as the Commission is empowered to adopt delegated acts adding further high-risk AI systems, provided that certain conditions are met.
Additionally, users may become subject to the obligations of providers, under certain conditions.
- Fines for noncompliance with the AI Act will be even higher than under the GDPR.
Although the AI Act empowers member states to lay down their own rules on penalties—including administrative fines—for infringements of the AI Act, it also provides for certain administrative fines for the most severe breaches:
- Fines of up to €30 million or, if the offender is a company, up to 6 percent of its total worldwide annual turnover for the preceding financial year, whichever is higher, will apply in cases of noncompliance with the prohibition of artificial intelligence practices (e.g. placing on the market or putting into service a prohibited AI system);
- Fines of up to €20 million or, if the offender is a company, up to 4 percent of its total worldwide annual turnover for the preceding financial year, whichever is higher, will apply in cases of noncompliance with certain obligations for high-risk AI systems and general-purpose AI systems, as well as with the transparency obligations for low- or minimal-risk AI systems;
- Fines of up to €10 million or, if the offender is a company, up to 2 percent of its total worldwide annual turnover for the preceding financial year, whichever is higher, will apply in cases where incorrect, incomplete or misleading information is provided to notified bodies and national competent authorities in reply to a request.
For small and medium-sized enterprises and startups, the fines will follow the same tiers as for large companies but will be capped at up to 3 percent, 2 percent and 1 percent of turnover respectively, depending on the severity of the offence.
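The "whichever is higher" mechanics of these tiers can be sketched as a short, purely illustrative calculation. The function name and the worked example below are our own and simply restate the caps listed in this post; they are not an official method of computing fines:

```python
def fine_cap_eur(fixed_cap_eur: int, turnover_pct: float, annual_turnover_eur: int) -> float:
    """Return the maximum possible fine for a tier: the fixed cap or the
    turnover-based cap, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct / 100 * annual_turnover_eur)

# Tiers as described above: (fixed cap in euros, percentage of worldwide
# annual turnover for the preceding financial year).
tiers = {
    "prohibited AI practices": (30_000_000, 6),
    "high-risk / transparency obligations": (20_000_000, 4),
    "incorrect information to authorities": (10_000_000, 2),
}

# Hypothetical company with €1 billion worldwide annual turnover:
# the percentage-based cap exceeds the fixed cap in every tier
# (e.g. 6 percent of €1 billion is €60 million, above €30 million).
turnover = 1_000_000_000
for breach, (fixed, pct) in tiers.items():
    print(f"{breach}: up to €{fine_cap_eur(fixed, pct, turnover):,.0f}")
```

For a smaller company, the fixed caps dominate instead: with €100 million in turnover, 6 percent is only €6 million, so the €30 million cap applies in the first tier.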
- Noncompliance with the AI Act increases the risk that your business will be held liable for damage caused by the AI system.
Among other things, the AI Liability Directive uses the same definitions as the AI Act and makes the Act's documentation and transparency requirements operational for liability through a right to disclosure of information: failure to comply with an order for disclosure triggers a rebuttable presumption of noncompliance with a relevant duty of care.
Furthermore, failure to comply with the requirements of the AI Act for high-risk AI systems will constitute an important element triggering the rebuttable presumptions provided under the AI Liability Directive.
This piece is based on the Council's general approach on the AI Act of 6 December 2022, and on the Commission's proposal for the AI Liability Directive of 28 September 2022.