Here we are: right after Christmas, and ready to start afresh for 2023.
What did you wish to find under your Christmas tree?
Those of you who replied “the final version of the EU AI Act”: well, you were probably disappointed. And it isn’t your (or Santa’s!) fault.
Not getting your AI Act for Christmas isn’t about whether you were naughty or nice: regulating AI is no easy task. It requires ensuring safety and protecting fundamental rights without stifling innovation. Indeed, as soon as the proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on AI (AI Act) was issued on 21 April 2021, it sparked a debate, and many amendments have since been discussed in search of the right balance: regulating without over-regulating.
It’s no coincidence that when, on 6 December 2022, the Council adopted its common position (general approach) on the AI Act, the Czech Deputy Prime Minister for Digitalization and Minister of Regional Development Ivan Bartoš stressed: “Artificial Intelligence is of paramount importance for our future. Today, we managed to achieve a delicate balance which will boost innovation and uptake of artificial intelligence technology across Europe. With all the benefits it presents, on the one hand, and full respect of the fundamental rights of our citizens, on the other.”
The debate is nevertheless still far from resolved. According to recent news, the European Parliament has already started its analysis and review of the Council’s general approach and is ready to propose and discuss further amendments (under the trilogue process), namely on the criteria for determining whether an AI system is high-risk and on the Commission’s powers to revise those criteria, while provisions on general purpose AI systems will be discussed at a later stage.
The recent proposal for a directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), issued on 28 September 2022, will also have to be examined and agreed on by the European institutions.
The AI Liability Directive is intended to ensure that people enjoy the same level of protection as in cases that don’t involve AI systems (thereby also helping to strengthen trust in AI and encourage AI uptake in the EU). It provides for rebuttable presumptions and disclosure obligations to ease (without reversing) the burden of proof, while not exposing providers, operators and users of AI systems to higher liability risks that could hamper innovation.
Even if these tools (i.e., rebuttable presumptions and disclosure obligations) were chosen as the least interventionist ones, concerns have already been raised as to whether they are actually suitable for striking an effective balance between the interests of victims of harm related to AI systems and those of businesses active in the sector.
We have been discussing the AI Act and the AI Liability Directive with clients and friends, who did not deny their fear of getting lost in a sea of regulations, given that the EU is currently issuing numerous rules of paramount importance for the digital world (the DMA, the DSA, the DGA and the Data Act, for starters). With all these rules to follow, why stay abreast of developments on the AI Act in particular?
Don’t worry: we’ve got you covered. In our next posts, we will provide you with our list of the three things every business should know about how the EU is regulating AI and our three reasons why it is crucial for businesses to understand how the EU is regulating AI.
And for any further concerns, questions or curiosity about regulating AI in the EU, don’t hesitate to contact us!
This piece is based on the Council’s general approach on the AI Act, adopted on 6 December 2022, and on the proposal for the AI Liability Directive, issued on 28 September 2022.