The European Commission’s “Proposal for a Regulation laying down harmonized rules on Artificial Intelligence” (“Draft AI Regulation”) is far from final and even further from taking effect. It was published on April 21, 2021, is open to consultation until August 6, 2021, and is subject to the ordinary legislative procedure, which typically takes at least two years to conclude.
We already reviewed some aspects of the Draft AI Regulation in our recent article on “high-risk” AI systems. We have also had the chance to discuss and brainstorm the Draft AI Regulation with good friends and notable clients, sharing our first comments and impressions. In this article, we have shortlisted the top 10 issues that have emerged so far in those conversations: five things that were generally liked, and five things that a number of commentators would have wanted more of.
Five things that were generally liked
1. Definition of an artificial intelligence system
After years spent wondering what an artificial intelligence system is, we now have a definition: “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (“AI System”).
Compared with older definitions, this one makes no reference to personal data. The omission is only apparent: the Draft AI Regulation in fact deals extensively with personal data, both in requiring that high-risk AI Systems be trained only on high-quality data sets, and in its many similarities with the GDPR, which suggest that the GDPR served as a “starting point” for the Draft AI Regulation.
2. Similarities with the GDPR
The fact that the Draft AI Regulation has many similarities with the GDPR should come as no surprise, bearing in mind that (as we all know) artificial intelligence is fed by data.
These similarities include:
- Both are based on an accountability principle. In particular, we note the low level of detail used to describe the compliance obligations, as well as the requirement to provide technical documentation suitable to prove the AI System’s compliance with the Draft AI Regulation.
- Both have an extended territorial scope of application. The Draft AI Regulation covers: (i) all providers placing AI Systems on the EU market, regardless of where they are located; (ii) users of AI Systems located in the EU; and (iii) providers and users located in third countries, where the output of the AI System is used within the EU.
- Both impose regulatory and compliance requirements in high-risk scenarios, such as transparency obligations towards users, the duty for providers established outside the EU to appoint an authorized representative in the EU, and reporting requirements for serious breaches and malfunctioning.
- The Draft AI Regulation provides for the same fines as the GDPR, plus an even higher tier: €30 million or 6% of the total annual turnover of the preceding financial year, whichever is higher.
3. The European Artificial Intelligence Board
Another similarity with the GDPR is the establishment of the European Artificial Intelligence Board, composed of representatives of national supervisory bodies and the European Data Protection Supervisor and chaired by the EU Commission.
The European Artificial Intelligence Board, like the European Data Protection Board, will be in charge of supporting EU member states in the uniform interpretation and application of the new regulatory regime. This task is of paramount importance, taking into account that the EU Commission has opted for a regulation (rather than a directive), thus confirming the need for the most consistent legal regime to be implemented Europe-wide.
Furthermore, the European Artificial Intelligence Board will also help achieve a unitary, Europe-wide approach to the EU’s artificial intelligence strategy.
4. AI regulatory sandboxes
AI regulatory sandboxes make it clear that the purpose of the Draft AI Regulation is not to limit, but to foster innovation.
They will offer a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before they are placed on the market or put into service, under the direct supervision and guidance of the competent authorities (including data protection authorities, where personal data are processed within the sandbox). In doing so, the EU Commission aims to ensure compliance with the requirements of the Draft AI Regulation and, where relevant, with other Union and member state legislation supervised within the sandbox.
5. The risk-based approach
Almost all the new rules will apply to high-risk AI systems only.
The EU Commission pointed out that most AI systems entail only minimal risk and therefore fall outside the scope of the Draft AI Regulation. This clearly results in a simplification, since the (rich) new regulatory and compliance regime (also considering that certain AI systems are banned outright) will apply in a limited number of cases. Consequently, it is unlikely that the Draft AI Regulation will disincentivize the use of AI Systems.
Five things that a number of commentators would have wanted more of
1. Criteria to determine the level of risk posed by an AI system
As said, the risk-based approach proposed by the Draft AI Regulation is undoubtedly appreciated; however, clearer criteria to determine to which risk category an AI system belongs may be needed.
It is predictable that many AI systems will fall halfway between the high-risk and limited-risk categories. In these cases, deciding (for precautionary reasons) to treat the AI system as high risk would trigger a very different regulatory/compliance regime, with definitely more onerous requirements. Such requirements would apply not only to the developer of the AI system, but to all persons and entities involved in the AI system’s lifecycle (in light of the extended scope of application proposed by the EU Commission). The risk is clearly that, when in doubt, decisions will favor the lower level of risk, with consequently lower protection for individuals.
Furthermore, the Draft AI Regulation does not provide any criteria to distinguish between limited-risk and minimal-risk AI Systems, so differentiating between the two will not be easy. In this case, however, adopting a precautionary approach and treating the AI system as limited risk (rather than minimal risk) entails only one regulatory requirement: disclosing to users that they are interacting with an AI system, thus giving them the opportunity to stop the interaction.
2. Rules on liability
One of the major concerns when speaking about artificial intelligence is: who is liable for defaults and damages attributable to acts and omissions of the AI system?
It is clear that the EU Commission is trying to reinforce trust in AI through an ex ante approach, i.e. trying to ensure that only safe AI systems are used by reinforcing the regulatory and compliance rules to be complied with before the AI system is placed on the market, as well as during its lifecycle.
However, clear rules on the allocation of liability may also help achieve this purpose. When individuals know for sure that someone (who can be clearly and easily identified) is ultimately liable for any default of the AI System, they will undoubtedly be more willing to use (and trust) it.
It has been disclosed that new rules on liability regimes will soon be issued: hopefully, they will be applicable to all AI systems (and not only to high-risk AI systems).
3. Criteria to assess fines
Like the GDPR, the Draft AI Regulation provides for maximum fines only (up to €30 million or 6% of the total annual turnover of the preceding financial year, whichever is higher), without clarifying in detail the criteria that will be used to assess the amount of the fine actually issued in each case.
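In other words, only the ceiling of the most severe fine tier is determined by a formula: the higher of a flat €30 million and 6% of the preceding year’s total annual turnover. A minimal illustrative sketch of that ceiling (the function name and integer-euro inputs are our own assumptions, not part of the Regulation, and say nothing about the fine actually imposed in a given case):

```python
def max_fine_eur(annual_turnover_eur: int) -> int:
    """Ceiling of the most severe fine tier under the Draft AI Regulation:
    EUR 30 million or 6% of the total annual turnover of the preceding
    financial year, whichever is higher. Purely illustrative -- the draft
    does not specify how the concrete fine within this ceiling is assessed."""
    return max(30_000_000, annual_turnover_eur * 6 // 100)

# A provider with EUR 1 billion turnover faces a ceiling of EUR 60 million;
# one with EUR 100 million turnover faces the flat EUR 30 million ceiling.
print(max_fine_eur(1_000_000_000))  # 60000000
print(max_fine_eur(100_000_000))    # 30000000
```

As the sketch makes plain, the formula fixes only the upper bound; everything below it is left to the assessment criteria the draft does not yet spell out.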
As happened with the GDPR, guidelines from the European and national authorities on how to assess fines are likely; but if those guidelines are not sufficiently detailed and specific, the difficulties experienced in the past three years of GDPR enforcement will recur: the risks of using AI systems will not be concretely assessable, and this may disincentivize the use of artificial intelligence.
4. Human oversight and explainability requirements
The Draft AI Regulation requires that high-risk AI systems be explainable, and therefore understandable, by human beings. This is rather utopian, since the functioning of AI systems may not be fully clear even to their own developers.
To ensure that the Draft AI Regulation remains relevant and can be applied in its entirety, a more concrete approach would be welcome: for example, it may be advisable to provide exceptions to the general rule for cases in which meaningful human oversight of the AI system is not itself feasible.
5. Data transfer practical guidelines
We all know the problems that data controllers and data processors are (still) experiencing, after the so-called “Schrems II” judgment, in lawfully transferring personal data outside the European Union.
This judgment will undoubtedly affect data transfers carried out to assemble the high-quality data sets needed to train AI systems, as required by the Draft AI Regulation. Predictably, the more difficult and burdensome the processing of personal data becomes, the more difficult and burdensome it will be to ensure the highest quality of those data sets.
Perhaps new and clear rules on data transfers will be issued soon; they may also include derogations, exceptions or specific provisions applicable to AI systems.
What are your top 10 issues with regard to the Draft AI Regulation? Do you agree with the above, or do you have more?
Feel free to contact us to comment; we will be happy to share ideas!