As businesses increasingly adopt artificial intelligence (AI), its deployment continues to raise legal and ethical questions that regulators seek to harmonize. The EU Commission, on the one hand, aims at nothing less than turning the EU into a global hub of trustworthy, human-centric AI by implementing a broad, harmonized, horizontal regulatory framework for AI. On the other hand, it wishes to promote the technological development of AI.
On April 21, 2021, the EU Commission published a proposal for a regulation on harmonized rules for AI (the “Regulation”). In this article, we will
- summarize key points of the Regulation;
- discuss the EU Commission’s approach for prohibited AI practices as well as “high-risk” AI systems;
- provide examples of how the announced approach affects businesses in various sectors.
It is worth noting that the Regulation provides for significant sanctions in the event of a breach of obligations. Similar to the approach of the GDPR, fines of up to €30 million or 6% of a company's annual global turnover in the previous fiscal year, whichever is higher, can be imposed. However, the Regulation proposes that the interests of small-scale providers and startups, among others, be taken into account. This severe sanctions regime is likely intended to contribute to the Regulation's goal of setting global standards in the area of AI.
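As a back-of-the-envelope illustration, the fine cap described above (the higher of €30 million or 6% of annual global turnover) can be sketched as follows. This is a simplified reading of the proposal's upper limit only; the actual amount of any fine would be set by the competent authority case by case:

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Illustrative upper limit on fines under the proposed Regulation:
    the higher of EUR 30 million or 6% of the previous fiscal year's
    worldwide annual turnover. Simplified sketch, not legal advice."""
    return max(30_000_000.0, 0.06 * annual_global_turnover_eur)

# A company with EUR 1 billion turnover: 6% (EUR 60 million) exceeds
# the flat cap, so the turnover-based figure applies.
print(max_fine_eur(1_000_000_000))  # 60000000.0

# A small company with EUR 10 million turnover: the flat EUR 30 million
# cap is the relevant upper limit.
print(max_fine_eur(10_000_000))     # 30000000.0
```

The turnover-based cap therefore only becomes relevant for companies whose annual global turnover exceeds €500 million; below that threshold, the flat €30 million figure is the operative maximum.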
What is the approach of the EU Commission and the key points of the Regulation?
Although AI touches numerous areas of law, for example product safety and the protection of personal data, the development of AI systems themselves and the content of algorithms are not yet regulated in the EU or elsewhere. The Regulation, as the world's first comprehensive legal framework on AI, is intended to serve as the legal basis for dealing with AI and to close these regulatory gaps. Moreover, the Regulation is a key piece of the EU Commission's broader digital agenda, which also comprises the "European Strategy for Data" and the "White Paper on Artificial Intelligence".
The EU Commission defines AI as software that is developed with one or more of certain techniques and approaches (listed in Annex I of the Regulation) and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
Defining terms in this field is as important as it is difficult, and it remains an ongoing process. Because the definition is so broad, a very wide range of algorithms and solutions will be covered, and the industry will likely push to narrow the definition of AI in the course of consultations. However, AI is already widely used in a variety of ways and tends to be several steps ahead of regulation.
Which AI systems are prohibited and what is meant by “high-risk” AI systems?
The Regulation takes a risk-based approach, with the level of restrictions and regulations linked to the level of risk of AI systems.
In certain cases, AI applications are explicitly prohibited, mostly when conflicts with fundamental rights (as protected by the EU Charter of Fundamental Rights) arise. These include, in particular, systems that are capable of inflicting physical or psychological harm on a person through subconscious techniques.
The Regulation indicates criteria to determine whether a system should be considered "high risk". This is generally the case when AI systems endanger the life or health of persons or their fundamental rights. As a side note, the Regulation comes with an additional list of high-risk AI systems in an Annex, which can be amended in the future in line with the evolution of AI use cases to keep pace with rapid technological development.
Examples of “high-risk” AI
The automotive industry is on an ongoing journey from assisted to autonomous driving. The current development process typically requires information from cameras, LIDAR and radar systems. AI will soon become a prerequisite to achieve autonomous transport (e.g. in the shipping industry or intelligent aircraft). Rear-view cameras, connected cars, blind spot monitoring and collision avoidance will, for example, make driving safer. However, AI systems intended to be used as safety components in the management and operation of road traffic are considered high-risk AI under the Regulation. Therefore, fully unmanned autonomous vehicles might be prohibited because such systems are capable of inflicting physical harm on a person.
In intelligent airline operations, airlines are increasingly making use of facial recognition technology to improve the check-in process and handle security challenges. Since such operations involve biometric identification and categorization of natural persons, these are also considered high-risk AI.
AI systems that make decisions about employment relationships are considered high-risk AI. One example is a system that sorts CVs or analyzes the speech, facial reactions and body language of a candidate to help a company make recruitment decisions.
Another example of high-risk AI is systems used in essential private and public services, for example credit scoring systems, which may deny people access to financing.
Finally, the pairing of 5G and AI opens the door to large network monetization opportunities. Rapid machine learning requires large amounts of data, making ultra-reliable, high-bandwidth networks essential. The reverse is also true: key 5G suppliers anticipate artificially intelligent network operations. For instance, AI solutions will help carriers use infrastructure more efficiently, and AI may also help network operators predict the spectrum and bandwidth demands of their 5G networks and identify points of failure. Another example, the pairing of 5G and AI via drones, is an exciting way of extracting value from 5G, but in certain cases it might be considered not only high-risk but even prohibited AI.
The restrictions on prohibited and high-risk AI will create challenges and hurdles for the developers and users of AI and should be carefully analyzed before any investments are made. These issues will need to be addressed in the ongoing legislative process in the EU.
What are the key steps to commission “high-risk” AI systems?
Before commissioning a high-risk AI system, providers must undergo a conformity assessment. They need to demonstrate that their systems meet the mandatory requirements for trustworthy AI (e.g. regarding data quality, documentation and traceability, transparency, human oversight, accuracy, robustness and cybersecurity). If the system or its purpose is significantly modified afterwards, the assessment will have to be repeated. In some cases, an independent notified body needs to be involved.
Providers of high-risk AI systems will also have to implement quality and risk management systems to ensure compliance of the systems with the requirements of the Regulation and to minimize risks for users and affected persons, even after a product has been placed on the market. Market surveillance authorities will support post-market monitoring through audits and by enabling providers to report serious incidents or breaches of obligations to protect fundamental rights of which they become aware.
Furthermore, certain high-risk AI systems, notably remote biometric identification systems, must be limited in terms of data protection, time and geographic reach and require authorization from a judicial or other independent body. The accuracy of systems for facial, gait or voice recognition can vary significantly based on a wide range of factors, such as camera quality, light, distance, the underlying database and algorithm, and the subject's ethnicity, age or gender. Even a 99% accuracy rate remains highly risky, for example when an innocent person is falsely suspected. If thousands of people are exposed to a high-risk AI system, even a tiny error rate may seriously affect a significant number of individuals.
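The scale effect described above can be made concrete with a simple calculation. The figures below are hypothetical, and the sketch deliberately ignores the distinction between false positives and false negatives that real recognition systems report:

```python
def expected_misidentifications(people_screened: int, accuracy: float) -> float:
    """Expected number of erroneous results when a recognition system
    with the given overall accuracy is applied to a population.
    Hypothetical illustration only: real-world evaluation would
    separate false-positive and false-negative rates."""
    return people_screened * (1.0 - accuracy)

# Even at 99% accuracy, screening 100,000 travelers at an airport
# yields roughly 1,000 erroneous identifications.
errors = expected_misidentifications(100_000, 0.99)
print(round(errors))
```

This is why a headline accuracy figure alone says little about risk: the absolute number of affected individuals grows linearly with the number of people exposed to the system.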
Do users need to be informed about AI use?
The Regulation stipulates that the users of an AI system must be informed about the use of AI in certain cases. This applies in particular to high-risk AI systems and AI systems intended to interact with natural persons. Information obligations also arise for deepfakes and AI systems that can emotionally influence users. Deepfakes can in particular be used to misrepresent people (e.g., by "face swapping" in a video or photo to show a person in a different context).
To whom do the obligations apply?
Briefly, the Regulation applies if an AI system is placed on the EU market or its use affects people in the EU. Primarily, the creators of AI systems are accountable under the Regulation (e.g. software companies offering their AI solutions in the EU). However, the legal framework will also apply to importers, distributors, users of AI systems and, in certain circumstances, third parties, including those from outside the EU, if the AI solution is made available on the EU market.
How can organizations ensure transparency, accountability and auditability at the operational level also with respect to processing of personal data?
Processing of personal data as a key component of AI plays an important role in the implementation of the new legal framework. When approaching AI, and in particular risk assessments, a common concern of privacy professionals is the knowledge gap between regulatory requirements and practical implementation. Data protection officers may find it difficult to meet (let alone demonstrate) the requirements of transparency, fairness, human intervention, or accountability if it is a challenge to (technically) understand the systems they are advising on.
Concerns about how to accurately mitigate risk, a general lack of expertise in AI, and the increased costs of new infrastructure demands can act as a burden when implementing AI.
How can companies cope with the privacy challenges that arise when, for example, they feed more and more personal data into AI to create new information? In brief, the right to privacy applies to AI. Companies should therefore work out strategies to balance the right to privacy against the economic and public-interest objectives AI may serve. In particular, companies should start with a documented assessment of their starting situation, their objectives in using AI and the privacy challenges identified. On this basis, they can set up a compliance structure for handling AI and privacy.
- The proposal for the Regulation as such does not require immediate action. The Regulation will have to go through the EU legislative process, which is likely to take at least a year in view of the complexity of the issue and the associated need for (probably controversial) discussions, in which companies can get involved (e.g. through lobby or interest groups).
- Nevertheless, the proposal is worth reading even at its current stage because it indicates the direction in which the EU Commission is thinking, which appears to be in terms of bans and penalties. The need to promote innovation and establish a secure legal framework is only partially addressed by the current proposal, not to mention the ethical dimension, which is still far from settled.
- For the time being, practitioners dealing with AI will need to continue to rely on “creative solutions” based on the instruments of intellectual property rights, data protection law, cybersecurity law, contract drafting and product liability, thus filling the AI-specific legal vacuum.
Please contact us if you would like to discuss this further.