Updated in April 2021.
The issue of data privacy is often among the first risks that come to mind in relation to artificial intelligence, since developing, testing and implementing AI technologies often involve the processing of personal data. According to Dentons’ AI survey, 81 percent of respondents cited personal data protection as a significant concern.
Users of AI technology that operate in the European Union, or that serve customers in Europe, need to be aware of the EU data protection framework, most notably the General Data Protection Regulation (GDPR). In addition, on April 21, 2021, the European Commission published its “Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts” (the Draft AI Act), which also has important data protection implications. The Slovenian Presidency of the Council of the European Union published its compromise text of the Draft AI Act on November 29, 2021.
A risk-based approach to privacy
Both the GDPR and the Draft AI Act take a risk-based approach to privacy.
According to the GDPR, where the processing of personal data is likely to result in a high risk to the rights and freedoms of individuals (which may be the case when AI systems are used), the controller is required to carry out a data protection impact assessment (DPIA) to evaluate the likelihood and severity of that risk. It must then determine possible mitigating measures.
Similarly, the Draft AI Act distinguishes uses of AI that create (i) an unacceptable risk, (ii) a high risk, and (iii) low or minimal risk. Providers of AI systems that are likely to pose high risks to the safety or fundamental rights of individuals must conduct an upfront risk assessment.
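To make this tiering concrete, the short Python sketch below shows how an organization might record a first, purely internal triage of its AI use cases against the three tiers. It is an illustration only: the tier names follow the Draft AI Act, but the example mapping of use cases to tiers is hypothetical, and any actual classification must be made against the Act itself and its annexes.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """Risk tiers distinguished by the Draft AI Act."""
    UNACCEPTABLE = "prohibited AI practice"
    HIGH = "high-risk AI system (HRAIS)"
    LOW_OR_MINIMAL = "low or minimal risk"

# Hypothetical triage table for a first internal screening only; the
# legally binding classification follows from the Act and its annexes.
USE_CASE_TIERS = {
    "social scoring": AIActRiskTier.UNACCEPTABLE,
    "cv screening for recruitment": AIActRiskTier.HIGH,
    "creditworthiness scoring": AIActRiskTier.HIGH,
    "spam filtering": AIActRiskTier.LOW_OR_MINIMAL,
}

def triage(use_case: str) -> AIActRiskTier | None:
    """Return the provisional tier, or None if legal review is needed."""
    return USE_CASE_TIERS.get(use_case.lower())
```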
Artificial intelligence and the right to privacy
Guido Scorza, Commissioner of the Italian Data Protection Authority
“While we cannot block the spread of artificial intelligence, we need to govern and orient it. If we leave the market free to regulate itself—which is largely what has happened up until now—there is a risk that everything that is technologically possible could also be considered legally legitimate. This cannot be the case in our democratic societies.
Fundamental rights, starting with the right to privacy, must represent a natural constraint in the design and implementation of artificial intelligence solutions and must be embedded by design and by default.
In large part, the General Data Protection Regulation marks this path and equips supervisory authorities with tools to enforce the rules. Adding to this, I believe we should extend the obligation to respect the same principles of privacy by design and by default to manufacturers of smart devices and providers of smart services. This would avoid the risk of data controllers and data processors being given instruments with which to circumvent European rules on data protection.”
In their joint opinion on the Draft AI Act, the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) stressed that the classification of an AI system as “high-risk” creates a presumption of “high-risk” under the GDPR as well. This will trigger the need for a DPIA in addition to the conformity assessment under the Draft AI Act.
As risk assessments and DPIAs stem from different regulations, businesses should not assume that a High-Risk AI System (or HRAIS) that is admissible under the Draft AI Act is therefore also lawful under the GDPR. This will require a separate legal analysis, although the tools used for DPIAs may be revised to be suitable for the risk assessment under the Draft AI Act as well. Users of AI systems may use the information from the AI risk assessment done by the provider of the AI system as input for their DPIA.
Prohibited uses of AI
Separate from HRAIS, the Draft AI Act sets out a list of “prohibited AI practices.” Businesses should be aware that certain broadly described practices are prohibited unless a statutory exception applies. The following are examples of prohibited AI practices:
- Use of an AI system deploying subliminal techniques, or exploiting the vulnerabilities of a specific group of persons (due to their age, disability or socio-economic situation), to materially distort behavior and cause physical or psychological harm;
- Any type of social scoring practices that could lead to discriminatory outcomes and the exclusion of a certain group.
In this context, we note that the compromise text of the Draft AI Act, released on November 29, 2021 by the Slovenian Presidency of the Council of the European Union, broadens the scope of prohibited AI practices. It prohibits real-time (and not only remote) biometric identification of individuals in publicly accessible spaces by or on behalf of law enforcement authorities, except in specific circumstances. It also prohibits social scoring practices performed over a certain period of time, regardless of whether they are carried out by public authorities or private actors; previously, only the public sector was targeted.
High-Risk AI Systems
The updated text of the Draft AI Act also provides a comprehensive list of HRAIS. Businesses should be aware that this list includes, among others, AI systems that are:
- Intended to be used for the recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications;
- Intended for making decisions on promotions and terminations of work-related contractual relationships, for task allocation based on individual behavior or personal traits or characteristics, and for monitoring and evaluating performance and behavior of persons in such relationships;
- Intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small-scale providers for their own use;
- Intended to be used for setting insurance premiums, underwriting and assessing claims.
AI system providers located outside the EU
An important obligation for providers located outside the European Union offering AI systems in the EU will be to appoint by written mandate an authorized representative established in the EU. The GDPR has a similar obligation, which is regularly overlooked. It will be interesting to see if providers will be able to find representatives willing to take on the responsibilities conferred upon them under the Draft AI Act. For GDPR representatives, it has been our experience that this is not always easy.
Transfer of personal data outside the European Economic Area
The use of AI technologies in many cases involves cross-border transfers of personal data. Where personal data is transferred from the European Economic Area (EEA) to a country outside the EEA, the GDPR sets additional requirements. Businesses will have to implement a transfer mechanism (such as the new Standard Contractual Clauses adopted by the European Commission). They will also need to assess the laws and practices of the third country relating to access by public authorities to the transferred personal data. They may be further required to implement supplementary measures on top of the Standard Contractual Clauses.
Outside the EEA, more and more countries are adopting stricter rules on cross-border data transfers, including obligations to obtain consent, to ensure contractual safeguards and to comply with data localization requirements.
Transparency
Transparency of personal data processing is one of the GDPR’s key principles, and it applies in full to any AI technologies that process such data. Achieving the required level of transparency can be a challenge with AI, owing to its highly technical and complex nature, which makes it difficult to understand how data is processed through the system. In addition, personal data is processed in AI systems in two ways: data serves as an input (e.g. for training), and the AI system produces data as an output. Some AI systems then use that output data to make automated decisions in individual cases (such as in evaluations of creditworthiness). This may trigger additional GDPR requirements around automated decision-making; moreover, it requires users of an AI system to be mindful of the different ways the AI system can process personal data.
Furthermore, individuals may not always expect (let alone understand) the different ways an AI system processes their personal data. For this reason, it is crucial to provide users in advance with concise, transparent, intelligible information, in an easily accessible form, and using clear and plain language. Transparency is not just a GDPR requirement. The Draft AI Act contains separate transparency obligations, such as to ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system (unless this is obvious from the circumstances and the context of use).
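By way of a purely illustrative sketch of what such transparency by design might look like in practice, a conversational AI system could surface the required disclosure before the first exchange rather than burying it in a policy document. The function name, wording and placeholder URL below are hypothetical; the Draft AI Act prescribes the obligation, not any particular implementation.

```python
# Hypothetical disclosure shown before a chat session starts; the
# wording and the placeholder URL are illustrative, not prescribed text.
AI_DISCLOSURE = (
    "You are interacting with an automated AI system, not a human. "
    "Details on how your personal data is processed: <privacy notice URL>."
)

def start_session(obviously_ai_from_context: bool = False) -> list[str]:
    """Open a chat session, disclosing the AI nature of the system up front.

    The Draft AI Act allows the disclosure to be omitted only where this
    is obvious from the circumstances and the context of use.
    """
    messages = []
    if not obviously_ai_from_context:
        messages.append(AI_DISCLOSURE)
    messages.append("How can I help you today?")
    return messages

if __name__ == "__main__":
    for line in start_session():
        print(line)
```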
Practical measures for businesses
1. Perform risk assessments on AI systems
Providers of HRAIS must do an upfront risk assessment. Depending on whether the provider also processes the personal data that runs through the AI system, it will likely also need to do a DPIA under the GDPR. It may be worthwhile to revise the existing tools used for DPIAs and expand them to capture the requirements for risk assessments under the Draft AI Act. Users of HRAIS can, in turn, use the risk assessment by the provider as input for their DPIA.
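One way to operationalize this overlap is to keep a single assessment record covering both the GDPR DPIA fields and the additional Draft AI Act fields, so that neither exercise is done in isolation and the provider’s risk assessment can be referenced directly. The Python sketch below is illustrative shorthand of our own; the field names are not terms defined by either regulation.

```python
from dataclasses import dataclass, field

@dataclass
class CombinedRiskAssessment:
    """Illustrative record merging a GDPR DPIA with a Draft AI Act risk
    assessment; all field names are hypothetical shorthand."""
    system_name: str
    # GDPR DPIA side
    processing_purposes: list[str] = field(default_factory=list)
    likelihood_of_risk: str = "unassessed"   # e.g. low / medium / high
    severity_of_risk: str = "unassessed"
    mitigating_measures: list[str] = field(default_factory=list)
    # Draft AI Act side
    is_hrais: bool = False
    provider_risk_assessment_ref: str | None = None  # provider's input, reused
    conformity_assessment_done: bool = False

    def open_points(self) -> list[str]:
        """Flag items that still need legal review."""
        gaps = []
        if self.is_hrais and not self.conformity_assessment_done:
            gaps.append("conformity assessment under the Draft AI Act")
        if "unassessed" in (self.likelihood_of_risk, self.severity_of_risk):
            gaps.append("GDPR DPIA risk evaluation")
        return gaps
```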
2. Adopt internal procedures and practices to ensure compliance with the Draft AI Act
We recommend that all stakeholders in the AI supply chain implement internal policies to ensure substantive compliance with the Draft AI Act and GDPR. Such policies could, for example, establish rules on which forms of AI should be avoided within the organization. This is not just a legal assessment, but also requires ethical considerations. In addition to the above, adequate controls on the quality of the data sets, from training data to output data, should be implemented.
3. Implement data transfer mechanisms and transfer impact assessments
Where the use of an AI system involves cross-border data transfers, users should carefully consider which data transfer rules apply, and should implement and properly document the appropriate data transfer mechanisms. If the GDPR applies to the data processing and the personal data is transferred to a non-EEA country, a transfer impact assessment should be performed and properly documented.
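As a rough illustration of how such transfers might be inventoried and checked, the sketch below flags non-EEA destinations that still lack a transfer mechanism or a documented transfer impact assessment. The record structure is hypothetical and the EEA country list is deliberately truncated; a real check would need the full list and legal review of each flow.

```python
from dataclasses import dataclass

# Truncated for illustration; a real check needs the full EEA list.
EEA = {"DE", "FR", "NL", "IT", "ES", "PL", "NO", "IS", "LI"}

@dataclass
class TransferRecord:
    """Hypothetical documentation entry for one cross-border data flow."""
    destination_country: str         # ISO 3166-1 alpha-2 code
    transfer_mechanism: str | None   # e.g. "SCCs (2021)" or "adequacy decision"
    tia_documented: bool             # transfer impact assessment on file?

def review(records: list[TransferRecord]) -> list[str]:
    """Return findings for transfers that still need attention."""
    findings = []
    for r in records:
        if r.destination_country in EEA:
            continue  # intra-EEA flows need no Chapter V transfer mechanism
        if r.transfer_mechanism is None:
            findings.append(f"{r.destination_country}: no transfer mechanism")
        if not r.tia_documented:
            findings.append(f"{r.destination_country}: TIA not documented")
    return findings

# Example: review([TransferRecord("US", "SCCs (2021)", tia_documented=False)])
```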
4. Be transparent
Both the GDPR and the Draft AI Act contain transparency requirements. Users of AI systems can more easily meet these requirements if AI systems are built on a transparency-by-design approach, meaning that transparency is embedded in the system. When preparing privacy or other information notices, be sure to use plain language and provide meaningful information. This is increasingly required by supervisory authorities, which have already fined several companies for insufficiently clear information notices.
***