The issue of data privacy is often among the first risks that come to mind in relation to artificial intelligence, since developing, testing and implementing AI technologies often involves the processing of personal data. According to Dentons’ AI survey, 81% of respondents cited personal data protection as a significant concern.
Users of AI technology that operate in the European Union or that serve customers in Europe need to be aware of the EU data protection framework, most notably the General Data Protection Regulation (GDPR). In addition, on April 21, 2021, the European Commission published its “Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts” (the “Draft AI Act”; the Slovenian Presidency of the Council of the European Union published its compromise text on November 29, 2021), which also has important data protection implications.
The risk-based approach to privacy
Both the GDPR and the Draft AI Act take a risk-based approach to privacy.
According to the GDPR, where the processing of personal data is likely to result in a high risk to the rights and freedoms of individuals (which may be the case when AI systems are used), the controller is required to carry out a data protection impact assessment (DPIA) to evaluate the likelihood and severity of that risk. It must then determine possible mitigating measures.
Similarly, the Draft AI Act distinguishes uses of AI that create (i) an unacceptable risk, (ii) a high risk, and (iii) low or minimal risk. Providers of AI systems that are likely to pose high risks to safety or the fundamental rights of individuals must do an upfront risk assessment.
Artificial intelligence and the right to privacy
Guido Scorza, Commissioner of the Italian Data Protection Authority:

“While we cannot block the spread of artificial intelligence, we need to govern and orient it. If we leave the market free to regulate itself—which is largely what has happened up until now—there is a risk that everything that is technologically possible could also be considered legally legitimate. This cannot be the case in our democratic societies.
Fundamental rights, starting with the right to privacy, must represent a natural constraint in the design and implementation of artificial intelligence solutions and must be embedded by design and by default.
In a large part, the General Data Protection Regulation marks this path and equips supervisory authorities with tools to enforce the rules. Adding to this, I believe we should extend the obligation to respect the same principles of privacy by design and by default to manufacturers of smart devices and smart service providers. This would avoid the risk that data controllers and data processors could have instruments to circumvent European rules on data protection.”
In their joint opinion on the Draft AI Act, the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) stressed that the classification of an AI system as “high-risk” creates a presumption of “high-risk” under the GDPR as well. This will trigger the need for a DPIA, in addition to the conformity assessment under the Draft AI Act.
As risk assessments and DPIAs stem from different regulations, businesses should not assume that a high-risk AI system that is admissible under the Draft AI Act is therefore also lawful under the GDPR. This will require a separate legal analysis, although the tools used for DPIAs may be revised to be suitable for the risk assessment under the Draft AI Act as well. Users of AI systems may use the information from the AI risk assessment done by the provider of the AI system as input for their DPIA.
Privacy-intrusive uses of AI
The Draft AI Act sets out a list of “prohibited AI practices.” Businesses should keep an eye on the proposed list and consider that the EDPB and EDPS call for broadening it to cover other privacy-intrusive AI uses, including:
- Any type of social scoring practices, not only when performed over a certain period of time or by public authorities or on their behalf, as the current proposal reads
- Remote biometric identification of individuals in publicly accessible spaces, meaning any automated recognition of human features such as faces, iris, fingerprints, DNA, voice, keystroke rhythms, etc.
- Use of AI to infer emotions of individuals
- AI systems categorizing individuals from biometrics into clusters according to ethnicity, gender, political or sexual orientation, or other features that may lead to discrimination.
In this respect, we note that the compromise text of the Draft AI Act released on November 29, 2021 by the Slovenian Presidency of the Council of the European Union proposes broadening the scope of prohibited AI practices. Among other changes, it would prohibit real-time (and not only remote) biometric identification of individuals in publicly accessible spaces by or on behalf of law enforcement authorities (except in specific circumstances), as well as social scoring practices performed over a certain period of time, regardless of whether carried out by public authorities or private actors.
Transfer of personal data outside the European Economic Area
The use of AI technologies in many cases involves cross-border transfers of personal data. Where personal data is transferred from the European Economic Area (EEA) to a country outside the EEA, the GDPR sets additional requirements. Businesses will have to implement a transfer mechanism (such as the new Standard Contractual Clauses adopted by the European Commission). They will also need to assess the laws and practices in the third country related to access by public authorities to the transferred personal data. Furthermore, they may be required to implement supplementary measures on top of the Standard Contractual Clauses.
Outside the EEA, a growing number of countries are adopting stricter rules on cross-border data transfers, including obligations to obtain consent, ensure contractual safeguards, and comply with data localization requirements.
Transparency
Transparency of personal data processing is one of the GDPR’s key principles, and it applies in full to any AI technologies that process such data. Achieving the required level of transparency can be a challenge with AI, given its highly technical and complex nature, which makes it difficult to understand how data is processed through the system. In addition, personal data is processed in AI systems in two ways: data serves as input (e.g. for learning), and the AI system produces data as an output. Some AI systems then use that output data to make automated decisions in individual cases (such as evaluations of creditworthiness). This may trigger additional GDPR requirements around automated decision making; moreover, it requires users of an AI system to be mindful of the different ways the system can process personal data.
Furthermore, individuals may not always expect (let alone understand) the different ways an AI system processes their personal data. For this reason, it is crucial to provide users in advance with concise, transparent, intelligible information, in an easily accessible form, and using clear and plain language. Transparency is not just a GDPR requirement. The Draft AI Act contains separate transparency obligations, such as to ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system (unless this is obvious from the circumstances and the context of use).
Practical measures for businesses
1. Perform risk assessments on AI systems
Providers of high-risk AI systems must carry out an upfront risk assessment. Depending on whether the provider also processes the personal data that runs through the AI system, it will likely also need to perform a DPIA under the GDPR. It may be worthwhile revising existing DPIA tools and expanding them to capture the requirements for risk assessments under the Draft AI Act. Users of high-risk AI systems can, in turn, use the provider’s risk assessment as input for their DPIA.
2. Adopt internal procedures and practices to ensure compliance with the Draft AI Act
We recommend that all stakeholders in the AI supply chain implement internal policies to ensure substantive compliance with the Draft AI Act and GDPR. Such policies could, for example, establish rules on which forms of AI should be avoided within the organization. This is not just a legal assessment, but also requires ethical considerations. In addition to the above, adequate controls on the quality of the data sets, from training data to output data, should be implemented.
3. Implement data transfer mechanisms and transfer impact assessments
Where there are cross-border data transfers when using an AI system, users should carefully consider which data transfer rules apply and implement and properly document the appropriate data transfer mechanisms. If the GDPR applies to the data processing, and the personal data is transferred to a non-EEA country, a transfer impact assessment should be done and properly documented.
4. Be transparent
Both the GDPR and the Draft AI Act contain transparency requirements. Users of AI systems can more easily meet these requirements if AI systems follow a transparency-by-design approach, meaning that transparency is embedded in the system. When preparing privacy or other information notices, be sure to use plain language and provide meaningful information. Supervisory authorities increasingly insist on this and have already fined several companies for insufficiently clear information notices.