This Q&A is a follow-up to the webinar “Challenges and opportunities of AI. Overcoming the lack of trust in AI” of March 9, 2022. The questions were directed to Brando Benifei, a co-rapporteur of the European Parliament, and his answers are summarized below. You can view the Q&A in full in this video:
Question 1: What is the role of a co-rapporteur on the AI Act?
A co-rapporteur is a member of the European Parliament who, in this case, is in charge of negotiating the text of the future AI Act with the European Commission and the Council of the EU.
By the end of 2022, the European Parliament’s position on the proposal of the European Commission should be set out, and in 2023 we will focus on negotiating the draft regulation with the other EU institutions. The expectation is to have the AI Act published by the end of 2023. Please note that this is a regulation, not a directive, so it will apply directly without needing to be transposed into national law.
Question 2: With the final AI Act anticipated in 2023, what do you think should and should not be changed in the current draft?
- We, as the European Parliament, will support an amendment, which we know the Council of the EU will present, on extending the social scoring ban to private entities. This would forbid the use of AI to classify citizens for illegitimate purposes—something which is occurring in other parts of the world. The European Parliament is also considering whether certification for some uses deemed high risk might deserve external scrutiny, given the complexity involved and the possible negative effects on the fundamental rights of citizens. Areas touching on life, health, family, or citizenship status, for example, should be considered high risk. We should have extra ex-ante certification in such cases.
- We are thinking of amendments to the way sandboxes are being designed, i.e. the spaces for innovation and experimentation outside of the legislative framework to provide greater flexibility for companies.
- One thing that we do not want to change, and that is already in the draft Act, is the ban on real-time biometric recognition. We will look at the exemptions that are there for law enforcement, but we don’t want this practice to be generally adopted, as that may create a lot of problems in terms of control over the life of citizens.
- I am also convinced that the Council of the EU, as well as the European Parliament, will support a modification of the European Commission’s proposal that requires data sets for high-risk AI systems to be free of errors. We know that this error-free requirement may become an excessive and challenging demand for companies, so we are aiming at not leaving that formulation in place, because we think we cannot request error-free data sets.
So as you see, there are aspects where we as the European Parliament expect to be in line with the Council of the EU, and others where we will make proposals for amendments that we will then need to negotiate. There are also points where we are and will remain in line with the European Commission, such as the biometric surveillance ban.
Question 3: What are the biggest challenges you face when dealing with regulating AI?
Artificial intelligence is not something that is yet fully known or developed. This is one of the biggest challenges. Scientists and engineers themselves are often amazed by the results produced by AI systems. In addition, AI covers different sectors and, as you know, the European Parliament normally works via committees, such as the Economic Affairs Committee, the Agriculture Committee and the Employment Committee, for example; and in this case we have to regulate something that will have an impact on all sectors, which is why not only the negotiations, but also the drafting of the Act itself, are particularly complex. More than ever it’s important to consult with all stakeholders, both from industry and civil society, in order to have a comprehensive picture of the possible unexpected effects of AI and its regulation.
Certainly, the GDPR has prepared this field for a type of regulation based more on principles (such as accountability and transparency) and on risk-based approaches, which are useful when it’s not possible to regulate very specifically every single sector. So, a crucial point will be to have legislation that is adaptable, that has flexibility to be updated when things change. And there will be change, as AI is still developing and this fast-paced development is likely to continue.
The AI special committee, of which I am a member, will soon produce a paper on AI, which will be very encompassing and will not only explain the direction we take with AI legislation but will also set out other legislation that will come—for example, on liability for AI products, which is not part of the draft AI Act. Another example of further legislation needed is one addressing the issue of permanent training and lifelong learning on AI, which is also only partially taken care of in this draft AI Act. The European Parliament has been working from both a political point of view as well as on the actual legislation.
Question 4: Do you agree with the opinion that the European Union is losing the AI race? What can be done to help companies in the EU in terms of adoption of AI?
The European Parliament is supportive of the trust-based approach to AI that the European Commission has developed. We think we can achieve a competitive model based on our fundamental values that is also capable of building trust in a complex context. So we are going in this direction, and the European Parliament, in its paper, will go in a similar direction as the European Commission. We are also convinced that we can develop healthy competition and influence other models in the world by being true to our values; that is, by putting the human being at the center—the human-centric approach. So we need to be ready to develop incentives, to reserve spaces for easier development of AI, and to work on developing the AI ecosystems that we have seen around Europe. We need to put enough investment in, because it cannot be only about regulation. We need both public and private investment, and we need the regulation itself to help support investment.
It is also important that we look at the issue of our digital sovereignty and how we develop our model for AI and the digital space, because that is part of how we can be a strong Europe in a difficult world like the one we are in now.
Question 5: What are your views on the definition of AI system in Article 3?
We are looking into how to adjust the definition in the sense that we want to be sure there are no limitations regarding algorithms—because we not only need to be ambitious in the regulation, we also need to be clear on what is part of the regulation and what is not, and I think we will push for some general indication on algorithmic safety in the field of health. However, we cannot simply reduce AI to algorithms, so we need to be careful, and we will probably make some adjustments. But when I say adjustments, it is because we are not convinced that we need to fundamentally change the definition proposed by the European Commission. We need to think about it but not change it radically—we think it is already quite balanced.
Question 6: What can be done to avoid the conformity assessment becoming a mechanical check list?
We obviously need to put constraints and legal controls in place in relation to those that do the certification for themselves—the self-certification. There must be checks, there must be resources for the public institutions to be able to guarantee safety, and we must be able to verify the realities of this self-assessment, with clear responsibilities along the chain of AI use. These can be ways to avoid the risk of having a mechanical checklist.
The positions in this article and video are those expressed by Mr Benifei at the time of the Dentons AI webinar on March 9, 2022, and not all of them could be reflected in the AI draft report presented on April 11, 2022 (see more details here: Leading MEPs raise the curtain on draft AI rules – EURACTIV.com).
Brando Benifei is an Italian member of the European Parliament from La Spezia serving his second term and is the Head of Delegation of Partito Democratico in the European Parliament. He is a member of the Special Committee on Artificial Intelligence in a Digital Age and has recently been appointed co-Rapporteur for the Artificial Intelligence Act in the Committee on Internal Market and Consumer Protection (IMCO). Home | Brando BENIFEI | Deputati | Parlamento Europeo (europa.eu)