The explosive growth in the use of AI technologies has led to a worldwide debate on the legal and ethical issues surrounding AI and the role of law in regulating it. Just as there seems to be no limit to where AI can be applied, there is no clearly bounded area of law that governs it. The fact that we are creating artificial intelligence that can in some ways match or even surpass human intelligence means that there are important human rights considerations to be addressed.
The evolution of AI
When we think about what AI actually is, it is important to recognize that—at different times—the term has been associated with different technologies and capabilities.
In the early years of AI, software was programmed to perform specific actions, defined by software engineers, when it received specific inputs. This meant the actions of an AI system were clear, predictable and deterministic. If there was a mistake, it was a mistake in the code, and responsibility could be apportioned directly to those who created it.
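This early, rule-based style of AI can be pictured as a minimal sketch: every input maps to a predetermined action through hand-written rules, so the outcome is fully traceable to the engineers who wrote them. The function name and thresholds below are illustrative, not drawn from any real system.

```python
def loan_decision(income: int, has_defaulted: bool) -> str:
    """Deterministic, hand-coded rules: the same inputs always
    produce the same output, and any error is an error in the code."""
    if has_defaulted:
        return "reject"          # explicit rule written by an engineer
    if income >= 50_000:
        return "approve"         # explicit threshold chosen by an engineer
    return "refer to human"      # fallback defined in advance
```

Because every branch was written by a person, accountability for each outcome is straightforward to assign.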
More recent types of AI are built using neural networks and similar technologies, which process huge data sets to find patterns, perform statistical analysis and generate predictions. This is the key to understanding how a neural network "makes a decision": it predicts what the corresponding decision should be, based on the examples already available to it.
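The contrast with rule-based systems can be sketched in a few lines: here no decision rule is written by hand; the "decision" is inferred statistically from labelled past examples (a 1-nearest-neighbour lookup, the simplest possible stand-in for a trained model). All data and names below are hypothetical.

```python
def predict(examples, query):
    """Return the label of the past example most similar to the query.
    The 'decision' emerges from the data, not from explicit rules."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(examples, key=lambda ex: distance(ex[0], query))
    return nearest[1]

# Hypothetical past cases: (income, prior defaults) -> outcome
past_cases = [((30_000, 1), "reject"), ((80_000, 0), "approve"),
              ((75_000, 0), "approve"), ((25_000, 1), "reject")]

print(predict(past_cases, (70_000, 0)))  # prints "approve"
```

Note that no line of this code states *why* an applicant is approved or rejected; the rationale lives in the training data, which is exactly what makes such decisions harder to explain and to attribute.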
This automated decision-making constitutes the core of the problem of AI in a legal context, and it has important implications for business. From a legal perspective, there is no longer a direct relationship between the decisions of an AI system and those who created its underlying software.
The EU Commission has defined AI as software that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with. This raises the concern that modern AI systems—based on neural networks and machine learning, in contrast to pre-programmed algorithms—can be far more opaque, specifically in terms of (i) how an AI system processes information and takes decisions, and (ii) who, or which entity, should be held liable for such decisions.
AI and the law: An evolving relationship
Andrea Miskolczi, Europe Director of Innovation, Dentons:

“Artificial intelligence and the law have a much earlier connection than many realize. Did you know that in the 1980s, legal reasoning was one of the major fields of research of artificial intelligence scholars? Many believed that a rule-based approach—using symbolic logic, rules engines, expert systems, and knowledge graphs—was the most promising path to developing artificial intelligence. Therefore, early research interest in AI focused on such rule-based systems.
The idea of modelling legal thinking and compliance with such types of technology appeared to be an ideal field to build an autonomous reasoning machine. But despite many research projects, the efforts were unsuccessful. By the beginning of the 1990s, artificial intelligence experts came to realize that they had underestimated the complexity of legal thinking.
Nowadays, law and artificial intelligence have a new type of close relationship. We are looking to the law to set boundaries on the prevailing type of AI technology—which uses an approach based on data (machine learning, deep learning, neural networks, natural language processing, etc.)—in order to ensure that privacy and other human rights are protected.”
Accountability for decision-making
A fundamental idea related to justice in the democratic sense of the word is that the one who makes a decision must also be held accountable for that decision. In other words, the person (or thing) making the decision must be a capable “social agent.”
This leads to the general principle that whenever an AI solution is used for decision-making, it must be possible for a human being to challenge or override such a decision. There are some types of decisions that humans should simply not delegate to artificial intelligence, such as the administration of justice or other decisions requiring emotional intelligence and ethical consideration.
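The "human override" principle above can be pictured as a minimal sketch: an AI suggestion is never final, and a human reviewer can always challenge or replace it. The function and its parameters are hypothetical, intended only to illustrate the control flow such a safeguard implies.

```python
from typing import Optional

def final_decision(ai_suggestion: str, human_override: Optional[str]) -> str:
    """The human's decision, when given, always takes precedence over
    the AI's suggestion; otherwise the suggestion stands."""
    return human_override if human_override is not None else ai_suggestion
```

In practice such a safeguard also requires logging both the AI suggestion and the override, so that accountability for the final decision rests with an identifiable person.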
Protecting fundamental rights
An AI system’s logic may not always coincide with human logic or values, so our legal systems must ensure that AI does not go against our societal values and human rights.
European institutions have consistently called for a joint European approach to AI, grounded on fundamental human rights and ethical principles. The Feasibility Study of the Council of Europe’s Ad Hoc Committee on Artificial Intelligence (CAHAI) identified a number of legal challenges and principles that derive from the European Convention on Human Rights. Likewise, the High Level Expert Group set up by the European Commission has published the Ethics Guidelines for Trustworthy AI and the Assessment List for Trustworthy AI (ALTAI).
Ensuring the development of robust and trustworthy AI is not just the responsibility of governments, but also of businesses. Businesses will need to follow a “security and human rights by design” approach throughout the AI value chain. This may lead to, among other things, new controls, disclosure and transparency obligations. AI sustainability may in fact become a new frontier for corporate Environmental, Social and Governance (ESG) initiatives and reporting.