
The legal perspective on artificial intelligence: An evolving relationship

By Vladislav Arkhipov and Victor Naumov
December 2021

The explosive growth in the use of AI technologies has triggered a worldwide debate on the legal and ethical issues surrounding AI and on the role of law in regulating it. Just as there seems to be no limit to where AI can be applied, there is no single branch of law to which AI is confined. And because we are creating artificial intelligence that in some respects can match or even surpass human intelligence, important human rights considerations need to be addressed.

The evolution of AI

When we think about what AI actually is, it is important to recognize that—at different times—the term has been associated with different technologies and capabilities.

In the early years of AI, software was programmed to perform specific actions, defined by software engineers, whenever it received specific inputs. The behavior of such a system was clear, predictable and deterministic. If there was a mistake, it was a mistake in the code, and responsibility could be apportioned directly to those who created it.

More recent types of AI are built on neural networks and similar technologies, which process huge data sets to find patterns, perform statistical analysis and generate predictions. This is key to understanding how a neural network can “make a decision”: it predicts what the corresponding decision should be, based on the examples already available to it.
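
To see concretely what “deciding by example” means, here is a minimal, hypothetical sketch (ours, not the authors’; the loan-approval scenario, features and data are invented, and scikit-learn’s MLPClassifier stands in for “a neural network”):

    # A toy neural network that is never given explicit rules: it infers a
    # decision boundary from labelled past examples (all data hypothetical).
    from sklearn.neural_network import MLPClassifier

    # Past loan applications: [annual income in k EUR, number of open debts]
    X_train = [[20, 3], [25, 4], [60, 1], [80, 0], [30, 5], [90, 1]]
    y_train = [0, 0, 1, 1, 0, 1]  # 0 = rejected, 1 = approved

    model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)  # statistical pattern-finding, not coded rules

    # The "decision" for a new applicant is a prediction from past examples.
    print(model.predict([[55, 2]]))  # e.g. [1] -- approved
    # Nothing in this code states *why*; the rationale is implicit in the
    # learned weights, which is exactly the opacity the law must grapple with.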

This automated decision-making lies at the core of the legal problem posed by AI, and it has important implications for business. From a legal perspective, there is no longer a direct relationship between the decisions of an AI system and the people who created its underlying software.

The EU Commission has defined AI as software that can, for a given set of human-determined objectives, generate outputs such as content, predictions, recommendations or decisions suitable to influence the environments they interact with. This raises the concern that modern AI systems—based on neural networks and machine learning, in contrast to pre-programmed algorithms—can be much more opaque and raise more questions, specifically in terms of (i) how an AI system processes information and takes decisions, and (ii) who/which entity should be held liable for such decisions.


AI and the law: An evolving relationship

Andrea Miskolczi, Europe Director of Innovation, Dentons

“Artificial intelligence and the law have a much earlier connection than many realize. Did you know that in the 1980s, legal reasoning was one of the major fields of research of artificial intelligence scholars? Many believed that a rule-based approach—using symbolic logic, rules engines, expert systems, and knowledge graphs—was the most promising path to developing artificial intelligence. Therefore, early research interest in AI focused on such rule-based systems.
The idea of modelling legal thinking and compliance with such types of technology appeared to be an ideal field to build an autonomous reasoning machine. But despite many research projects, the efforts were unsuccessful. By the beginning of the 1990s, artificial intelligence experts came to realize that they had underestimated the complexity of legal thinking.
Nowadays, law and artificial intelligence have a new type of close relationship. We are looking to the law to set boundaries on the prevailing type of AI technology—which uses an approach based on data (machine learning, deep learning, neural networks, natural language processing, etc.)—in order to ensure that privacy and other human rights are protected.”
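
To illustrate the contrast Miskolczi draws, a rule-based system in the 1980s style might encode legal reasoning as explicit, inspectable if-then rules. The sketch below is a toy example of our own; the capacity rules are simplified inventions, not actual law:

    # Rule-based "legal reasoning": every outcome traces to a specific rule,
    # unlike a trained model whose logic is buried in learned parameters.
    def may_sign_contract(age: int, declared_incapable: bool) -> bool:
        # Rule 1 (hypothetical): a party must be of legal age, assumed 18.
        if age < 18:
            return False
        # Rule 2 (hypothetical): a party declared legally incapable cannot contract.
        if declared_incapable:
            return False
        # Default: no disqualifying rule fired, so capacity is presumed.
        return True

    print(may_sign_contract(age=25, declared_incapable=False))  # True
    print(may_sign_contract(age=16, declared_incapable=False))  # False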

Accountability for decision-making

A fundamental idea of justice, in the democratic sense of the word, is that whoever makes a decision must also be held accountable for it. In other words, the person (or thing) making the decision must be a capable “social agent.”

This leads to the general principle that whenever an AI solution is used for decision-making, it must be possible for a human being to challenge or override such a decision. There are some types of decisions that humans should simply not delegate to artificial intelligence, such as the administration of justice or other decisions requiring emotional intelligence and ethical consideration.
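
One way to picture this principle in a system design is an architecture in which the AI only ever proposes a decision and a human review step finalizes or overrides it. The sketch below is a minimal illustration under our own assumptions; every name in it is hypothetical:

    # Human-in-the-loop pattern: the AI proposes, a human disposes.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Decision:
        outcome: str         # e.g. "approve" or "reject"
        automated: bool      # True if produced by the AI system
        final: bool = False  # only human review can make a decision final

    def ai_propose(score: float) -> Decision:
        # The AI system suggests an outcome but can never finalize it.
        return Decision(outcome="approve" if score > 0.5 else "reject",
                        automated=True)

    def human_review(proposal: Decision, override: Optional[str] = None) -> Decision:
        # A human may accept the AI's proposal or substitute their own outcome.
        outcome = override if override is not None else proposal.outcome
        return Decision(outcome=outcome, automated=False, final=True)

    proposal = ai_propose(score=0.42)                   # AI suggests "reject"
    final = human_review(proposal, override="approve")  # human overrides
    print(final)  # Decision(outcome='approve', automated=False, final=True)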

Protecting fundamental rights

An AI system’s logic may not always coincide with human logic or values, so our legal systems must ensure that AI does not go against our societal values and human rights.

European institutions have consistently called for a joint European approach to AI, grounded in fundamental human rights and ethical principles. The Feasibility Study of the Council of Europe’s Ad Hoc Committee on Artificial Intelligence (CAHAI) identified a number of legal challenges and principles deriving from the European Convention on Human Rights. Likewise, the High-Level Expert Group set up by the European Commission has published the Ethics Guidelines for Trustworthy AI and the Assessment List for Trustworthy AI (ALTAI).

Ensuring the development of robust and trustworthy AI is not just the responsibility of governments, but also of businesses. Businesses will need to follow a “security and human rights by design” approach throughout the AI value chain. This may lead to, among other things, new controls and new disclosure and transparency obligations. AI sustainability may in fact become a new frontier for corporate Environmental, Social and Governance (ESG) initiatives and reporting.

***

This article is a chapter from Dentons’ Artificial Intelligence Guide 2022, where the other chapters and the full guide are available.


About Vladislav Arkhipov

Vladislav Arkhipov is a Counsel in Dentons’ Russian IP, IT and Telecommunications practice.



About Victor Naumov

Victor Naumov is Managing Partner of Dentons’ St. Petersburg office, Head of the Russia IP, IT and Telecommunications practice, and Co-Head of Europe Internet & Tech Regulatory.


Related Articles

  • Innovation vs. regulation: How governments are responding to the challenges presented by AI, by Simon Elliott, Amy Gault, and James Fox
  • AI strategy: Six steps to create your artificial intelligence road map, by Amanda Lowe and Giangiacomo Olivi
  • Key challenges of artificial intelligence: Contracting for the purchase and use of AI, by Pieter Jan Aerts, Domien Kriger, and Giangiacomo Olivi
