Key challenges of artificial intelligence: Liability for AI decisions

By Nieves Briz and Allison Bender
December 2021

One of the legal issues that is always on the table when discussing AI is liability. It raises a number of concerns among regulators, businesses and customers alike.

What happens if an autonomous car crashes? Who is to blame? What kind of damages can be claimed?

If a healthcare worker follows the recommendation of an AI-based tool to treat a patient, who would bear liability for any treatment injury?

These concerns were echoed in our survey: 80% of respondents indicated that uncertainty about who is liable for the acts and omissions of an AI system was a cause for concern. Furthermore, 81% of respondents believe that legislation is needed to clarify the issue of liability in the context of AI, 46% of whom described that need as urgent.

Beyond the law

The discussion on liability regarding AI systems is not a strictly legal debate. The “Civil liability regime for artificial intelligence” study commissioned by the European Parliament found that regulations on liability have a substantial economic and social impact and play an important role in the development of AI. Liability rules create incentives to reduce risk and avoid engaging in risky activities. They also provide legal certainty, which helps companies to evaluate risks correctly and design their products accordingly, which in turn encourages innovation. Lastly, a clear liability framework enhances public trust in AI systems.

Factors to consider in assessing liability

In our survey, almost half (49%) of respondents expressed the opinion that there should be some form of joint liability for acts and omissions of an AI system.

In practice, the answer is more complicated. When establishing a liability regime for AI systems, any regulator or legislator will face the following challenges.

Autonomy

AI systems have the ability to act autonomously. Although an AI system's objectives are human-defined, autonomy and the ability to learn are defining features of this type of technology. As such, there may be cases where the decisions or actions of the AI cannot be foreseen.

From a liability perspective, this has a direct impact on a basic requirement for liability: causation, and the resulting difficulty of attributing responsibility to a specific party. If a system decides and learns autonomously, who is responsible for its decisions and, more importantly, for the harm those decisions may cause?

Connectivity

AI systems need to be interconnected with other systems, and they are dependent on data in order to function and/or learn. If we take autonomous cars as an example, the car communicates with other cars, traffic signals, road signs, etc.

This connectivity significantly increases the number of participants in the value chain needed to obtain the result, which in turn makes that chain more complex. Moreover, those participants, although interconnected, cannot control what the other parties do with their data and are unlikely to be willing to assume responsibility for results or acts they have not directly carried out. From a liability perspective, this makes it difficult to determine where the damage originated and which actor is responsible for it.

Opacity

AI systems are not very transparent in terms of how they function and perform, and are usually described as opaque. This opacity has a significant impact on liability. First, the learning ability of artificial intelligence makes these systems very difficult to understand, requiring effort and technical capability that are prohibitively expensive and, in practice, almost impossible to marshal. Second, in the event of a liability claim, the parties that hold the data and algorithms (and who may be able to explain how an action led to harm) have no incentive to share that information, since they could end up being liable.

This “black-box effect” makes it difficult to trace the decision-making process of these kinds of systems. Therefore, establishing the link between an act or omission and the damage caused is also difficult. The victim may not be able to make a claim since they do not know which party was controlling the AI system and which element (software, data, etc.) caused the harm.

This opacity also has a direct impact on the burden of proof, a key aspect of any liability system. The general rule is that the claimant has to prove the damage and the causal link to the defendant's act or omission. Applied to AI systems, this rule would be detrimental to victims, who could find it very difficult to prove anything and might therefore be unable to obtain adequate compensation.

Liability regimes are diverse

Last, but not least, one of the main challenges that any legislator will face when regulating liability for artificial intelligence is that liability regimes are mostly national, and those national systems are complex and diverse. National liability rules stem from long-established legal traditions and are often derived from a combination of laws, general principles and case law.

For example, in the United States, product liability under common law permits the recovery of damages when parties are injured by products that are not “reasonably safe” due to defective design, manufacturing, or warning.

Under US law, where AI is incorporated into software that is considered a medical device, traditional product liability would apply to the manufacturer of that device in the case of defective design or manufacturing. That said, AI may have many developers, and the training of models may depend on variations in initial configurations and on numerous data sets; it may therefore be difficult to determine which contribution led to the alleged design defect.

Regarding appropriate warnings about the dangers of a device, the "learned intermediary doctrine" generally protects the manufacturer, provided it advises the healthcare practitioner of the risks associated with the product. The healthcare practitioner is responsible for weighing those risks in their professional judgment and advising the patient accordingly. This doctrine serves as a defense for the manufacturer in failure-to-warn cases. Where there is no "learned intermediary" in the care context, the manufacturer may be directly liable. Furthermore, it may be difficult to fully and effectively advise a physician of all of the inputs, algorithms and other elements inside the opaque "black box" of the AI. Similar questions regarding various torts (e.g., medical malpractice, vicarious liability) and informed consent may apply to AI.

There are, of course, exceptions to the national scope of most of these legal regimes. For example, the EU’s Product Liability Directive has achieved relatively harmonized liability rules. However, the Directive, as currently drafted, is not easily applicable to AI systems and is not an ideal legal solution.

There have been numerous discussions regarding the need for a specific liability regime for AI systems at a European level. In October 2020, the European Parliament adopted a resolution with recommendations to the European Commission for a proposed regulation on a civil liability regime for artificial intelligence.

The steps you can take

Given the importance and complexity of liability in the context of artificial intelligence, companies should closely monitor the upcoming debates and legislative proposals.

From a liability perspective, we also recommend adopting the following measures to be properly prepared:

1. Learn your role in the AI system

Since AI systems have complex value chains, the role of the different companies involved can vary significantly, and so can their liability. For example, a company operating an AI system and providing the data it needs to operate would be more likely to face liability than a telecom operator providing connectivity. Are you the user of the AI system? What degree of control do you have over it? Are third parties exposed because of your role in the chain? Answering these questions will give you an idea of your company's potential liability when using or implementing AI systems, and will also help define the roles of, and relationships with, the different parties involved in an AI system.

2. Determine whether the AI system is “high-risk” or not

The potential liability increases drastically for “high-risk” AI systems. Determining this in advance will help you assess the level of risk and liability and determine whether it is acceptable according to your company’s risk appetite.

3. Have insurance in place

While future regulation may provide more clarity on the liability of AI systems, there will always be uncertainty. Insurance can help you mitigate the risk.

4. Focus on prevention and transparency

The best way to manage liability is to implement preventive measures to ensure compliance and transparency. For AI systems, this means implementing policies, documenting the decisions made, and providing as much transparency as possible about the functioning of the AI system.
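By way of illustration only, the sketch below shows one way "documenting the decisions made" might be put into practice: a minimal, append-only audit log that records when an AI system produced an output, which model version produced it, a hash of the input it received, and which person reviewed the result. Everything in it (the DecisionRecord fields, the log_decision helper, the file name and the example values) is a hypothetical assumption rather than a requirement drawn from any legislative text; a real implementation would need to reflect the record-keeping and transparency obligations that ultimately apply to your system.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One auditable entry describing a single AI-assisted decision."""
    timestamp: str      # when the decision was made (UTC, ISO 8601)
    model_version: str  # which model or configuration produced the output
    input_digest: str   # hash of the input data (avoids storing raw personal data)
    output: str         # the recommendation or decision produced
    reviewed_by: str    # the human in the loop who accepted or overrode it


def log_decision(model_version: str, raw_input: str, output: str,
                 reviewed_by: str, logfile: str = "ai_decision_log.jsonl") -> DecisionRecord:
    """Append a decision record to an append-only JSON Lines audit log."""
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_digest=hashlib.sha256(raw_input.encode("utf-8")).hexdigest(),
        output=output,
        reviewed_by=reviewed_by,
    )
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record


if __name__ == "__main__":
    # Hypothetical example: logging a triage recommendation accepted by a clinician.
    log_decision(
        model_version="triage-model-2.3.1",
        raw_input="patient vitals and symptoms payload",
        output="recommend specialist referral",
        reviewed_by="dr.jane.doe",
    )
```

Even a simple record of this kind can help with the traceability problem described above: it ties a given output to a model version, an input and a responsible reviewer, which is precisely the link a claimant, an insurer or a court may otherwise struggle to reconstruct.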

***

This article is a chapter from Dentons’ Artificial Intelligence Guide 2022. Click here to access other chapters or download the full guide.

***

We would like to thank Ignacio Vela for his contribution to this article.


About Nieves Briz

Nieves Briz is the Managing Partner of Dentons' Barcelona office and Head of the Corporate and M&A practice team in Barcelona. Nieves has more than 30 years of experience advising some of the most prominent national and international corporations on corporate matters, and has represented clients in over 100 transactions across sectors such as TMT, financial services, property, pharmaceuticals, baby food, retail, aviation and energy.



About Allison Bender

Allison Bender is a Partner in Dentons' Washington office. Allison's practice focuses on risks associated with data and technology. She advises companies and boards on cybersecurity, data governance, privacy compliance, product counseling, cyber preparedness, incident response, crisis management, and public policy. Having spent nearly a decade at the US Department of Homeland Security before entering private practice, Allison brings clients insights on data, technology, and regulation from a cybersecurity and national security perspective, including on cutting-edge issues such as artificial intelligence, big data, blockchain and cryptocurrency, and biometrics. She leads the privacy and cybersecurity team for Dentons' Venture Technology and Emerging Growth Companies practice.


