Business Going Digital

Helping companies in the digital transformation of their business.
Key challenges of artificial intelligence: Contracting for the purchase and use of AI

By Domien Kriger and Giangiacomo Olivi
December 2021

AI continues to assume greater importance in contracting and contract review. There are already products on the market that use AI-driven technologies to review contract terms. Such tools can review large numbers of contracts in a fraction of the time it would take lawyers to review each one manually.

Some legal commentators even foresee a time in the not-too-distant future when AI systems could enter into contracts themselves and would accordingly be granted an autonomous legal personality.

While the debate over AI’s potential role in concluding contracts remains largely theoretical, new contract standards are being shaped in order to address and accommodate AI-based products and services and the challenges and benefits they bring.

Here we explore a few issues to consider when contracting for the purchase or use of AI technologies.

  • Choice of law: In general, the first point to consider when drafting a contract is which law to apply. This is particularly relevant if we consider the general lack of local laws focusing on AI products and services. A poorly considered choice of law may lead to the risk of adding (unknown) implied terms into the contract. For instance, in certain civil law countries an agreement for the supply of AI-based services may trigger the application of a mix of codified rules relating to sale and purchase, lease and services agreements, in addition to other implied provisions relating to IP rights, warranties and liability, among others. Choosing a familiar applicable law (and one where the ability to litigate and recover effectively is known) is therefore fundamental. It is also important to take into account which national (and supranational) laws and strategies may be enacted in the near future, so it is advisable to monitor draft legislation in the market(s) where your company operates.
  • Audits and automated controls: AI should be transparent, understandable, explainable and, we would add, manageable throughout the life of the contract. It is therefore fundamental to devise specific protocols and criteria for audits and controls, including, where appropriate, procedures for resorting to third-party technology consultants and forensic experts to ensure adherence to certain fundamental ethical and legal principles (and requirements, where available). This requires access to underlying decision logic and system logs, as well as the ability to question relevant subject-matter experts. In certain cases, one might consider having AI systems control other AI systems (with human supervision); for instance, certain “black boxes” could allow continuous and automated data reviews, well beyond the traditional root-cause and behavioral model analyses. Audits and controls should also factor in the increasing scrutiny that will likely be exercised by local authorities, including for instance the AI national supervisory authorities to be set up under the Draft AI Act.
  • Representations and warranties: Representations and warranties should be AI-specific and should address the potential business impact as well as broader risks associated with the use of AI technology. These may include, for example, a warranty under which the seller warrants that the AI technology will not infringe a patent claim, or, if the AI technology is embedded in tools provided to employees or customers, that there are no risks for human beings exposed to the AI technologies, that the logic is not discriminatory, etc. A thorough risk assessment, carried out before entering into the contract, could certainly help address potential risks and allow further tailoring of the required representations and warranties.
  • Indemnification, limitation of liability and insurance: These considerations apply even more strongly to liability and indemnification issues. The parties should be aware of the current lack of specific AI liability frameworks, and should fill the gap by applying basic legal principles of liability and clearly allocating responsibility between the parties. Specific limitations of liability could be considered for different use cases. For example, risk and liability related to the malfunction of an AI-powered production line may well differ from the liability resulting from incorrect data feeding or an AI technology accidentally disclosing users’ personal information. That said, it is a common understanding that AI-based services are, in principle, more reliable than human-based services, but AI failures, when they occur, are more likely to range from very significant to catastrophic (e.g. resulting in a total service outage). The parties to an AI product or services contract will therefore have to be more creative and take additional time when drafting the contract in order to think through such potential events and properly address the consequences in advance, also considering ad hoc insurance coverage to hedge potentially very significant damages.
  • Service-level agreements (SLAs): AI is fundamentally about improving performance. Traditional SLAs should therefore be devised to address changes over time, covering not only the traditional service reliability, availability and maintainability, but also the further advantages that can be sought. For instance, in addition to the provisions designed to address AI failures, SLAs could also qualify or quantify the expected results of the use of AI technology, e.g. from increased revenues up to even improved well-being.
  • Knowledge transfer: It used to be an accepted practice in the technology sector that some of the employees involved in providing an outsourced service would transfer to the supplier when starting the service and then transfer back to the employer-customer at the end of the term. This supported a smooth transfer of relevant process and service know-how. Within an AI services scenario, there may well be no employees to transfer, and the software and related improvements may stay with the AI services supplier. The contract should provide for creative ways to address this gap and, more widely, for how to share know-how and improvements.
  • IP, data management and training: Besides prevalent personal data protection issues (including security, transparency, fair processing, data transfers, etc.), the contract should properly address input data and output data, including who will own the data, improvements and related IP rights. Data sets may also be regarded as an “essential facility,” thus triggering antitrust implications. There was a notorious chatbot incident where, in less than 24 hours, people interacting with an AI-powered chatbot taught it to respond in a variety of culturally prejudicial and offensive ways. The parties should always consider how to avoid inputting unethical and/or poor-quality data or parameters into AI. Clauses should be devised to properly address the training of AI systems, to prevent “bad data pollution.”
  • Dual-use: Dual-use regulations are particularly complex within the EU and worldwide, and may also be subject to sudden changes. Considering that most AI products/services may be used for both civil and military purposes, AI contracts should take into account all dual-use implications and properly address further regulatory developments.
  • Ethics and reputation: AI can be a risky business, which often leads to an “emotional” response by the stakeholders involved. Therefore, you should carefully scrutinize the business partners and suppliers you collaborate with. For example, even if the contracted services are perfectly legitimate, the mere fact that an AI supplier may have had financial backing from a state military agency in the past may cause significant public concern. In addition to broad pre-contractual due diligence, specific provisions should properly address potential reputational damage through an ethics-by-design approach, with adequate contractual measures to prevent certain behaviors (e.g. review of the supplier’s “AI principles governance,” requiring specific undertakings to comply with certain ethical principles, etc.).
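To make the audits-and-automated-controls point above more concrete, the sketch below shows one simple form such an automated control could take: a check that scans an AI system's prediction log and flags windows where the output distribution drifts beyond an agreed tolerance, for escalation to a human reviewer. The record format, baseline rate and tolerance are purely illustrative assumptions, not drawn from any specific contract or product.

```python
def flag_drift(predictions, baseline_positive_rate, tolerance, window=100):
    """Return the indices of log windows whose positive-prediction rate
    deviates from the agreed baseline by more than `tolerance`.

    `predictions` is a chronological log of binary outputs (1 = positive).
    Flagged windows would be escalated for human review, rather than
    acted on automatically.
    """
    flagged = []
    for start in range(0, len(predictions), window):
        chunk = predictions[start:start + window]
        if not chunk:
            continue
        rate = sum(chunk) / len(chunk)  # observed positive rate in this window
        if abs(rate - baseline_positive_rate) > tolerance:
            flagged.append(start // window)
    return flagged

# Example: the first window of 100 matches the 10% baseline; the second
# drifts to a 50% positive rate and is flagged.
log = [1] * 10 + [0] * 90 + [1] * 50 + [0] * 50
print(flag_drift(log, baseline_positive_rate=0.10, tolerance=0.15))  # [1]
```

A contractual audit clause could reference metrics and thresholds of this kind directly, so that "continuous and automated data reviews" have an agreed, verifiable meaning between the parties.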

***

This article is a chapter from Dentons’ Artificial Intelligence Guide 2022.

About Domien Kriger

Domien is a junior associate in Dentons’ Brussels office and a member of the Brussels Bar.

About Giangiacomo Olivi

Giangiacomo Olivi is a partner in Dentons’ Milan office, Europe Co-head of the Data Privacy and Cybersecurity group and Europe Co-head of the Media sector group. He is a member of the global Intellectual Property and Technology practice.


© 2023 Dentons