AI continues to assume greater importance in contracting and contract review. There are already products on the market that use AI-driven technologies to review contract terms; such tools can review large numbers of contracts in a fraction of the time it would take lawyers to review each one manually.
Some legal commentators even foresee a time in the not-too-distant future when AI systems will be able to enter into contracts and will accordingly be granted autonomous legal personality.
While the debate over AI’s potential role in concluding contracts remains largely theoretical, new contract standards are being shaped in order to address and accommodate AI-based products and services and the challenges and benefits they bring.
Here we explore a few issues to consider when contracting for the purchase or use of AI technologies.
- Choice of law: In general, the first point to consider when drafting a contract is which law will apply. This is particularly relevant given the general lack of local laws focusing on AI products and services. A poorly considered choice of law risks importing (unknown) implied terms into the contract. For instance, in certain civil law countries an agreement for the supply of AI-based services may trigger the application of a mix of codified rules relating to sale and purchase, lease and services agreements, in addition to other implied provisions relating to IP rights, warranties and liability, among others. Choosing a familiar applicable law (and one under which the ability to litigate and recover effectively is known) is therefore fundamental. It is also important to take into account which national (and supranational) laws and strategies may be enacted in the near future, so it is advisable to monitor draft legislation in the market(s) where your company operates.
- Audits and automated controls: AI should be transparent, understandable, explainable and, we would add, manageable throughout the life of the contract. It is therefore fundamental to devise specific protocols and criteria for audits and controls, including, where appropriate, procedures for engaging third-party technology consultants and forensic experts to verify adherence to certain fundamental ethical and legal principles (and requirements, where available). This requires access to the underlying decision logic and system logs, as well as the ability to question the relevant subject-matter experts. In certain cases, one might consider having AI systems control other AI systems (with human supervision); for instance, certain “black boxes” could allow continuous and automated data reviews, well beyond traditional root-cause and behavioral model analyses. Audits and controls should also factor in the increasing scrutiny likely to be exercised by local authorities, including, for instance, the national AI supervisory authorities to be set up under the Draft AI Act.
- Representations and warranties: Representations and warranties should be AI-specific and should address the potential business impact as well as the broader risks associated with the use of AI technology. These may include, for example, a warranty that the AI technology will not infringe a patent claim or, if the AI technology is embedded in tools provided to employees or customers, that it poses no risk to the human beings exposed to it, that its logic is not discriminatory, and so on. A thorough risk assessment, carried out before entering into the contract, could certainly help identify potential risks and allow further tailoring of the required representations and warranties.
- Indemnification, limitation of liability and insurance: These considerations apply even more strongly to liability and indemnification issues. The parties should be aware of the current lack of specific AI liability frameworks and should fill the gap by applying basic legal principles of liability and clearly allocating responsibility between themselves. Specific limitations of liability could be considered for different use cases. For example, the risk and liability related to the malfunction of an AI-powered production line may well differ from the liability resulting from incorrect data feeding or from AI technology accidentally disclosing users’ personal information. That said, it is a common understanding that AI-based services are, in principle, more reliable than human-based services, but AI failures, when they occur, are more likely to be very significant to catastrophic (e.g. resulting in a total service outage). The parties to an AI product or services contract will therefore have to be more creative and take additional time when drafting the contract in order to think through such potential events and properly address the consequences in advance, including by considering ad hoc insurance coverage to hedge against potentially very significant damages.
- Service-level agreements (SLAs): AI is fundamentally about improving performance. Traditional SLAs should be adapted to address changes over time and to capture the advantages being sought, in addition to traditional service reliability, availability and maintainability. For instance, beyond the provisions designed to address AI failures, SLAs could also qualify or quantify the expected results of the use of AI technology, e.g. from increased revenues to even improved well-being.
- Knowledge transfer: It used to be accepted practice in the technology sector that some of the employees involved in providing an outsourced service would transfer to the supplier at the start of the service and then transfer back to the employer-customer at the end of the term. This supported a smooth transfer of relevant process and service know-how. In an AI services scenario, there may well be no employees to transfer, and the software and related improvements may stay with the AI services supplier. The contract should find creative ways to address this gap and, more widely, to govern how know-how and improvements are shared.
- IP, data management and training: Besides the prevalent personal data protection issues (including security, transparency, fair processing, data transfers, etc.), the contract should properly address input data and output data, including who will own the data, the improvements and the related IP rights. Data sets may also be regarded as an “essential facility,” thus triggering antitrust implications. In one notorious incident, in less than 24 hours, people interacting with an AI-powered chatbot taught it to respond in a variety of culturally prejudicial and offensive ways. The parties should always consider how to avoid feeding unethical and/or poor-quality data or parameters into the AI. Clauses should be devised to properly address the training of AI systems and to prevent “bad data pollution.”
- Dual-use: Dual-use regulations are particularly complex within the EU and worldwide, and may also be subject to sudden changes. Considering that most AI products/services may be used for both civil and military purposes, AI contracts should take into account all dual-use implications and properly address further regulatory developments.
- Ethics and reputation: AI can be a risky business, and it often triggers an “emotional” response from the stakeholders involved. You should therefore carefully scrutinize the business partners and suppliers you collaborate with. For example, even if the contracted services are perfectly legitimate, the mere fact that an AI supplier may have had financial backing from a state military agency in the past may cause significant public concern. In addition to broad pre-contractual due diligence, specific provisions should address potential reputational damage through an ethics-by-design approach, with adequate contractual measures to prevent certain behaviors (e.g. review of the supplier’s “AI principles governance,” specific undertakings to comply with certain ethical principles, etc.).
***