Artificial intelligence, or AI, has already become a part of everyday life: from text and speech recognition, to targeted online advertising, to customer service “bots”. It has been predicted that by 2020, 20% of all global business content would be authored by machines and that AI bots would power 85% of all customer service interactions.[1] This makes it an important area to consider when engaging with the digital consumer.
So what is artificial intelligence?
Artificial intelligence refers to systems exhibiting behaviors usually associated with human beings rather than machines. We think of machines as only able to perform certain restricted tasks, as they were originally designed or programmed to do. AI, on the other hand, can learn from the information it receives (such as environmental data collected through embedded sensors in the Internet of Things (IoT)) and respond differently when given new data.
In headline terms, you could say that AI exhibits “intelligence”, meaning that it has the ability to accomplish complex goals.[2] The global technology research and advisory company Gartner defines Artificial Intelligence as “…technology that appears to emulate human performance typically by learning, coming to its own conclusions, appearing to understand complex content, engaging in natural dialogs with people, enhancing human cognitive performance (also known as cognitive computing) or replacing people on execution of nonroutine tasks.”[3]
Machine learning
Machine learning is a popular type of AI. It involves the development of algorithms which are fed with input data. In “supervised” learning, the data is labelled and the algorithms are effectively taught what the correct answer is. After a period of training, the algorithm can then be deployed to make predictions based on the model it has learned. In “unsupervised” learning, the algorithms are not fed with labelled data and are left to their own devices to find patterns and build their own model of how they will process any new data received.
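To make the distinction concrete, the short sketch below contrasts the two approaches using Python and the scikit-learn library. The toy dataset, feature values and model choices are illustrative assumptions only, not taken from any system discussed in this chapter.

```python
# Minimal sketch contrasting supervised and unsupervised learning.
# The toy data and model choices are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised learning: the training data carries labels (the "correct answer").
X_train = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])  # input features
y_train = np.array([0, 0, 1, 1])                                      # known labels
classifier = LogisticRegression()
classifier.fit(X_train, y_train)             # the "teaching" phase
print(classifier.predict([[0.15, 0.85]]))    # prediction on new, unseen data

# Unsupervised learning: no labels; the algorithm finds structure on its own.
X_unlabelled = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
clusterer = KMeans(n_clusters=2, random_state=0)
clusterer.fit(X_unlabelled)                  # groups the data into clusters itself
print(clusterer.predict([[0.15, 0.85]]))     # assigns new data to a learned cluster
```

In the supervised case the labels supply the correct answer during training; in the unsupervised case the clustering algorithm has to discover the grouping for itself and then applies that learned structure to new data.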
The European Commission noted recently[7] that AI can play a significant part in the digitalization of traditional analog businesses, and it has promoted, and contributed significant funding to, a number of projects, including:
- Manufacturing: AI can predict maintenance needs and breakdowns in smart factories. SERENA uses AI techniques to predict the maintenance needs of industrial equipment.
- Agriculture: AI can help achieve better productivity and minimize the use of expensive fertilizers, pesticides and irrigation, whilst reducing environmental impact. MARS is a mobile robot which plants seeds, allowing workers to monitor the process remotely.
- Transport: AI can minimize wheel friction of a train against the track whilst maximizing speed and can enable autonomous driving. Transforming Transport is an initiative which will involve smart motorways and proactive rails, amongst other efficiencies.
Solving problems, driving efficiencies
AI has enabled companies to make better decisions than would have been possible using human judgment alone, or to make the best use of limited available data. Some examples where we are seeing these improvements in practice are:
- Customer service: Amazon has deployed neural networks to generate personalized product recommendations for customers, bridging the gap between its huge product catalog and the sparse dataset available for each individual customer, which results from the small number of products any individual typically purchases.[4]
- Medical diagnostics: A deep convolutional neural network, or CNN – an AI system based on neural networks – is capable of classifying skin cancer with a level of competence allegedly comparable to that of dermatologists. It can be run on a smartphone, therefore potentially providing universal access to low-cost diagnostic advice.[5]
- Cyber security: AI2, developed at MIT’s Computer Science and Artificial Intelligence Laboratory, scans and reviews tens of millions of log lines each day and pinpoints anything suspicious to be escalated to a human being. AI2 successfully identifies 86% of attacks whilst sparing analysts the time and effort of following up on false alarms.[6]
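By way of illustration only – this is not the AI2 system itself – the sketch below shows the general pattern of machine-assisted log review described in the cyber security example: an off-the-shelf anomaly detector flags unusual entries so that only those are escalated to a human analyst. The synthetic “log features” and the parameter values are assumptions made for the example.

```python
# Illustrative sketch of machine-assisted log review: an anomaly detector flags
# suspicious entries for a human analyst to follow up. Not the actual AI2 system;
# the synthetic "log features" and parameters are assumptions for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend each log line has been reduced to two numeric features,
# e.g. requests per minute and failed-login count for the source IP.
normal_traffic = rng.normal(loc=[50, 1], scale=[5, 1], size=(1000, 2))
suspicious = rng.normal(loc=[300, 40], scale=[20, 5], size=(5, 2))
log_features = np.vstack([normal_traffic, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(log_features)

# A value of -1 marks an outlier; only those rows would be escalated for review.
flags = detector.predict(log_features)
print(f"{(flags == -1).sum()} of {len(flags)} log entries flagged for human review")
```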
Big data and privacy
The European Political Strategy Centre has identified three ingredients that have led to the rapid advancement of AI in recent years – stronger computational power, more sophisticated algorithms and higher availability of vast amounts of data.[8]
Data is the fuel upon which AI runs, so it is crucial to ensure that the datasets available to AI systems are of high quality and are compliant with applicable privacy laws. Practical applications of AI that engage privacy laws include psychometric testing services, employee recruitment and monitoring technologies, and customer insights for retailers.
AI and the GDPR
As noted here, the GDPR governs the use of personal data. The use of AI, particularly when deployed on large, rapidly updating datasets which comprise different data sources – often referred to as Big Data – can create a number of challenges where these datasets contain personal data.[9]
LAWFUL PROCESSING UNDER GDPR
One of these challenges is ensuring that processing is fair, lawful and transparent. Firstly, this means that its effects must be explained to the data subjects whose data is being processed by AI, particularly where the use of the AI involves automated decision making, including profiling. This requirement can be met in part by ensuring there is an appropriate privacy notice. However, fairness and transparency are broader than this: any processing should be within the data subjects’ reasonable expectations, based on what they have already been told.
Secondly, there must also be an appropriate lawful basis for carrying out processing by AI. Where relying on consent, the consent has to be freely given, specific, informed and unambiguous. This may prove a challenge where the use of the data by the AI is unclear, or if the data subject is insufficiently informed about the consequences of the processing. All of this goes back to the importance of taking innovative and creative approaches to producing an informative and well-drafted privacy notice and continuing to inform data subjects in an effective way.
DATA CONTROLLERS AND AI
Data controllers are under an obligation to comply with the Accountability Principle, meaning that they not only have to comply with the GDPR, but must be able to show how they are complying. They must ensure, for example, that privacy is built into systems by design and by default, that appropriate security measures have been implemented and that contractual relationships with any third parties processing data on their behalf (e.g. vendors providing AI technologies) contain the mandatory clauses required by the GDPR. Carrying out a Data Protection Impact Assessment before beginning to use AI is an important step in helping to identify risks and implement safeguards.
Where the data is fully anonymized, it is no longer personal data and the GDPR therefore does not apply. However, the act of anonymizing the data before storing it in a data warehouse for use in training an algorithm can be a form of processing in itself and, in this case, the GDPR does apply. Companies will need to pay close attention to advances in technology around re-identification of datasets to ensure that any anonymization applied remains effective.
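Purely as an illustration of why such preparatory steps are themselves processing, the hypothetical sketch below de-identifies customer records before they are stored for training. The field names are invented, and salted hashing of this kind will often amount only to pseudonymization (which remains personal data) rather than true anonymization, a point that would need careful assessment in each case.

```python
# Hypothetical sketch: de-identifying customer records before they are warehoused
# for use in training a model. Field names are invented. Salted hashing like this
# is generally pseudonymization rather than full anonymization, and this
# preparatory step is itself "processing" under the GDPR.
import hashlib
import secrets

SALT = secrets.token_hex(16)  # in practice, stored separately and rotated

def de_identify(record: dict) -> dict:
    """Replace the direct identifier with a salted hash and coarsen other fields."""
    token = hashlib.sha256((SALT + record["email"]).encode()).hexdigest()
    return {
        "customer_token": token,               # stable token replaces the identifier
        "age_band": record["age"] // 10 * 10,  # coarsen a quasi-identifier
        "purchases": record["purchases"],      # keep only fields needed for training
    }

raw = {"email": "jane@example.com", "age": 34, "purchases": 7, "notes": "call back"}
print(de_identify(raw))  # the free-text "notes" field is dropped entirely
```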
AI and other data laws
The use of data in AI technologies will also be regulated by further legislation under the EU’s Digital Single Market strategy, such as the forthcoming e-Privacy Regulation and the Regulation on the free flow of non-personal data, which is aimed at removing obstacles to the free movement of non-personal data and came into force on 28 May 2019.
Regulating robots and artificial intelligence
The spread of robots and artificial intelligence creates numerous challenges for society and for the legal system. As robots begin to permeate all areas of life, we need to anticipate potential challenges and develop an approach to how the human-robot relationship will be regulated by law. Numerous proposals from around the world have sought to respond to this need, including the 23 Asilomar AI Principles, the European Charter on Robotics, and many others.
In 2016, Dentons was commissioned by Grishin Robotics to develop the concept of the first draft law on robotics in Russia. When developing the concept, the team not only looked to legal precedent, but also sought inspiration from science fiction, including Isaac Asimov’s Laws of Robotics. The resulting document sparked debate about the regulation of this emerging field of technology.
The following year, the Dentons team decided to build on this experience and think bigger, by looking at the issue from a global perspective. The result was a draft for the first international convention on robotics, comprising 42 rules regulating people’s relationships in connection with the active development of cyber-physical systems.
The convention consists of a single set of rules uniting all of the currently existing approaches to regulating AI and robotics, including the “black box” and the “red button” for robots, problems of security and confidentiality, and the identification of robots. The rules also include new proposals to identify a category of higher-risk robots. Finally, the convention considers the regulation of AI and military robots, as well as issues of international cooperation in developing robotics in different countries.
The convention has been debated in business, academic and political circles, published in academic journals, and sent as a proposal to the United Nations.
Furthermore, in 2017 Dentons lawyers helped establish the Robopravo research center dedicated to the regulation of AI and robots. The center has developed several draft laws and publishes a journal on the regulation of new technologies.
Liability and safety
Liability can arise for AI-enabled products – for example, in the Internet of Things (IoT), AI-enabled robots or autonomous vehicles and systems. A practical example may be a smart home environment or a self-driving car.
Liability could include:[10]
- Contractual – that is, liability based on the contract between a consumer and a retailer.
- Strict liability – for example, at EU level, based on the Product Liability Directive as implemented in individual Member States, which applies a strict liability regime to “producers” where a defective product causes damage to victims, such as personal injury, death or damage to property.
- Fault-based liability – for example under local law, such as the law of negligence in the UK.
The “EU safety framework” comprises a number of Directives such as the Machinery Directive, the Radio Equipment Directive, the Product Liability Directive and more detailed rules, for example around medical devices and toys. These should be considered in the context of any AI-enabled products which are developed or placed on the market. The Commission is currently assessing the EU safety framework in light of technological developments to determine whether further regulation will be needed.[11]
Intellectual property
In this article, we examined the challenges of enforcing IP rights in a digital environment. Using AI to create works can also have implications for intellectual property rights, such as patentability, copyright and rights ownership.[12] One issue which may arise is where AI has been used to create or enrich databases which the company wishes to protect, or where the AI has been trained using such databases.
The future of AI regulation
The law on AI is still in a state of flux. Whilst regulatory frameworks exist around product safety and the protection of personal data, the development of AI systems themselves and the content of algorithms are not, at present, regulated.
However, in February 2020 the European Commission released its White Paper on Artificial Intelligence – A European approach to excellence and trust, exploring the possibility of a future regulatory framework. In addition, ethical frameworks have already been advanced: for example, the High-Level Expert Group (HLEG) on Artificial Intelligence, set up by the European Commission, published its Ethics Guidelines for Trustworthy AI in April 2019.
The next evolution of digital transformation
The use of AI and AI-enabled technologies presents unrivalled opportunities for businesses exploring the digital world to leverage their troves of data and create efficiencies internally (e.g. in employee engagement or maintenance of equipment). It can also provide a new edge in interfacing with digital consumers, whether by making better judgment calls in advertising or by offering a new and better AI-enabled product. Whichever way companies engage with AI, it is bound to transform the business world in ways we cannot yet anticipate.
- Gartner, “Top 10 Strategic Predictions for 2017 and Beyond: The Storm Winds of Digital Disruption”, October 2016.
- Max Tegmark, Life 3.0 – Being Human in the Age of Artificial Intelligence, Allen Lane, 2017.
- Gartner IT Glossary, https://www.gartner.com/it-glossary/artificial-intelligence/
- https://aws.amazon.com/blogs/big-data/generating-recommendations-at-amazon-scale-with-apache-spark-and-amazon-dsstne/
- https://www.nature.com/articles/nature21056
- https://www.wired.com/2016/04/mits-teaching-ai-help-analysts-stop-cyberattacks/
- Digital Single Market: Artificial Intelligence for Europe, European Commission, 24 April 2018.
- The Age of Artificial Intelligence: Towards a European Strategy for Human-Centric Machines, European Political Strategy Centre, 27 March 2018.
- These issues are explored in more detail in the UK Information Commissioner’s Office Paper on Big Data, Artificial Intelligence, Machine Learning and Data Protection, 2017.
- Categories are based on the Commission Staff Working Document – Liability for Emerging Technologies (SWD (2018) 137), 25 April 2018.
- Communication from the Commission to the European Parliament, The European Council, The Council, The European Economic and Social Committee and the Committee of the Regions – Artificial Intelligence for Europe (COM (2018) 237), 25 April 2018.
- Ibid, Footnote 52.