Dentons has conducted a survey of global business leaders on their organizations’ use of AI, as well as on the risks and opportunities presented by AI technologies. The results of the survey reveal that businesses around the world recognize the many benefits of AI. However, business leaders are beginning to ask serious questions about where the responsibility for good governance, regulation and compliance sits.
AI is unleashing value for companies
- 60% of companies reported that they already use, or are piloting, AI in their business.
- 12% of companies are early adopters and are already using AI extensively within their business.
- 48% are at the beginning of their AI journey and are running pilot schemes across a range of business areas, with the most popular being CRM (24%), administration (19%) and sales (18%).
- Businesses recognize the benefits of AI, the biggest of which include saving time by automating processes (94%), generating data-driven business information for decision making (90%) and reducing human error in processing (89%).
- Not surprisingly, big business is further along its implementation journey, with 70% of respondents using AI to some degree, compared to 50% of medium-sized businesses. However, medium-sized businesses are seeing the same benefits of AI and are equally optimistic about its role in their future.
Businesses need strategies to better protect against the risks of rapid AI growth
While 60% are using AI in some form, only a fifth have a strategy they are executing against, while another 23% are currently formulating their strategy. This lack of strategic focus can mean that AI is being implemented without due consideration of the risks, the relevant legislation or the internal controls required to ensure it is well governed.
- 55% of organizations have guidelines or policies for processing both personal and non-personal data.
Managing the risks of AI
- 83% of organizations identify cost as a challenge, making it currently the greatest barrier to the implementation of AI systems.
- 81% cited personal data protection as a significant concern. For example, the way in which data sources become interconnected or are interpreted by an AI can create data sets that breach data protection requirements.
- 81% acknowledge that human oversight is needed for AI systems, but 80% reported uncertainty over where legal liability sits for decisions and omissions made by an AI system.
- 68% believe the lack of trust in AI is a significant internal challenge they will need to overcome to make it a success.
- 57% expressed concerns about the potential for discrimination arising from the actions of an AI system. Such bias may be inherent in the programming, but it can also be introduced inadvertently into a system's algorithms through the source data used to create them.
Looking for guidance
- Businesses are looking to regulators to provide protection mechanisms on the use of AI. The most urgently needed areas of guidance are in relation to privacy (61%), consumer protection (52%), criminal liability (46%) and intellectual property (45%).
- There is a lack of awareness about the regulatory landscape. Depending on the area of law, between 55% and 75% of respondents were unaware of the relevant legislation or even whether such legislation exists in their jurisdiction.
- 63% are unaware of which public body is empowered to regulate AI in their country.
- Businesses are seeking guidance internally—looking to their Legal and Compliance (82%) and IT (75%) teams to lead the way in drafting internal AI policies.
- 58% of respondents believe that a company using AI, rather than the AI's inventor, should hold the rights to any intellectual property the AI develops.
- 49% of respondents expressed the opinion that there should be some form of joint liability for the acts and omissions of an AI system, shared between the entity that developed the AI, the AI itself and the person or entity applying the AI.
- 73% of respondents believe there should be both mandatory AI insurance coverage and AI assurance tools (e.g. software ensuring that decisions of an AI system are recorded, auditable and monitored) for organizations using AI systems.
For more details, please refer to Dentons’ Artificial Intelligence Guide 2022.