Brought to you by Dentons

Business Going Digital

Helping companies in the digital transformation of their business.

Key challenges of artificial intelligence: AI ethics and governance

By Andy Lucas
December 2021

Going back to the well-known driverless car dilemma, let’s take it a step further. Picture a driverless car, equipped with the latest artificial intelligence, travelling down the road at considerable speed when a pedestrian absent-mindedly steps into its path. The car calculates that it can avoid the pedestrian and spare their life, but only by swerving onto the pavement and into another pedestrian.

The car is confronted with a terrible choice with severe ethical implications. Firstly, what decision would we want the car to make? How could we configure the car to value one life above another? And, more importantly, should we? If it makes an “incorrect” choice, who is responsible? What if the car inadvertently learns to prioritize one race or gender over another when making that choice? How could the AI’s complex decision-making processes be explained to a bewildered public?

Now to the realm of business. Granted, the driverless car scenario is an extreme example, and most businesses’ use of AI will never involve such life-and-death decisions, but the root causes of these ethical issues are the same. There is no inherent guarantee that AI will be a force for good. In a world of increasing AI prevalence and heightened expectations that businesses do the right thing, companies must consider their use of AI, how that use reflects on their ethics as a business, and how it affects society at large. Waiting for government regulation to force the issue simply will not do.


AI and Ethics: The search for “algorethics”

Paolo Benanti, Professor of Theology, Pontifical Gregorian University

“The ethical debate on AI must take into account all the factors that can direct technological innovation towards the common good: we need a system of “algorethics”.
First of all, it is important to create institutions to provide governance of artificial intelligence technologies. Only by creating institutions, with a clear mandate to carry out ethical dialogue and regulate technology, can we really conduct an objective search for what is good. Furthermore, such institutions require real political power to manage and regulate artificial intelligence technologies; otherwise, their proposals will lack any real effectiveness.
At the most basic level, the ethical reflection must consider the person as the subject of interaction with AI. We need to address concerns around the unpredictability of some technologies, humans’ ability to control the technological phenomenon, and the effects that these technologies can have on the individual.
A second area of ethical reflection is around the social relationships created between the AI systems, their users, and other members of society. It is important to strike the right balance between the right of individuals to pursue their own happiness, and the right to equality and freedom from discrimination.
The governance of technology needs to go hand in hand with development. Moral considerations cannot be relegated to the margins, or imposed as an afterthought via corrective measures, but need to be an integral part of the development of the AI itself—balancing business objectives with a focus on people.
By the very nature of technological innovation, that governance will require dialogue among people with knowledge of the empirical sciences, philosophy, moral-theological analyses and other forms of human knowledge. These stakeholders will need to interact in a constructive and coordinated way. Regulators, the academic world, and technology companies should work together on the implementation of AI governance that is effective, while allowing us to take full advantage of the opportunities offered by this exciting technology.”

What you need to know—ethical issues

There are, unfortunately for businesses, a whole host of potential pitfalls inherent in corporate use of AI that could land a company in ethical jeopardy. An obvious hazard is AI’s lack of an in-built human values system. It’s not AI’s fault—it is just a machine after all. But how do we ensure that the output of an AI reliably aligns with our company values?

We have already seen high-profile cases of AI unintentionally producing discriminatory results. For instance, a job-application screening system at a large corporation was found to favor male applicants because it had been trained on historical hiring data that reflected past bias. And, as discussed in previous chapters, liability and accountability for the decisions of AI systems remain murky, which does little to inspire trust in AI.

The potential for discrimination within AI systems was one of the concerns expressed in Dentons’ AI survey, with 57% of respondents highlighting this as a challenge.

As mentioned, serious privacy rights are at stake here too, given the vast tranches of data used to train AI. What if a business has the right to hold certain personal data, but that right does not extend to using the data to train an AI? Some of the world’s biggest companies have been caught misappropriating data, and the danger is present for any business deploying AI.

Finally, there are wider societal issues to consider: what damage will full-scale adoption of role-replacing AI do to the workforce? And climate-wise, how does a purportedly “green” business reconcile its environmental credentials with the huge amounts of energy used to train AI models?

What you need to do—practical measures for businesses

Navigating the issues raised by such complex systems will, unsurprisingly, require equally considered business responses. We recommend adopting the following governance measures, both to mitigate the damage AI might cause and to harness its benefits so that it contributes positively to your industry.

1. Set up an AI taskforce and governance board

We recommend establishing a governance board that is responsible for overseeing your AI strategy as well as defining your ethical framework for the use of AI.

2. Establish what AI ethics mean for your business

We recommend establishing high-level principles governing your company’s approach to AI ethics. These principles can be used as a basis upon which to ensure that all interactions with AI are aligned to your values. They must be specific to your business and industry, and relevant to the technology you use. These principles should be institutionalized in every level of the business, through leadership communications, conduct guidelines, training courses and reward systems. Employees should be empowered to raise any ethical concerns they encounter.

3. Conduct an AI ethics risk assessment and create an AI governance plan

We recommend conducting a comprehensive AI ethics risk assessment to identify where dangers are most likely to arise and pre-empt ethical issues. This may mean implementing changes into your governance processes. Introducing human oversight can vastly clarify the issue of accountability for AI decisions. These efforts should culminate in the creation of an AI governance plan covering short and medium-term goals and guidance as well as the long-term direction with respect to AI. The plan should be updated frequently, with an appropriately qualified person holding ownership over the plan.

4. Monitor the impact of AI

Even with the best governance processes in the world, an AI ethical issue may still arise. It is important to assess potential risks before implementing an AI system, and then monitor the impact of the system once it is in place to identify issues and mitigate damage as quickly as possible.

One important impact to consider is the potential for job displacement. HR should be engaged at an early stage to anticipate any change in job roles and to assist in adapting and/or retraining the workforce accordingly.
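To make the monitoring idea concrete, the sketch below flags one simple form of behavioral drift: the rate of positive decisions in a recent window moving away from an established baseline. The function names, the baseline figure and the threshold are illustrative assumptions, not part of any prescribed methodology; real monitoring would track many more signals.

```python
def decision_rate(decisions):
    """Share of positive (1) decisions in a batch of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def drift_alert(baseline_rate, recent_decisions, threshold=0.10):
    """Return True if the recent positive-decision rate has moved more
    than `threshold` away from the baseline, signalling that the
    system's behaviour should be escalated for human review."""
    return abs(decision_rate(recent_decisions) - baseline_rate) > threshold

# Example: baseline approval rate of 50%, but 4 of the 5 most recent
# decisions were approvals (80%), a 30-point swing above the threshold.
print(drift_alert(0.50, [1, 1, 1, 1, 0]))  # True -> escalate for review
```

The point of such a check is not precision but early warning: a cheap, automated tripwire that routes unusual behavior to the accountable human owner identified in the governance plan.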

5. Enhance data sets and data privacy

Ensure whatever AI you use is trained on the most complete, current and representative data sets—this will give it the greatest chance of performing well, and reduce the potential for algorithmic bias and discrimination. Various online tools can be used to detect and mitigate such bias. Furthermore, ensure data protection and privacy procedures are robust and transparent—customers should be able to easily find out how their data is being used, especially when it comes to AI.
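One widely used screening check of this kind is the “four-fifths” rule from US employment-selection guidance: if the selection rate for one group falls below 80% of the rate for the most-favored group, the outcome warrants scrutiny. The sketch below is a minimal, hypothetical illustration of that ratio, not a substitute for the specialist bias-detection tools mentioned above.

```python
def selection_rate(outcomes):
    """Share of favourable (1) outcomes for one group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 fail the conventional four-fifths test."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Group A: 6 of 10 applicants selected (60%); group B: 3 of 10 (30%).
ratio = disparate_impact_ratio([1] * 6 + [0] * 4, [1] * 3 + [0] * 7)
print(ratio)        # 0.5
print(ratio < 0.8)  # True -> flag the outcome for human review
```

A failing ratio does not itself prove discrimination, but it is exactly the kind of measurable trigger an AI governance plan can attach to human review.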

6. Adopt an ethical approach to AI providers

Lastly, adopt an ethical approach to third-party AI providers, by carrying out due diligence on future business partners and asking suppliers and partners about their use of AI to check whether they maintain similar ethical standards to your own.

***

This article is a chapter from Dentons’ Artificial Intelligence Guide 2022. Click here to access other chapters or download the full guide.


About Andy Lucas

Andy Lucas is a Partner and Head of Dentons' Technology, Media and Telecoms Department in the UK.
