Now that we have reviewed both the opportunities and the legal and ethical risks related to the use of artificial intelligence, what comes next? To maximize the value that AI can bring to your organization, while avoiding the potential pitfalls, we strongly recommend developing a robust AI strategy for your business.
Our survey found that although 60% of respondents were using AI in some form, just under a fifth had such a strategy in place, and another 23% were formulating one. This lack of strategic focus can mean that AI is implemented without due consideration of the risks, the relevant legislation or the internal controls required to ensure it is well-governed.
Six steps to create your AI strategy
1. Assemble an AI governance board
We recommend setting up an AI governance board to define the strategic direction, ethical framework, goals and funding of your AI program. This body will be responsible for driving and monitoring the implementation of the AI strategy across the organization.
The board should have direct involvement from your company’s top management to ensure that it has sufficient decision-making and policy-making clout. Furthermore, given the wide-ranging potential impact of AI systems, it should include senior people with diverse expertise, drawn from IT, risk and compliance, HR, legal, data privacy, marketing/client experience, innovation and other areas.
Even with such a diverse team, your AI governance board will likely need professional advice from external consultants, lawyers or other experts on certain aspects of your strategy. It should also monitor strategies and commercial initiatives developing around the globe and draw best practice from them.
2. Define your AI objectives, use cases and ethical principles
Your AI strategy needs to start with your business priorities. Your AI governance board should decide what business benefits you want to achieve with AI and which processes or parts of the business you are looking to automate. Audit where the business has already adopted AI-driven processes and technologies. Depending on the nature of your business, your ability to invest, the availability of skills and your risk appetite, you may simply be looking for efficiency gains, or you may want to make AI a core part of your product or business model.
The most common approach, according to our survey, is to start small with a pilot project in one or a few parts of the business. This can be an effective strategy: it limits your investment and risk, and the lessons learned from the pilot can help shape a wider implementation.
When defining your objectives, it is also a good time to define your ethical principles for AI. Establishing these principles early in the process will enable you to take an ethics-by-design approach to your AI projects and will give you a benchmark against which to assess future AI use cases. Clearly defined ethical principles will also help you in discussions with your customers, suppliers, regulators and other stakeholders.
3. Perform an AI risk assessment
Once you know what you want your AI to do and where you want to use it, it is time to assess the potential risks to your company, your customers, and other stakeholders.
In some cases, this risk assessment may be mandatory. For example, both the GDPR and the Draft AI Act require risk assessments in certain circumstances and prescribe specific steps you must take to comply with your obligations.
Regardless of whether the risk assessment is mandatory, you should take a holistic view, assessing the full range of variables that could be affected by your use of AI—including data privacy, liability, health and safety, IT security, operational issues and business continuity, financial risk, reputational risk, intellectual property and client relationship management. Your risk assessment should also consider your ethical principles and whether the use of the AI system is aligned with them, or whether there are social, environmental or other ethical issues that need to be addressed. For instance, your AI system’s potential impact on vulnerable persons, taking into account their social and economic situation, should be carefully considered.
Your risk assessment should also investigate potential liability, and how such liability could be shared among your suppliers and other relevant stakeholders in your AI value chain.
As discussed throughout this report, there is a wide range of possible risks that might not be immediately apparent, and therefore it is advisable to seek professional assistance in order to carry out a comprehensive assessment.
4. Implement controls and compliance measures
Having completed your risk assessment, you can then start putting in place measures to prevent or mitigate those risks. Certain steps that we have already covered—such as assembling an AI governance board and establishing ethical principles related to AI—will be important elements of your control measures.
One of the first controls will likely be to maintain an inventory of AI systems within your organization. Our survey showed that 27% of respondents did not know where AI was used within their business—a lack of awareness that could lead to unforeseen risks down the line.
Each AI system should also be properly documented, providing as much information as possible about what the system does, how it works and what data it relies upon. Not only is this a requirement under the Draft AI Act for high-risk systems, it is also useful in disputes over potential liability or when making a case for intellectual property protection.
Your controls should also document which AI decisions require human oversight, explain the nature of that oversight and specify who is responsible for providing it.
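To make these controls concrete, the sketch below shows what a single entry in such an inventory might capture, combining the documentation and human-oversight points above. It is a minimal illustration in Python; the field names, values and the example system are hypothetical rather than prescribed by the Draft AI Act or any other framework.

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an organization-wide inventory of AI systems (illustrative only)."""
    name: str                       # internal identifier for the system
    business_owner: str             # who is accountable for the system
    purpose: str                    # what the system does, in plain language
    data_sources: list[str]         # data sets the system is trained on or consumes
    risk_level: str                 # outcome of your risk assessment, e.g. "low" or "high"
    requires_human_oversight: bool  # must a person review or approve its decisions?
    oversight_owner: str | None     # who provides that oversight, if required
    last_reviewed: date             # when this record was last checked for accuracy

# Hypothetical example: a CV-screening tool classed as high risk,
# with a named person in HR responsible for human oversight.
inventory = [
    AISystemRecord(
        name="cv-screening-v2",
        business_owner="Head of Recruitment",
        purpose="Ranks incoming job applications against role requirements",
        data_sources=["applicant CVs", "historical hiring outcomes"],
        risk_level="high",
        requires_human_oversight=True,
        oversight_owner="HR recruitment lead",
        last_reviewed=date(2022, 3, 1),
    ),
]
```

Even a simple, regularly reviewed record like this answers the questions an auditor, regulator or court is most likely to ask: what the system does, what data it uses and who is watching it.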
To help provide such oversight, there are numerous AI monitoring and assurance tools that you may want to consider implementing. These can help you monitor and predict AI decisions, and identify errors and anomalies that may affect the performance of your AI system.
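As a simple illustration of what such tooling does, the sketch below records every decision an AI system makes in an audit log and flags one basic anomaly: drift in the rate of positive decisions away from an expected baseline. Commercial assurance tools are far more sophisticated; the class, thresholds and example values here are assumptions made for the example.

```python
from collections import deque

class DecisionMonitor:
    """Logs AI decisions and flags drift in the positive-decision rate (illustrative)."""

    def __init__(self, baseline_rate: float, window: int = 500, tolerance: float = 0.10):
        self.baseline_rate = baseline_rate  # expected share of positive decisions
        self.tolerance = tolerance          # allowed deviation before raising an alert
        self.recent = deque(maxlen=window)  # rolling window of recent outcomes
        self.audit_log = []                 # append-only record for auditability

    def record(self, input_id: str, decision: bool, model_version: str) -> None:
        # Keep an auditable trail of every decision the system makes.
        self.audit_log.append(
            {"input": input_id, "decision": decision, "model": model_version}
        )
        self.recent.append(decision)

    def drift_detected(self) -> bool:
        # Compare the recent positive-decision rate against the baseline.
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough recent decisions to judge
        rate = sum(self.recent) / len(self.recent)
        return abs(rate - self.baseline_rate) > self.tolerance

monitor = DecisionMonitor(baseline_rate=0.30)
monitor.record("applicant-1042", decision=True, model_version="cv-screening-v2")
if monitor.drift_detected():
    print("Alert: decision rate has drifted - trigger human review")
```

An append-only log of this kind also supports the record-keeping that makes AI decisions auditable, a point picked up in the discussion of assurance tools below.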
Due to the lack of clarity around liability issues related to AI, as well as the potentially wide-ranging impact of an AI system failure or error, you may want to consider arranging appropriate insurance. Although such insurance is not currently mandatory, 73% of our survey respondents felt there should be not only mandatory insurance coverage but also mandatory assurance tools (e.g. software ensuring that the decisions of an AI system are recorded, auditable and monitored). At the very least, you should seriously consider insurance cover for any critical business processes that involve AI technology.
Given the lack of regulation around AI, appropriate risk allocation provisions in your contracts can provide a measure of protection where such safeguards are not yet enshrined in legislation.
Your AI strategy should also address how you protect your AI intellectual property—including the AI system and its programming, the data set, and potentially also creative works produced by the AI system itself. As mentioned earlier in this report, current intellectual property legislation is not fit for purpose in addressing the unique features of AI technology, and it will take some time before solutions are developed, debated and enacted into law. In the meantime, you will need to apply a clever and creative mix of the tools currently available—including patents, trade secret measures, database protection mechanisms and carefully worded contract terms—to protect your intellectual property to the fullest extent possible.
This is by no means a comprehensive list of the controls required, and the measures you apply will need to be adapted in order to address the specific issues outlined in your risk assessment. We strongly recommend seeking professional advice when establishing a control framework to ensure it covers all of your compliance obligations and key risks.
Avoiding pitfalls as you begin implementing AI
Shalini Kurapati, Founder and CEO, Clearbox AI
“Companies are naturally accelerating their pace of AI adoption since AI will be a major driver of economic prosperity in this decade.
Given the stakes, it’s tempting for companies to showcase AI projects as proof of the ambition or grandeur of their innovation efforts. Yet an estimated 50% of AI projects, and 85% of deep learning-based AI projects, don’t progress into commercial production. To avoid an unsuccessful outcome, companies need to take an unglamorous step back and get the core processes right from both technical and organizational perspectives. From a technical perspective, they have to take a long, hard look at their data availability, data quality, computing infrastructure and automated tools for data and model assessment and monitoring, and at how they record the lineage of model decisions through robust DataOps and MLOps pipelines.
From an organizational perspective, management needs to take a pragmatic approach to AI adoption, allowing iterative test-and-learn methods to incorporate feedback efficiently. They should also identify the possible risks associated with AI—including security, privacy, compliance, ethics and fairness—right from the beginning, and collaborate with the technology team to implement mitigation solutions such as regulatory sandboxes and the use of synthetic data.”
5. Manage and maintain your data
To reiterate: artificial intelligence relies on data to learn and function, so high-quality data is needed to ensure a well-functioning AI system. Furthermore, maintaining accurate and up-to-date data is among your compliance obligations under the GDPR and other global data protection rules.
For these reasons, it is essential to invest the time and resources needed to regularly review and maintain your data set, ensuring it remains accurate and up to date.
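As a minimal illustration of what a regular data review might automate, the check below scans a data set for missing values and records that have not been verified recently. The field names, the one-year freshness threshold and the sample records are assumptions for the example; your own quality criteria will follow from your compliance obligations and use cases.

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=365)  # assumed freshness requirement for this example

# Hypothetical records: each should be complete and recently verified.
records = [
    {"customer_id": "C001", "email": "a@example.com", "verified_on": date(2021, 1, 10)},
    {"customer_id": "C002", "email": None,            "verified_on": date(2022, 2, 1)},
]

def review_data(rows, today):
    """Return (id, problem) pairs for incomplete or stale records."""
    issues = []
    for row in rows:
        if any(value is None for value in row.values()):
            issues.append((row["customer_id"], "missing field"))
        elif today - row["verified_on"] > MAX_AGE:
            issues.append((row["customer_id"], "stale record"))
    return issues

for customer_id, problem in review_data(records, today=date(2022, 3, 1)):
    print(f"{customer_id}: {problem}")  # flag for correction or re-verification
```

Running checks like this on a schedule, and acting on what they flag, turns "maintain your data" from a principle into a process.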
6. Monitor, update and refine your strategy
Like any strategy, your AI strategy will evolve over time, as your company learns and gains experience with this technology. Due to the speed at which AI technologies are developing, and the raft of legislation in the works around the world, you will need to monitor your company’s use of AI carefully, while also keeping a sharp eye on the horizon.
Firstly, you will need to monitor your company’s use of AI to determine whether it is achieving your business objectives, to improve system performance and data-set quality, and to address and correct any unforeseen or adverse effects.
You will also need to monitor the legal landscape. As indicated in our survey, there is currently a lack of knowledge within the business community about the regulatory framework for artificial intelligence. Depending on the area of law, between 55% and 75% of respondents were unaware of the relevant legislation, or even whether such legislation exists in their jurisdiction, and 63% did not know which public body is empowered to regulate and monitor AI in their country. Such gaps in awareness can lead to compliance issues down the road. It is therefore vital to have a procedure in place to monitor current and draft legislation, not only in the country where your company is headquartered but in every jurisdiction where you operate. Given the complex issues at stake, it may be advisable to engage external counsel to explain what such legislation means for your business, plan for current and future compliance obligations, and integrate those into your AI strategy. For multinational companies in particular, external counsel can help navigate the complex web of legislation across multiple jurisdictions.
Your monitoring program should also include developments in the artificial intelligence sector. This will help educate you and your team about the opportunities and challenges associated with AI and will also serve as inspiration to help identify potential use cases for your business.
These insights and market intelligence will feed into your strategy, allowing you to develop and refine it over time.
***