The deployment of AI-enabled technologies has become a key focus in many societies and an important driver for growth and development of high skill, technology-focused economies. AI has the potential to transform business models and operations, jobs and the delivery of public services. If developed correctly, AI will directly lead to higher productivity and economic growth.
Despite the huge benefits that AI presents, there are many challenges that need to be addressed, and governments around the world are each making their own policy decisions as they grapple with the same issues.
The creation of a coherent AI strategy is incredibly complex due to the inherent juggling act involved. Governments must decide where to strike the balance between innovation and regulation—weighing the importance of maintaining some control and apportioning accountability for these powerful new AI systems against the stifling impact of over-regulation.
Our survey respondents overwhelmingly supported a balanced approach to policy making, with two-thirds calling for a hybrid strategy that encourages innovation while also protecting the public interest.
Many governments have published official AI frameworks, ranging from executive orders (US) and development plans (China) to legislation (EU) and long-term strategies (UK, many European countries, and elsewhere). There are vastly different approaches—some countries favor official legislation and regulation, while others take a lighter-touch approach via industry-led whitepapers, guidelines or policy documents.
In terms of priorities, some countries have chosen to focus on areas in which they are already industry leaders and to develop AI in these sectors in order to maintain that leadership. Others have chosen to focus on developing AI systems in a human-centric way that benefits the nation as a whole. The UK’s recent strategy focuses on global collaboration and standards. In Europe, all countries acknowledge that local strategies are more effective as part of a broader cooperation framework. In this respect, the EU is playing a key role for its member states by harmonizing strategies across the region.
One consistent theme is a gradual but persistent shift away from technical and economic challenges toward a focus on ethics, privacy, safety, transparency and accountability.
For details about the specific regulations in existence at the time of publication, please refer to our Country Snapshots Guide (download here).
32 out of 36 countries reviewed have already developed their AI strategies.
- In 2016, the US hosted five workshops with academic leaders on the social, ethical, technological and economic aspects of AI, and published a development plan drawing directly on their conclusions.
- Japan has decided to focus part of its AI efforts on robotics, an area in which it is already a leader.
- The UK plans to pilot an “AI Standards Hub” to coordinate the country’s engagement in AI standardization globally, and explore with stakeholders the development of an AI standards engagement toolkit.
- The UAE AI Strategy 2031 focuses on developing legislation and regulation around AI and particularly on implementing AI education in high schools and universities.
- Denmark has set one of the targets in its AI strategy to use AI in care homes, filling a need resulting from an ageing population and lack of young caregivers.
- Canada has created “AI superclusters” to attract private funding and retain talent. Additionally, they have put systems in place to transfer IP from academic labs to commercial enterprises to speed up innovation and commercialization.
- San Francisco debated taxing robots that take jobs from individuals, with the potential of returning that money to displaced workers.
- To become a world leader in AI, China built a US$12.2 billion AI development park in Beijing to house 400 enterprises. Tianjin has created a US$16 billion AI fund to support the AI industry.
Funding and investment
The allocation of budgets and funding is another key challenge. Should the funding be used to upskill the labor force? Should it be pumped into creating a data set that is vast and inclusive enough to ensure all AI can be adequately used and tested? Should it go into an awareness campaign to increase public trust in AI? Should it be used to fund research and development?
Regardless of the policy choices, it is clear that government investment will not be enough to make real headway in the AI race; private sector funding is also vital. For example, in the third quarter of 2020 there were investments of US$39.5 billion in artificial intelligence startups in North America with 95% of this money coming from US companies. This level of investment has undoubtedly helped put the US ahead of the game.
AI strategies often dedicate significant amounts of a government’s annual budget to commissioning specific bodies to develop and oversee AI roadmaps, policies and guidelines. Additionally, some countries have gone further and have introduced government incentives and tax benefits to boost AI research, development and deployment.
Some countries are focused on preparing the economy and workforce for the AI revolution by investing in R&D and encouraging collaboration through multidisciplinary groups tasked with determining how the future should look. Alongside this, many are investing in universities as well as in science, technology, engineering and mathematics at all levels of education.
Skills and labor gaps
In most countries, AI development is hindered by a severe lack of AI researchers, software developers and data scientists. Countries that are falling behind in AI development are, in turn, less attractive to that small group of skilled individuals because they cannot offer the same salaries or opportunities—creating a vicious circle. Small businesses face similar limitations in attracting talent.
There are also legitimate fears over job losses and increasing economic disparity as AI replaces certain types of work. Across all industries, there is a pressing need to plan for the inevitable transition and retraining of displaced workers.
The talent issue is one in which governments, businesses and universities have an important role to play. We are witnessing the emergence of creative solutions, such as technical fellowships, which allow experienced tech specialists to impart their knowledge to others, or rotation schemes, which place technologists into political offices to help the public sector learn technical skills and discover ways to incorporate AI systems into their work.
- The Canadian government has introduced policies to make immigration an easier and more open process for those with AI-related skill sets, including the Ontario Express Entry and the Global Talent Stream.
- The UAE “Think AI” project was a series of roundtables between government accelerators and the private sector to keep track of AI’s impact on job losses and discuss key areas for improvement.
- In Australia, 59% of companies have determined a need for AI specialists and are using AI-as-a-service technology to tap into external AI capabilities without needing to develop their own in-house expertise.
- In the EU, many businesses are partnering with governments to upskill, reskill and reassign workers who have lost or changed jobs due to AI implementation. The EU has also created a high-level expert group that makes policy recommendations to address ethical, legal, social and economic AI issues, such as the changing job market.
- The UK has created multiple new visa routes to ensure it attracts the best talent from around the world. Examples include a new High Potential Individual visa, which greatly simplifies the process for internationally mobile individuals with the skills the country needs; eligibility will be open to applicants who have graduated from a top global university, with no job offer requirement. A new Global Business Mobility visa will also allow overseas AI businesses greater flexibility in transferring workers to the UK in order to establish and expand their businesses there.
- Finland conducted a “grand experiment” where tens of thousands of non-tech experts were taught basic concepts of AI. The experiment’s aim was to repurpose Finland’s economy toward high-end applications of artificial intelligence. The government could then determine what would be beneficial and where to invest.
Public trust and ethics
Governments have a challenging role to play in ensuring that there is sufficient public trust and support for the use and development of AI. This is absolutely crucial: our survey indicated that over two-thirds believe the lack of trust in AI is a significant barrier to implementation.
68% believe that lack of trust in AI is a significant barrier to implementation.
This task involves another balancing act, whereby regulators need to determine the trade-off between transparency and system vulnerability. While explaining how a system works is an important step in building trust, there is a risk of exposing AI systems to hacking and manipulation, as well as the potential loss of trade secrets related to the AI algorithms. Furthermore, AI programmers and users may be very protective about their systems and reluctant to receive external advice on potential development and enhancement.
However, given the inherent risk of bias within systems, it is vitally important that we have protocols and policies in place that allow external forces to monitor, examine and guide systems to ensure they are ethical and human-centric.
Data is critical to the creation, training, improvement and use of AI systems. However, there are many problems associated with data when it comes to advancing AI, including accessibility, quality, ethics and regulation. Governments need to balance the need of businesses and public bodies to access data when developing and testing AI against individuals’ right to privacy.
Again, we are seeing very different approaches. Some countries are making data widely and easily accessible, while others are developing legislation which prioritizes the right to privacy—the EU’s GDPR being the prime example. It goes without saying that countries with more relaxed regulations around data can advance their AI much faster. However, many feel the risk to privacy is not worth the benefit, and are seeking creative alternatives in order to support AI development.
- Many countries have created a regulatory sandbox, which allows businesses to use data sets in order to test innovative ideas under existing protections.
- Due to the huge number of SMEs in India, the government has been working with the World Economic Forum to create a democratized database to enable AI application creation through improved data accessibility.
- Data-set sharing is a key element for most strategies. For instance, the new UK strategy includes a very large focus on data and ensuring that data is accessible to aid the long-term needs of the AI ecosystem. This includes a specific action to “consider what valuable data sets the government should purposefully incentivize or curate that will accelerate the development of valuable AI applications.”
- Regulatory authorities are adapting to the new requirements. For instance, the Italian Data Protection Authority is one of several such offices to create an AI department.
A lack of awareness
Despite the flurry of government activity going on around the world, there is a clear lack of awareness within the business community of government efforts to promote and regulate AI. According to our survey, depending on the area of law, between 55% and 75% of respondents were unaware of the relevant legislation or even whether such legislation existed in their jurisdiction.
Equally concerning, 63% of respondents were not aware of the public body empowered to regulate or monitor AI in their country.
As we have discussed, to develop and execute an effective AI strategy, governments and the business community both have an important role to play, and this lack of awareness could indicate a need to come together for a robust dialogue around this important issue.
Furthermore, in order to mitigate potential compliance risks down the line, companies should monitor regulatory developments carefully—not only in their home jurisdiction but in all markets where they operate or where their customers are located. Best practices can be drawn from global sources, given the universal challenges that AI presents.