Disruption is coming to the manufacturing sector. This disruption will take many forms, such as the enhancement of equipment with internet-connected sensors so that it becomes part of the Internet of Things (IoT) or the increased deployment of robotics and autonomous systems, and will involve collecting and using vast quantities of data.
Disruption in the manufacturing sector will involve artificial intelligence. Underpinning all of these complex systems is artificial intelligence (AI): sophisticated computer models, often leveraging machine learning (ML) techniques, which harness high volumes of data feedback (for example, machine-to-machine (M2M) data produced by sensors) to continually improve their outputs. Industry leaders expect AI to produce gains in efficiency, productivity and safety in industry.
Disruption presents an opportunity to the manufacturing sector. A recent Accenture report estimated that AI would add around US$3.7 trillion to the global manufacturing sector by 2035.
The business case for AI in manufacturing
There has been a rapid boom in AI development and deployment in the last decade. According to a report from the World Economic Forum, machines already complete 29% of tasks today and may complete 71% of tasks by 2025. Government investment in AI has also increased significantly: the EU has created a Coordinated Plan for AI, which aims to increase funding to €20 billion per year across the Union.
These developments are already colouring the expectations of business leaders in the manufacturing sector. In industrial equipment companies, 71% of executives believe that AI will have a significant impact on their organisation and 78% believe that AI will have a significant impact on the industry as a whole.
Manufacturers who have been early adopters are seeing average productivity gains of 17-20% from sensor-connected “smart factories”. A McKinsey study, focusing on manufacturing use cases for the German market, found that converting a factory into a “smart factory” containing interconnected sound/vibration detection sensors could reduce annual maintenance costs by up to 10%, reduce downtime by up to 20% and cut inspection costs by 25%. The AI would provide predictive maintenance – in other words, suggested maintenance needs and schedules, provided to the workers carrying out the maintenance.
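The predictive maintenance idea described above can be illustrated with a minimal sketch. Real systems use trained ML models over many sensor channels; this assumes a single hypothetical vibration channel and flags a machine when the latest reading deviates sharply from its recent baseline.

```python
from statistics import mean, stdev

def maintenance_alert(vibration_readings, window=20, threshold=3.0):
    """Flag a machine for maintenance when its latest vibration
    reading deviates strongly from its recent baseline.

    Sketch only: a simple z-score over one channel stands in for
    a trained predictive-maintenance model.
    """
    baseline = vibration_readings[-window - 1:-1]   # recent history
    latest = vibration_readings[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

# Normal operation: readings cluster around 1.0 mm/s (illustrative units)
history = [1.0, 1.02, 0.98, 1.01, 0.99] * 4 + [1.0]
print(maintenance_alert(history))                 # steady signal -> False

# A sudden spike suggests bearing wear or imbalance
print(maintenance_alert(history[:-1] + [2.5]))    # -> True
```

In a deployed system the alert would feed the maintenance schedule suggested to workers, rather than acting autonomously.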
An analysis of the manufacturing sector found that a diverse range of industrial tasks (e.g. beverage packaging, industrial welding, car plants) all had the potential to tap into valuable data produced by machines and to deploy AI to unlock the value of that data for the business. Manufacturers were also conscious of their role in larger supply chains and of the ability to use AI to manage those chains. Every manufacturer consulted reported that AI was boosting efficiency, productivity, safety and health.
Put simply, the use of AI in manufacturing can reduce maintenance costs, supply markets with better and more cost-effective goods and improve the conditions and quality of human labour.
Potential use cases for AI in the manufacturing sector
There are a number of potential deployments of AI in the manufacturing sector. We have mentioned the role of predictive maintenance of “smart factories” above, but should not overlook, for example: the potential use of autonomous vehicles for transportation and haulage; collaborative and context-aware robots (co-bots); automated quality testing systems; supply chain management; and business support function automation.
A co-bot can be, for example, a robot which is “taught” by a human instructor to move in a certain way (for example, to pass equipment or tools to a worker). Once the co-bot has finished learning the move, it can then continue to perform it independently, adjusting to its environment via the use of sensors. Co-bots can increase productivity by up to 20%, particularly in labour-intensive settings.
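The “teach-and-repeat” cycle described above can be sketched as follows. All class names and coordinates are hypothetical; a real co-bot controller interpolates smooth trajectories and fuses multiple sensors, where this sketch simply records poses and replays them with a sensed offset.

```python
class TeachAndRepeat:
    """Minimal sketch of a co-bot 'teach-and-repeat' cycle.

    During teaching, the human guides the arm and each pose is
    recorded; during playback the recorded path is replayed,
    shifted by whatever offset the sensors report (e.g. the
    worker's tray is 5 mm higher than during teaching).
    Poses are hypothetical (x, y, z) tuples in mm.
    """
    def __init__(self):
        self.waypoints = []

    def record(self, pose):
        self.waypoints.append(pose)

    def replay(self, sensor_offset=(0, 0, 0)):
        # Adjust each taught pose by the sensed environment offset.
        return [tuple(p + o for p, o in zip(pose, sensor_offset))
                for pose in self.waypoints]

bot = TeachAndRepeat()
for pose in [(0, 0, 100), (50, 0, 100), (50, 40, 80)]:  # human demonstration
    bot.record(pose)

# Sensors detect the worker's tray 5 mm higher than during teaching
print(bot.replay(sensor_offset=(0, 0, 5)))
# [(0, 0, 105), (50, 0, 105), (50, 40, 85)]
```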
Automated quality testing can also leverage AI. Using a camera and advanced image-recognition techniques, these AI systems inspect products on the production line and carry out automatic fault detection. Here, productivity increases of up to 50% are possible, and AI-based visual inspection may increase defect detection rates by up to 90% compared with inspection by human workers.
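The decision logic around such an inspection system can be sketched in a few lines. Real AI inspection relies on trained image-recognition models; this assumed example stands in a naive pixel comparison against a known-good reference image, purely to show how a quality score drives a pass/reject decision.

```python
def defect_score(image, reference):
    """Mean absolute pixel difference between a product image and a
    known-good reference (both as 2D lists of grayscale values 0-255).

    Sketch only: a trained model would replace this naive comparison.
    """
    diffs = [abs(a - b)
             for row_img, row_ref in zip(image, reference)
             for a, b in zip(row_img, row_ref)]
    return sum(diffs) / len(diffs)

def inspect(image, reference, threshold=10.0):
    return "reject" if defect_score(image, reference) > threshold else "pass"

reference = [[200, 200], [200, 200]]     # golden sample
good_part = [[198, 201], [200, 199]]     # minor sensor noise only
scratched = [[198, 60], [200, 199]]      # dark scratch in one region

print(inspect(good_part, reference))     # pass
print(inspect(scratched, reference))     # reject
```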
Finally, AI-based approaches to forecasting could be used in the context of supply chain management. AI tools can connect relevant internal and external data for more accurate demand forecasting. Better forecasting will enable lower inventory levels across the supply chain and allow the flow of materials to be adjusted based on real-time data (such as the weather). It is predicted that this could result in up to 50% fewer forecasting errors – and, more strikingly, up to a 65% reduction in lost sales due to product unavailability.
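A toy version of the idea of combining internal sales history with an external signal might look like the following. The smoothing constant, sales figures and weather factor are all illustrative assumptions; production systems use far richer ML models.

```python
def forecast_demand(history, alpha=0.4, external_factor=1.0):
    """One-step demand forecast by simple exponential smoothing,
    scaled by an external signal (e.g. 1.2 for a hot-weather
    weekend boosting beverage sales).

    Sketch only: illustrates blending internal history with
    external real-time data, not a production forecasting model.
    """
    level = history[0]
    for actual in history[1:]:
        level = alpha * actual + (1 - alpha) * level
    return level * external_factor

weekly_units = [120, 130, 125, 140, 150]
base = forecast_demand(weekly_units)
print(round(base, 1))                                          # smoothed baseline
print(round(forecast_demand(weekly_units, external_factor=1.2), 1))  # hot weekend
```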
These examples, and the efficiency predictions above, are drawn from a recent McKinsey study.
What is the relevant legal framework for implementing AI?
Before deploying an AI system, a business will have to consider how to navigate the current legal and regulatory frameworks which govern their development and deployment.
The use of AI may raise a number of legitimate concerns. These include data privacy risks where the data processed by the systems includes personal data of employees or suppliers (for example, facial recognition and biometric systems used for monitoring and security purposes). In the EU, the General Data Protection Regulation (GDPR) applies, alongside local privacy laws in each jurisdiction, such as the Data Protection Act 2018 in the UK.
Alternatively, these may include the inherent risks of developing and deploying an AI system. There is no specific EU-wide legislation that governs AI. However, the High-Level Expert Group (HLEG) on Artificial Intelligence, set up by the European Commission, published its “Ethics Guidelines for Trustworthy AI” in April 2019. Under these guidelines, AI systems should be lawful, ethical and robust, and should meet seven key requirements in order to be deemed “trustworthy”. The guidance is not binding or enforceable, but it may be taken into account by other bodies (e.g. privacy regulators).
In addition, work has been carried out on developing mechanisms to implement the “Ethics Guidelines for Trustworthy AI”. A recent Working Paper published by AI industry experts and academics (including researchers from the University of Oxford, the University of Toronto and UC Berkeley) provides recommendations on how to improve the auditing of claims made about products developed by the AI industry.
Finally, in 2020, the Information Commissioner’s Office in the UK published guidance for organisations looking into implementing AI systems, including guidance for “Explaining decisions made with AI” (jointly with The Alan Turing Institute) and a framework for auditing AI. These guidelines may be taken into account by the ICO in carrying out enforcement action where personal data is involved, such as imposing fines under the GDPR.
Risks in implementing AI
The EU “Ethics Guidelines for Trustworthy AI” contain a handy Assessment List in Chapter III, which flags many of the risks inherent in implementing AI systems.
AI in the context of manufacturing may process personal data (for example, in the measuring of worker efficiency and the calculation of bonuses, or the use of facial and other biometric recognition tools for access into secure facilities). Where AI is processing personal data, there are a number of additional challenges around privacy and data governance.
In addition, there may be further challenges regarding the fairness and reliability of the algorithm. For AI which monitors, assesses and calculates worker efficiency, there should be a procedure to ensure that the system neither creates nor reinforces unfair bias (for example, checking whether the algorithm was designed with the dataset in mind and whether processes are in place to test for potential bias). With facial recognition technologies, it would be necessary to ensure that the dataset used to train the technology covered a sufficiently broad range of demographics, so that the system identifies people of all racial and ethnic origins equally reliably, rather than identifying one particular class of worker more reliably than others. The organisation should also have a mechanism in place so that any potential unfairness (bias, discrimination or poor performance of the system) can be flagged.
In the example of the facial recognition tool, a business may also have to contend with specific multi-jurisdictional laws on employment monitoring. If consent is required (because automated decision-making is involved to determine whether or not to allow someone into a secure facility, with no human involvement, for example), there may be complications regarding the availability of employee consent, especially if no alternative to being scanned by the facial recognition tool is offered.
AI in the context of manufacturing will often process M2M data rather than personal data. Whilst this simplifies the AI challenges, as data privacy risks and the GDPR fall away, it does not eliminate the challenges entirely.
For AI which has a role in operating, checking and improving functioning and efficiencies in industrial machinery, technical robustness and safety is the major challenge. In particular, when developing or procuring an AI system which can be deployed, businesses will be particularly interested in whether the AI system has been assessed to withstand potential attacks (along with unexpected functioning in new environments) as well as whether there are fallback plans and similar general safety mechanisms in place. An autonomous system which is monitoring the condition of industrial machinery and recommending when urgent repairs are required should have a safety mechanism to ensure that it can “switch off” the AI if there are numerous anomalies in the data it is receiving (such as if the machine has been subjected to a cyberattack). An assessment of the potential harms that may arise from the system may also intersect with a risk analysis regarding product liability and health and safety rules, depending on the magnitude of harm that may arise from the malfunctioning of the AI system.
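The “switch off” fallback described above can be sketched as a simple circuit breaker. The window size, anomaly threshold and reading bounds are all illustrative assumptions; a real safety system would be engineered and certified to the applicable machinery and functional-safety standards.

```python
from collections import deque

class AnomalyCircuitBreaker:
    """Sketch of a fallback safety mechanism: if too many anomalous
    readings arrive within a recent window (possibly indicating a
    sensor fault or a cyberattack), stop trusting the AI's
    recommendations and fail over to manual operation.
    """
    def __init__(self, window=10, max_anomalies=3):
        self.recent = deque(maxlen=window)   # rolling anomaly flags
        self.max_anomalies = max_anomalies
        self.ai_enabled = True

    def ingest(self, reading, lower=0.0, upper=100.0):
        is_anomaly = not (lower <= reading <= upper)
        self.recent.append(is_anomaly)
        if sum(self.recent) >= self.max_anomalies:
            self.ai_enabled = False          # "switch off" the AI outputs
        return self.ai_enabled

breaker = AnomalyCircuitBreaker()
for reading in [42.0, 55.1, 48.9, 999.0, -3.2, 57.0, 1234.5]:
    breaker.ingest(reading)

print(breaker.ai_enabled)   # False: three anomalies tripped the breaker
```

Note that the breaker latches: once tripped, the AI stays disabled until a human resets it, which is the conservative choice for a safety fallback.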
The business deploying the AI system should also ask pertinent questions about the level of accuracy required in the context of the system (how accuracy is measured and what harm would be caused if the AI system made inaccurate predictions), and whether there is a strategy in place to monitor and test whether the AI system’s outputs are reliable and reproducible.
An important point in the continuing deployment of Internet of Things (IoT) and other “smart” technologies is whether the technology (for example, a piece of industrial equipment) has been automatically equipped with a connection to the open internet. If so, this raises the risk profile of the system, and general cybersecurity best practice becomes relevant (e.g. whether the technology can be equipped with firewalls / antivirus software, password hygiene, and the availability of security updates / patching).
The solutions for implementing AI
Whilst there are risks inherent in deploying novel technologies such as AI, the advantages mean that businesses in the manufacturing sector will want to understand how to mitigate those risks so that they can reap the benefits of better using and understanding their data.
Where personal data is involved in the system, organisations will also need to comply with the overarching accountability principle under GDPR, which means good data governance and implementing privacy by design and by default when using equipment to process personal data or run AI algorithms (Article 5(2) GDPR).
How do organisations overcome the privacy and AI hurdles?
We have set out a few key considerations below:
- Data Protection Impact Assessments (DPIAs): The use of novel technologies and the processing of integrated data sets using AI may trigger the requirement to conduct a DPIA (Article 35 GDPR). The ICO has identified “combining, comparing or matching data from multiple sources” as a factor necessitating a DPIA. Depending on the processing in question, the use of data or AI could also lead to profiling data subjects on a large scale (another automatic trigger for a DPIA). Using innovative technology or processing biometric or genetic data when coupled with another trigger from the European guidelines on DPIAs (e.g. systematic monitoring) also results in an organisation having to carry out a mandatory DPIA.
- Enhanced transparency: In order to address the GDPR requirement for lawful, fair and transparent processing, and the transparency concerns in the “Ethics Guidelines for Trustworthy AI”, businesses seeking to use AI should review their current privacy notices with a view to reaching an “enhanced” transparency standard. This might involve amending your organisation’s privacy notice to include information about the purpose of processing personal data using the AI system. You will also need to identify a lawful basis for processing; if relying on legitimate interests, you could consider including a link to your organisation’s legitimate interests assessment.
- Internal policies: One of the ways an organisation can demonstrate accountability with the GDPR is through adopting and implementing internal policies. The ICO’s guidance on “Explaining decisions made with AI” emphasises the need for policies that set out rules and responsibilities concerning the explanation of AI-enabled decisions to individuals.
- Privacy by design and by default: Embedding privacy by design and by default in the deployment of the AI should help to ensure that the business is moving towards good data governance. Techniques to implement include:
- data minimisation measures, to ensure that only data which is strictly necessary for the purposes is being collected, processed and retained by the system;
- purpose limitation measures, such as segregating datasets to ensure that they are used for the purpose they were collected for; and
- security measures, such as the anonymisation or pseudonymisation of data where possible and the implementation of access controls, audit logs and encryption.
- Solely automated decision-making: Finally, if the AI processes personal data and is deployed for use in solely automated decision-making (including profiling) with no meaningful human involvement in the decision-making process, and this results in a legal or “similarly significant effect” on the individual, this will have consequences under Article 22 GDPR. Your organisation will have to ensure that it has an appropriate legal basis to carry out the solely automated decision-making (usually, it will involve the data subject’s explicit consent) and that there are suitable safeguards, in particular a right of appeal against the decision to a human decision-maker.
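The pseudonymisation technique mentioned in the security measures above can be sketched in a few lines. The key, worker IDs and record structure are illustrative assumptions; the essential point is that the key must be held separately from the pseudonymised dataset for the data to count as pseudonymised under the GDPR.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-me-separately"  # illustrative only

def pseudonymise(worker_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same worker always maps to the same token, so efficiency
    statistics can still be aggregated per worker, but the token
    cannot be reversed without the key.
    """
    return hmac.new(SECRET_KEY, worker_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical efficiency record with the direct identifier removed
record = {"worker": pseudonymise("employee-0042"), "units_per_hour": 87}
print(record["worker"][:16], record["units_per_hour"])
```

A keyed hash is preferred over a plain hash here because an unkeyed hash of a small identifier space (e.g. payroll numbers) can be reversed by brute force.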
For AI systems, we would recommend an enhanced DPIA which combines the standard criteria for assessment set out in Article 35 GDPR (and the consideration of the Data Protection Principles in Article 5 GDPR) with assessment criteria based on the “Ethics Guidelines for Trustworthy AI” that assess the particular characteristics of AI systems (e.g. transparency, robustness, bias reduction, accountability).
In summary, whilst the many potential use cases of AI in manufacturing will, we anticipate, lead to rapid evolution of the sector and present great opportunities for businesses, organisations will need to take account of the unique implications of this new technology and navigate the data privacy and AI risks with good governance measures. We are familiar with implementing these measures for our clients and hope to have left you with some useful “food for thought” for your own AI implementation strategy.
- Microsoft: The Future Computed – AI & Manufacturing – Microsoft, 2019 (available at link).
- Manufacturing the Future: Artificial intelligence will fuel the next wave of growth for industrial equipment companies – Accenture, 2018 (https://www.accenture.com/_acnmedia/pdf-74/accenture-pov-manufacturing-digital-final.pdf).
- Future of Jobs Report – World Economic Forum, 2018 (http://reports.weforum.org/future-of-jobs-2018/).
- See Footnote 2.
- IDC FutureScape: Worldwide Operations Technology 2017 Predictions – IDC, January 2017 (Doc #US42261017, web conference by Lorenzo Veronesi and Marc Van Herreweghe); see Footnote 1.
- Smartening up with Artificial Intelligence (AI) – What’s in it for Germany and its Industrial Sector? McKinsey, 2017 (available at link).
- See Footnote 1.
- See Footnote 6.
- The Ethics Guidelines for Trustworthy AI are accessible here: https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines#Top
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, accessible here: https://arxiv.org/pdf/2004.07213.pdf
- Explaining decisions made with AI, published May 2020 and accessible here: https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/explaining-decisions-made-with-artificial-intelligence/
- AI Auditing Framework (at the time of publishing this article, a draft for consultation, published February 2020 and accessible here): https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf