The continued growth of urbanisation presents new challenges. According to the United Nations’ Department of Economic and Social Affairs, 55% of the world’s population resides in urban areas, a figure expected to rise to 68% by 2050.
This rapid growth will result from rising populations in major cities, coupled with the expansion of regional cities. It will, in turn, create pressure for: (1) sustainable environment initiatives, with demands for more and better infrastructure in the diminishing space available; and (2) improved quality of life for city dwellers at a more affordable cost.
Smart Cities are part of the solution to the growing challenges of urbanisation. A study by McKinsey found that “Smart City” technology can improve certain key quality of life indicators by 10–30%, including reducing crime, lowering health burdens, shortening commutes and lowering carbon emissions.
What are Smart Cities?
A “Smart City” is an urban area that relies on information and communication technologies to drive economic growth, improve quality of life and underpin governance structures. For example, a municipal authority could interconnect its transport and energy grid systems, build sensor-equipped energy-efficient buildings and develop communications that enable better monitoring of and access to healthcare, emergency and other public services.
The McKinsey study suggested that there are three layers that intertwine to make a Smart City function. Firstly, the technological base consists of smartphones and sensor-equipped devices producing data and connecting to high-speed communication networks. Secondly, computers process the data to deliver workable solutions for specific problems. Thirdly, the general public interacts with these technologies – and all of the applications of Smart City technologies depend on individuals simultaneously using them and providing data to generate predictions.
How can Artificial Intelligence facilitate the development of Smart Cities?
To function, Smart City technologies require the processing of enormous quantities of data, or “Big Data”. Big Data has been described in terms of the three “Vs” as “high-volume, high-velocity and/or high-variety information assets”: that is, massive datasets, processed very quickly (through the use of algorithms) and drawn from a variety of data sources, including combinations of different datasets.
Big Data and artificial intelligence (AI) are interlinked. AI refers to various methods “for using a non-human system to learn from experience and imitate human intelligent behaviour”. AI can efficiently sift through large quantities of Big Data to generate data predictions and cost-effective solutions to fuel Smart City technologies.
The way this works depends on whether the AI is supervised or unsupervised. In supervised learning, labelled datasets with target values are created to train AI networks to find specific solutions in the collected raw data. The trained AI then carries out programmed tasks and actions, whilst exploring new opportunities and possibilities that may provide better outcomes than current solutions. In unsupervised learning, non-labelled and non-classified datasets are used to train and query AI networks, which then find latent characteristics and hidden patterns in the data.
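The distinction above can be illustrated with a deliberately minimal sketch. The smart-meter readings, labels and thresholds below are invented for illustration only; real Smart City systems use far more sophisticated models (neural networks, large feature sets) than this toy example.

```python
# Illustrative sketch only: supervised learning uses labelled examples to
# learn a decision rule; unsupervised learning finds structure in raw,
# unlabelled data. All readings (kWh) are hypothetical.

def train_supervised(readings, labels):
    """Supervised: labelled examples ('normal'/'anomalous') yield a
    decision threshold halfway between the two class means."""
    normal = [r for r, l in zip(readings, labels) if l == "normal"]
    anomalous = [r for r, l in zip(readings, labels) if l == "anomalous"]
    threshold = (sum(normal) / len(normal) + sum(anomalous) / len(anomalous)) / 2
    return lambda r: "anomalous" if r > threshold else "normal"

def cluster_unsupervised(readings, iterations=10):
    """Unsupervised: no labels; a simple 1-D two-means finds latent
    groupings (e.g. day-time vs. night-time usage) in the raw data."""
    lo, hi = min(readings), max(readings)
    for _ in range(iterations):
        a = [r for r in readings if abs(r - lo) <= abs(r - hi)]
        b = [r for r in readings if abs(r - lo) > abs(r - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return lo, hi  # the two cluster centres

# Supervised: learn from labelled history, then classify a new reading.
classify = train_supervised([2.0, 2.5, 9.0, 10.0],
                            ["normal", "normal", "anomalous", "anomalous"])
print(classify(8.5))  # → anomalous

# Unsupervised: discover two usage clusters with no labels at all.
print(cluster_unsupervised([1.0, 1.2, 0.8, 6.0, 6.5, 5.5]))
```

The key difference to note is the input: the supervised function needs human-provided labels, whereas the unsupervised one is handed only raw readings and discovers the groupings itself.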
Potential use cases of AI in Smart Cities
Public transit. Cities with vast transit infrastructure and systems can benefit from applications that harmonise the experience of their riders. Passengers of trains, buses and cars can provide real-time information through their mobile apps to communicate delays, breakdowns and less congested routes. This may, in turn, encourage other commuters to alter their choice of travel routes, and ease future congestion. Collecting and analysing public transit usage data can also help cities make more informed decisions when modifying public transport routes and timings, and allocate infrastructure budgets more accurately. For example, Dubai has completed a number of Smart City projects, one of which monitored the condition of bus drivers. This monitoring contributed to a 65% reduction in accidents caused by exhaustion and fatigue.
Public safety. The same networks of sensors and cameras can be used to save lives and lower crime. Traffic lights and congestion data can be used by emergency services to reach their destinations more quickly and safely. Cities can gather data on accidents, or choose other factors to measure, in order to develop predictive and preventative measures for the future.
Building automation systems. Sensors can be placed in strategic building locations that will help to gather information on energy usage and predict consumer behaviour. For example, store owners and retailers can use sensors to track the peak times that individuals enter and use the stores, as well as towards which areas the public gravitates. Through the use of AI, the data generated can help to produce consistent predictions and track daily, weekly and seasonal differences.
Power grids. AI and Smart Cities have the potential to enhance the safety of power grids and improve performance management. Smart grids (power networks, such as generation plants, that are embedded with computer technology) can draw on large quantities of smart meter readings to assess and predict demand response and load clustering. Prediction models can be built on these grids to forecast the price of and demand for energy over specific intervals. Research has found that these models can surpass existing methods in the accuracy of price and load forecasting.
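A much-simplified sketch of the kind of forecasting described above is a least-squares trend fit extrapolated to a future interval. The daily peak loads below are invented for illustration; production grid models account for seasonality, weather and many other features.

```python
# Illustrative sketch: fit a linear demand trend (load = a + b*t) to
# historical loads by ordinary least squares, then extrapolate it.
# The figures are hypothetical, not from any real grid.

def fit_linear_trend(loads):
    """Least-squares fit of load = a + b * t over time steps t = 0..n-1."""
    n = len(loads)
    ts = list(range(n))
    t_mean = sum(ts) / n
    y_mean = sum(loads) / n
    b = (sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, loads))
         / sum((t - t_mean) ** 2 for t in ts))
    a = y_mean - b * t_mean
    return a, b

def forecast(loads, steps_ahead):
    """Extrapolate the fitted trend to a future time step."""
    a, b = fit_linear_trend(loads)
    return a + b * (len(loads) - 1 + steps_ahead)

# Hypothetical daily peak loads (MW) rising steadily over a week.
history = [100.0, 102.0, 101.0, 104.0, 105.0, 107.0, 106.0]
print(round(forecast(history, steps_ahead=2), 1))  # projected load two days out
```

Real smart-grid forecasters replace the linear trend with models that capture daily and seasonal cycles, but the pipeline is the same: fit on historical meter data, then predict demand for a future interval.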
What is the relevant legal framework for implementing AI?
Vendors developing future Smart City technologies leveraging AI systems (and national and local governmental organisations procuring those technologies for their cities) will have to consider how to navigate the current legal and regulatory frameworks which govern the development and deployment of AI systems.
The European Union recognises the strategic importance of developing the AI industry. In February 2020, it released its white paper on “Artificial Intelligence – A European Approach to Excellence and Trust”, exploring both the opportunities presented by AI and the possible requirement for a future regulatory framework.
Recognising the potential for AI and for public-private partnerships, in 2019 the UK government became the first government to test a new set of AI procurement guidelines developed by the World Economic Forum. Although the UK was the first, it will become increasingly necessary for all governments to have robust frameworks in place in order to ensure that the products they are procuring are beneficial for their citizens.
The use of AI may raise a number of legitimate concerns. These may include data privacy risks, where the data processed by the systems includes personal data of employees or suppliers, such as facial recognition and biometric systems for monitoring and security purposes. In the EU, the General Data Protection Regulation (GDPR) applies, alongside local privacy laws in each jurisdiction, such as the Data Protection Act 2018 in the UK.
Alternatively, these may include the inherent risks of developing and deploying an AI system. There is no specific EU-wide legislation that governs AI. However, the High-Level Expert Group (HLEG) on Artificial Intelligence, set up by the European Commission, published its “Ethics Guidelines for Trustworthy AI” in April 2019. In accordance with these guidelines, AI systems should be lawful, ethical and robust and should meet seven key requirements in order to be deemed “trustworthy”. The guidance is not binding or enforceable, but it may be taken into account by other bodies (e.g. privacy regulators).
In addition, work has been carried out on developing mechanisms to implement the “Ethics Guidelines for Trustworthy AI”. A recent working paper published by AI industry experts and academics (including from the University of Oxford, the University of Toronto and UC Berkeley) provides recommendations on how to improve the auditing of claims made about products developed by the AI industry.
Finally, in 2020, the Information Commissioner’s Office (ICO) in the UK published guidance for organisations looking into implementing AI systems, including guidance for “Explaining decisions made with AI” (jointly with The Alan Turing Institute) and a framework for auditing AI. These guidelines may be taken into account by the ICO in carrying out enforcement action where personal data is involved, such as imposing fines under the GDPR.
Risks in implementing AI
The EU “Ethics Guidelines for Trustworthy AI” contain a handy Assessment List in Chapter III, which flags many of the risks inherent in implementing AI systems.
AI in the context of Smart Cities may process personal data (e.g. in delivering and monitoring the use of power in an individual’s home, or monitoring the movements of and serving relevant adverts based on geo-location to potential consumers moving around the urban landscape). It may also include the use of facial recognition to track and monitor people moving around public spaces, for both safety and personalisation reasons. Where AI is processing personal data, there are a number of additional challenges around privacy and data governance.
In addition, there may be further challenges regarding the fairness and reliability of the algorithm. For example, where facial recognition technologies are deployed for policing and public safety, the training dataset should represent a sufficiently broad range of demographics, so that the system identifies people of all racial and ethnic origins with equal reliability, rather than performing better for one particular group. Purchasers of these technologies should ask what steps the developers took to ensure that the AI avoided creating or reinforcing unfair bias in the design of the system (for example, whether the algorithm was designed with the dataset it would typically be processing in mind, such as the citizens of a diverse metropolis, and whether processes were in place to test for potential bias). On deployment, governance mechanisms should be put in place so that citizens can flag any potential unfairness, including bias, discrimination or poor performance of the system.
Meeting transparency requirements is a major challenge in Smart Cities. In particular, it is necessary to communicate effectively to citizens moving around a Smart City when they are interacting with AI systems. The transparency requirements of Articles 13 and 14 GDPR can be onerous and are not necessarily practical in an urban environment – not even with very large signs! Therefore, it will be advisable to develop signage using commonly recognised signs and symbols, along with interactive signs and QR codes that allow individuals to access fuller information (i.e. a layered approach, with fuller privacy information available on the internet).
Finally, there is the challenge of establishing appropriate human oversight mechanisms. For those involved in the procurement of AI systems, there needs to be consideration of the appropriate level of human control for the particular “Smart City” infrastructure. There are a number of different models that could be considered, but the challenge is the volume and velocity of data moving through these Big Data systems and where meaningful human supervision can realistically be introduced. In any event, there will need to be a mechanism in place to facilitate the system’s auditability.
An important point in the continuing deployment of sensor-equipped Internet of Things (IoT) and other “Smart City” technologies is whether the technology, such as a smart meter or smart traffic light, has been automatically equipped with a connection to the open internet. If so, this raises the risk profile of the system, and general cybersecurity best practice becomes relevant (e.g. whether the technology can be equipped with firewalls/antivirus software, password hygiene, and the availability of security updates/patching).
The solutions for implementing AI
Whilst there are risks inherent in deploying novel technologies, such as AI, the advantages will mean that both developers and purchasers of “Smart City” technologies will want to understand how to mitigate those risks so that they can reap the benefits of being able to better use and understand their data.
Where personal data is involved in the system, organisations such as national and local governments and other buyers of systems that are deemed the “controllers” of data will also need to comply with the overarching accountability principle under the GDPR (Article 5(2)), which means good data governance and implementing privacy by design and by default when using equipment to process personal data or run AI algorithms.
How do organisations overcome the privacy and AI hurdles?
We have set out a few key considerations below:
- Data Protection Impact Assessments (DPIAs): The use of novel technologies and the processing of integrated data sets using AI may trigger the requirement to conduct a DPIA (Article 35 GDPR). The UK ICO has identified “combining, comparing or matching data from multiple sources” as a factor necessitating a DPIA. Depending on the processing in question, the use of data or AI could also lead to profiling data subjects on a large scale (another automatic trigger for a DPIA).
- Enhanced transparency: In order to address the requirement in the GDPR for lawful, fair and transparent processing, and the transparency concerns in the “Ethics Guidelines for Trustworthy AI”, organisations seeking to use AI in Smart City technologies should consider their transparency obligations carefully. This may involve developing a “layered” approach, such as signs and symbols around the urban landscape. It may also involve revising their current privacy notices to reach an “enhanced” transparency standard. You will also need to identify a lawful basis for processing.
- Internal policies: One of the ways an organisation can demonstrate accountability with the GDPR is through adopting and implementing internal policies. The ICO’s guidance on “Explaining decisions made with AI” emphasises the need for policies that set out rules and responsibilities concerning the explanation of AI-enabled decisions to individuals.
Using innovative technology or processing biometric or genetic data when coupled with another trigger from the European guidelines on DPIAs (e.g. systematic monitoring) also results in an organisation having to carry out a mandatory DPIA.
For AI systems, we would recommend an enhanced DPIA which combines the standard criteria for assessment set out in Article 35 GDPR (and the consideration of the Data Protection Principles in Article 5 GDPR) with assessment criteria based on the “Ethics Guidelines for Trustworthy AI” that assess the particular characteristics of AI systems (e.g. transparency, robustness, bias reduction, accountability).
For large-scale city projects, it is also recommended that a Fundamental Rights Impact Assessment and any Equality Impact Assessment involving the participation of relevant stakeholders (such as members of the general public who would be affected by the technology) are carried out in advance.
If using public interest as your lawful basis, you should consider documenting this internally and making any DPIA (see above) available for public access.
- Privacy by design and by default: Embedding privacy by design and by default in the deployment of the AI should help to ensure that the organisation is moving towards good data governance. Techniques to consider implementing include:
- data minimisation measures, to ensure that only data which is strictly necessary for the purposes is being collected, processed and retained by the system;
- purpose limitation measures, such as segregating datasets to ensure that they are used for the purpose they were collected for; and
- security measures, such as the anonymisation or pseudonymisation of data where possible and the implementation of access controls, audit logs and encryption.
- Solely automated decision-making: Finally, if the AI processes personal data and is deployed for use in solely automated decision-making (including profiling) with no meaningful human involvement in the decision-making process, and this results in a legal or “similarly significant effect” on the individual (e.g. the prioritisation of emergency services calls in a city based on data relating to the citizens making emergency calls), this will have consequences under Article 22 GDPR. Your organisation will have to ensure that it has an appropriate legal basis to carry out the solely automated decision-making (usually, it will involve the data subject’s explicit consent) and that there are suitable safeguards, in particular a right of appeal against the decision to a human decision-maker.
In summary, while the use cases of Smart City technologies promise to revolutionise the way we live in our urban areas, both organisations in the public sector procuring these systems and in the private sector developing them will need to take account of the unique implications of this new technology and navigate the data privacy and AI risks with good governance measures. We are familiar with implementing these measures for our clients and hope to have left you with some useful “food for thought” for your own AI implementation strategy.
- Information Commissioner’s Office, Draft Guidance on the AI Auditing Framework, p. 6: https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-au
- The Ethics Guidelines for Trustworthy AI are accessible here: https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines#Top
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, accessible here: https://arxiv.org/pdf/2004.07213.pdf
- Explaining decisions made with AI, published May 2020 and accessible here: https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/explaining-decisions-made-with-artificial-intelligence/
- AI Auditing Framework (at the time of publishing this article, a draft for consultation, published February 2020 and accessible here): https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf