Business Going Digital (brought to you by Dentons)

Helping companies in the digital transformation of their business.

How can businesses protect themselves from deepfake attacks?

By Nick Graham, Calvin Chiu, Frédérique de La Chapelle, and Anouschka van de Graaf
October 2021

Deepfakes are among the fastest-evolving technologies today. The term “deepfake” derives from deep learning (a type of machine learning), and refers to synthesized or superimposed images, videos or voice recordings created by artificial intelligence (AI) using existing images, videos or voice recordings. Use of the technology has risen meteorically as the underlying AI continues to develop. In 2019, cybersecurity specialist Deeptrace estimated that the number of deepfake videos online had doubled to nearly 15,000. In 2020, Sentinel reported that deepfakes had grown by 900% to more than 145,000.

While deepfakes can be used in legitimate and productive ways (see examples below), the potential for fraud is massive and cannot be ignored.

In December 2019, the first federal legislation regarding deepfakes was signed into law in the US. Such an approach may one day be adopted by governments in other countries but, in the interim, there is little to discourage the spread and growth of deepfake technology.

In this article, we review the existing legal framework in the US, UK, European Union and China as well as propose some possible ways for businesses to protect themselves from potential reputational and commercial losses caused by deepfakes.

Current state of the technology

Enabled by machine learning, this technology can produce convincing footage of a person performing an act that has never actually occurred in real life. In practice, deepfakes take the following most common forms:

  • Face re-enactment, where advanced software is used to manipulate the features of a real person’s face.
  • Face generation, where advanced software is used to create an entirely new face using data from images of many real faces. The result is an image that does not reflect a real person.
  • Speech synthesis, where advanced software is used to recreate a person’s voice.

The technology attracted a lot of attention after 2018, when deepfake videos of various political figures (such as Donald Trump, Boris Johnson, and Jeremy Corbyn) began to be circulated on social media, and people saw for themselves how realistic the reconstructions could be.

Business opportunities with deepfakes

Deepfakes can present excellent business opportunities, especially in the media and entertainment sector. For example, deepfake technology can re-create famous people of the past and have them interact with the audience. It can change the age of actors or manipulate their voices to sound younger or older in a movie. It can also render video games more immersive, allowing players to insert themselves into the action.[1] In each case, however, the necessary copyright, personal data and contractual arrangements, as well as notifications to users, need to be in place.

Risk to businesses

As with individuals, companies are at risk of reputational damage at the hands of deepfakes. But the most obvious and potentially the most worrying risk to businesses from this technology is its potential to assist criminals in the commission of fraud. The ability to look and sound like anyone, including those authorized to approve payments from the company, gives fraudsters an opportunity to exploit weak internal procedures and extract potentially vast sums of money. The schemes would essentially be much more sophisticated versions of phishing, including Business Email Compromise scams, and are significantly harder to detect.

These sorts of attacks have already started to occur, with The Wall Street Journal reporting that a UK energy company’s chief executive was tricked into wiring €200,000 after a fraudster used AI to mimic the exact voice of his boss purporting to order the payment. Cybersecurity firm Symantec said that it had come across at least three cases of deepfake voice fraud in 2019 employing similar tactics, which resulted in millions of dollars of losses. As this technology becomes more sophisticated, we can expect more attacks of this nature. Given the rise of remote working following the COVID-19 pandemic, and the subsequent increase in business undertaken over the phone or via video conference, businesses have become even more vulnerable to such exploitation.

Moreover, new forms of deepfakes are popping up, for example, manipulated aerial imagery, as reported by Wired. Such images present a commercial risk given that they are used for digital mapping and guiding investments.

Legal framework

United States

In December 2019, Donald Trump signed into law the first federal legislation regarding deepfakes. In addition, three states have enacted laws to address deepfakes. California and New York have established a private right of action for those harmed by deepfakes, and Virginia has amended its penal laws to criminalize the sharing of deepfakes with the requisite intent and without consent.

State laws, such as the California Consumer Privacy Act, the Illinois Biometric Information Privacy Act and the New York SHIELD Act, are intended to protect defined personal information of residents. However, the very fact that deepfake content is falsified and artificially created means that victims of deepfakes may have difficulty claiming that there was a privacy violation.

Before a deepfake can become the subject of litigation, its creator needs to be identified, which in most cases is nearly impossible. In short, even though the creation of deepfakes can be prosecuted in the United States, discovering them will likely be neither simple nor inexpensive. A more detailed analysis of how discovery of deepfakes can be conducted in litigation is available here (see the sections “Deepfakes in Litigation” and “Deepfakes and Discovery”).

United Kingdom

At present, there is no specific legal restriction on the production of deepfakes in the UK. The only restrictions on content that people can produce with this technology are those covered by anti-fraud legislation, protections against harassment, defamation and copyright infringement, and data protection laws. This can complicate the search for recourse after suffering losses as a result of a deepfake-based deception. It contrasts with the approaches taken in China and certain US states, which have criminalized the use of deepfakes in certain circumstances.

European Union

While the EU currently has no regulation specific to deepfakes, the relevant regulatory landscape consists of a complex set of anticipated constitutional norms and hard and soft regulations at both the European and Member State level. These current and future rules offer some guidance for mitigating the negative impact of deepfakes.

On the European level, these include the regulatory framework proposal on artificial intelligence, the General Data Protection Regulation, the copyright framework, the eCommerce Directive, the proposed Digital Services Act, the Audiovisual Media Services Directive, the Code of Practice on Disinformation, the Action Plan Against Disinformation and the European Democracy Action Plan.

As an example of developments at the Member State level, in November 2020 the Dutch House of Representatives adopted a motion requesting the government to “develop as a matter of urgency a strategy to counter the production and distribution of unwanted deepnudes, looking at possible amendments to the Criminal Code, conditions to enforce proper compliance and possibly also the setting of product requirements, such as stating that the deepfake is an edit”.

China

Although the word “deepfake” is not mentioned in any law in China, the newly enacted Civil Code of China (Art. 1019) effectively bans the infringement of portrait rights “by vilifying, defacing, forging by means of information technology or otherwise”. A deepfake may also infringe a person’s right to reputation by means such as insult, another act forbidden under the Civil Code.

In addition, deepfakes are specifically addressed in the Implementation Outline Regarding the Construction of a Law-based Society (2020-2025), issued by the ruling party in China, which requires a strengthening of regulations on technology such as “algorithm recommendations, deepfakes and other applications of new technology.” The Public Security Bureau and Internet Network Information Center have also drawn attention to software and apps that produce deepfake content. In March 2021, local bureaus invited 11 well-known Internet companies in China to discuss compliance and urged them to implement security assessments of their apps.

The development and use of the technology may also trigger the application of the newly enacted Personal Information Protection Law as deepfake algorithms often involve processing facial and vocal characteristics which may constitute personal data under the new law.

Mitigation steps for businesses

In light of this sophisticated threat, businesses would be well advised to take some simple and proactive steps to mitigate the risk of falling victim to deepfake-based scams. These include:

  • Introducing staff training, particularly for those working in roles that involve the payment of money, explaining the threats posed by deepfakes and how they can be identified;
  • Tightening compliance procedures around the authorization of payments, for example:
    • Requiring all payments to be requested in writing from company email accounts, and never over the phone;
    • Requiring multiparty authorization for larger payments; and
    • Building a strong “no exceptions” culture around following compliance procedures;
  • Considering investment in detection software that will screen communications for suspected uses of deepfake technology; and
  • Checking that insurance cover includes losses suffered from deepfake-based fraud.
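
The payment-authorization controls above can be sketched in code. The example below is purely illustrative (the function names, email domain and approval threshold are hypothetical assumptions, not a recommendation from the authors); it shows how a finance workflow might automatically reject requests that arrive outside approved channels or lack sufficient approvers, so that a convincing deepfake voice or video alone cannot trigger a payment:

```python
from dataclasses import dataclass, field

COMPANY_DOMAIN = "example.com"    # hypothetical corporate email domain
MULTIPARTY_THRESHOLD = 10_000     # hypothetical amount requiring multiple approvers


@dataclass
class PaymentRequest:
    amount: float
    channel: str                  # "email", "phone", "video_call", ...
    requester_email: str
    approvers: list = field(default_factory=list)


def authorize(req: PaymentRequest) -> tuple[bool, str]:
    """Apply the no-exceptions compliance rules to a payment request."""
    # Rule 1: requests must be made in writing from a company email account,
    # never over the phone or a video call, where voices and faces can be faked.
    if req.channel != "email":
        return False, "rejected: payment requests must be made in writing by email"
    if not req.requester_email.endswith("@" + COMPANY_DOMAIN):
        return False, "rejected: request did not come from a company email account"
    # Rule 2: larger payments need multiparty authorization.
    if req.amount >= MULTIPARTY_THRESHOLD and len(set(req.approvers)) < 2:
        return False, "rejected: payments of this size need at least two approvers"
    return True, "authorized"


# A phone request is refused regardless of who appears to be calling.
print(authorize(PaymentRequest(200_000, "phone", "ceo@example.com")))
```

The point of the design is that the rules are enforced mechanically: even a request that sounds exactly like the CEO fails Rule 1 because it arrived over the wrong channel.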

Insurance protection

Losses suffered from deepfake-based fraud may be insured, at least partly, by various insurance policies. For example, losses triggered by a deepfake claim may be covered by cyber insurance which often covers the costs of crisis communication, computer forensic specialists and data restoration. Fraud insurance may also, in some circumstances, cover losses generated by a deepfake attack. Other policies may incidentally cover some losses, such as Directors and Officers policies in cases of shareholder derivative lawsuits resulting from deepfaked video reports.

In the Wall Street Journal case mentioned above, the CEO of a UK-based energy company was tricked into thinking he was speaking on the phone with his boss, the chief executive of the company’s German parent, who requested a money transfer to a Hungarian supplier. The insurer, Euler Hermes, covered the full amount of the loss, meaning this AI-enabled fraud was fully indemnified.

Where a policy does not specifically cover a deepfake attack, the losses triggered by such an attack will only be covered if the conditions of the claim/losses, as more generally defined by the policy, are fulfilled. Loss of revenue and business interruption may be only partly covered, or not at all. It is therefore very important that businesses review their policies to assess potential exposure and determine whether these new deepfake threats are at least partly covered by the insurance program in place. For instance, a cyber policy might require a network penetration or a cyberattack, and that definition may not extend to the manipulation of an existing video. It is always better that this type of risk be named as an insured risk and that the scope and extent of cover be clarified, notably as to whether financial losses and loss of revenue are covered.

***

If you have any queries about how we could help you to review your practices, and help you introduce any appropriate safeguards, please do not hesitate to reach out to the authors of this article or your regular Dentons contact.


[1] https://theconversation.com/deepfakes-five-ways-in-which-they-are-brilliant-business-opportunities-131591

About Nick Graham

Nick Graham is the Global Co-Chair of Dentons' Privacy and Cybersecurity Group. He specialises in data privacy, cybersecurity, information governance and freedom of information. Nick advises across all sectors, including retail, telecoms, energy, manufacturing, banking, insurance, transport, technology and digital media.

About Calvin Chiu

Calvin Chiu is a counsel in the Corporate department of Dentons' Beijing office. He advises clients in the TMT sector on a wide array of issues: business strategy, corporate matters, contracts, investment, finance, regulatory aspects of internet-based activities, and cybersecurity and data compliance. His clientele reflects a representative cross-section of TMT, including technology, Internet, e-commerce, software, smart devices, digital media and blockchain.

About Frédérique de La Chapelle

Frédérique de La Chapelle is a Partner in the Dentons Paris office and Head of the Europe Insurance group. She is renowned for regulatory and dispute resolution matters in the insurance and reinsurance sectors.

About Anouschka van de Graaf

Anouschka van de Graaf is a senior associate in our Amsterdam office. She specializes in intellectual property law and technology, and advises clients on complex national and cross-border matters regarding IT, data protection and commercial contracts. She has four years of commercial working experience at a Fortune Global 500 company in the technology sector.

