CDR is designed to minimize the negative and maximize the positive impacts that digitalization and digital tools can have on people and the environment. CDR is genuinely a transnational issue, regulated at global, European and national levels.
Below is a round-up of selected international soft law standards and European Union and Chinese laws concerning digitalization and digital tools. It is not legal advice nor a comprehensive list of regulatory and soft law norms that may apply to digitalization and digital tools. Please contact us for specific legal advice or assistance in connection with any of the below.
1. CDR at global level
At global level international soft law frameworks provide guidance on what is considered CDR. They can inform a business’ CDR strategy and risk management system.
The following international soft law frameworks place general responsibility for protecting and respecting human rights and the environment directly on the private sector, including for impacts caused by digitalization and the use of digital tools:
- the UN Guiding Principles on Business and Human Rights (link);
- the UN Global Compact (link);
- the OECD Guidelines for Multinational Enterprises (link); and
- the OECD AI Principles (link), guidance on how to shape a human-centric approach to trustworthy AI.
In addition, voluntary standards and soft law norms developed specifically for digitalization and digital tools, either through multi-stakeholder initiatives or by companies themselves, can provide guidance. Examples include:
- the Global Network Initiative Principles (link), a framework and Implementation Guidelines developed by a multi-stakeholder initiative established by leading Information and Communication Technology (ICT) companies and universities;
- the Christchurch Call (link), signed by over one hundred States, tech companies and civil society organisations, which provides a set of principles to tackle terrorism and violent extremism online and to protect, enforce and respect human rights;
- the Microsoft AI Principles (link) and Google AI Principles (link), examples of self-regulatory standards set by larger digital technology companies with the leverage to apply them throughout their business relationships with governments, other companies and users;
- the Netherlands Fundamental Rights and Algorithm Impact Assessment, a multi-stakeholder process for the public sector, used at an early phase of the policy and procurement process to discuss and identify potential human rights impacts; and
- the Danish Institute for Human Rights' Guidance for businesses on human rights impact assessment of digital activities (link).
2. CDR in the EU
The EU is at the forefront of a human-centred and value-based approach to digitalization. This is reflected in its strategic policies (see the 2030 Digital Compass and the draft Proposal for a Decision on the Path to the Digital Decade (link) of 15 September 2021). The draft sets out the main objectives of the EU Policy Programme: to build a human-centred, inclusive, ethical, secure, open, transparent and interoperable digital environment.
In the EU, CDR will be shaped by current and proposed laws and regulations on data privacy, the data economy, artificial intelligence (AI) and cybersecurity. In practice this is a mix of EU Regulations and Directives, national Member State laws, and voluntary and soft law standards.
Some of the most relevant instruments and reference frameworks are:
- In terms of data, the EU Strategy for Data (February 2020 – link) led to several proposed laws and codes, which will apply in addition to the data privacy and processing obligations imposed on companies by Regulation (EU) 2016/679 (the General Data Protection Regulation (GDPR) – link), namely:
- Data related proposals aimed at creating a genuine single market for data:
- The draft Proposal for a Regulation on Harmonised Rules on Fair Access to and Use of Data (Data Act) adopted by the Commission on 21 February 2022 (link) aims to stimulate a competitive data market, present opportunities for data-driven innovation and make data more accessible for all while ensuring fairness.
We wrote about the relevance of the Data Act to regulation of smart devices here.
- The Regulation on European Data Governance (Data Governance Act) (link), which entered into force on 23 June 2022 and will apply from 24 September 2023 provides a governance mechanism and a framework to facilitate general and sector-specific data-sharing by public sector bodies, a regime for data intermediation services and a legal framework for “data altruism”.
- Proposals for upgrading the rules governing digital services:
- Regulation (EU) 2022/2065 of 19 October 2022 on a Single Market For Digital Services (amending Directive 2000/31/EC) (Digital Services Act) (link), which entered into force on 16 November 2022 and will apply from 17 February 2024. The Act increases the responsibilities of online platforms and information service providers, irrespective of their place of establishment, to keep users safe online and protect their fundamental rights.
- Regulation (EU) 2022/1925 of 14 September 2022 on contestable and fair markets in the digital sector (amending Directives (EU) 2019/1937 and (EU) 2020/1828) (Digital Markets Act) (link) entered into force on 1 November 2022 and will apply from 2 May 2023. The Act provides new rules for large online platforms that act as 'gatekeepers', to create a fairer environment for smaller businesses and prevent the gatekeepers from monopolizing the market.
Please find more information on the Digital Services Act and the Digital Markets Act here and here.
- Codes and guidelines on content moderation:
- The Declaration on Digital Rights and Principles (link) (published on 26 January 2022; a political agreement was reached on 14 November 2022, and the Declaration is now subject to approval by the Council, the European Parliament and the Commission) is to serve as a reference framework for people and a guide for businesses and policy makers. The aim is for the Declaration to become a global benchmark for emerging societal and ethical questions related to digital transformation.
- The Strengthened Code of Practice on Disinformation 2022 (link) of 16 June 2022 (an updated version produced in response to the Commission's Guidance of May 2021) is to serve as a co-regulatory instrument within the Digital Services Act legislative framework to fight disinformation and fake news. It was the first time worldwide that industry agreed, on a voluntary basis, to self-regulatory standards to fight disinformation. Facebook, Google, Twitter, Mozilla, Microsoft and TikTok are signatories.
- The EU Commission Code of Conduct on Countering Illegal Hate Speech Online (link) is a public commitment by Facebook, Microsoft, Twitter and YouTube to a content moderation process to fight illegal hate speech. The Code serves as guidance for signatories' own activities and as a model for best practices within the industry.
- The Council of Europe Guidance Note on Content Moderation (link), a recommendation to member States to implement the guidance in cooperation with relevant stakeholders. It elaborates a human rights-based approach to content moderation, protecting freedom of expression, privacy, freedom of assembly and the right to remedy.
- In terms of AI, the recently published EU AI Legislation package includes the following:
- a Proposal for a Regulation laying down harmonized rules on AI (link) (the “proposed AI Act”) of 21 April 2021, with the objective to create a single market to promote the development and uptake of AI, while addressing risks to safety, health and fundamental rights;
- a Proposal for a Directive on adapting non-contractual civil liability rules to AI (link) (the 'proposed AI Liability Directive') and a Proposal for a Directive on liability for defective products (link), both of 28 September 2022, which aim to ensure that victims of harm caused by AI systems can access reparation, by reducing their burden of proof through a 'presumption of causality' and granting a right of access to evidence.
We wrote about how companies may wish to adapt to the proposed AI regulatory environment here.
In addition, on 18 August 2022, the Council of the EU authorized the Commission to negotiate a Council of Europe (framework) Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. The purpose is to ensure protection of the integrity of EU law and that the rules of international law and EU law remain consistent, also with regard to the proposed AI Act.
- In terms of cybersecurity, the EU Cybersecurity Strategy of 16 December 2020 (a successor to the February 2013 strategy) led to the following two proposals:
- A Proposal for a Directive on measures for a high common level of cybersecurity across the Union of 16 December 2020 (draft NIS 2) (link), repealing the existing Directive on Security of Network and Information Systems ((EU) 2016/1148 of 6 July 2016, NIS 1). Draft NIS 2 expands the scope to cover medium and large entities from more sectors and services; strengthens the security requirements imposed on these companies, including as to the security of supply chains and supplier relationships; streamlines reporting obligations; introduces more stringent supervisory measures and stricter enforcement; and harmonises sanctions regimes across Member States.
- A Proposal for a Regulation known as the EU Cyber Resilience Act (CRA) (link) of 15 September 2022, which sets out cybersecurity requirements for products with digital elements (i.e. a broad range of products, including hardware, software and applications), imposing obligations on manufacturers, importers and distributors of these products to ensure they remain responsible for cybersecurity throughout a product's life cycle. The CRA complements the NIS 2 proposal.
- The current Cybersecurity Act (link), Regulation (EU) 2019/881 of 17 April 2019 (repealing Regulation (EU) No 526/2013), in force since 27 June 2019, created the first EU cybersecurity certification framework for ICT products, ICT services and ICT processes and gave a more prominent role to the EU cybersecurity agency (ENISA). ENISA is charged with setting up European cybersecurity certification schemes in consultation with standardisation organisations (such as those behind the ISO/IEC 27000 series). Currently three certification schemes are being developed: one for ICT products, one for cloud services and one for 5G networks, with more to come.
- ENISA's website (link) includes good practice and self-assessment guides for businesses that deal in or develop ICT products, services and processes.
3. CDR in China
China has recently undergone a tightening of digital regulations:
- The Cybersecurity Law of the People's Republic of China (effective on June 1, 2017) aims to establish a uniform regulatory regime for cybersecurity and data protection in China. Multiple government agencies are involved in implementing the law, among others the Cyberspace Administration of China (CAC). The law imposes baseline data protection and cybersecurity obligations on network operators, including: obligations to comply with the Multi-Level Protection Scheme (MLPS) rules; a regulatory framework for critical information infrastructure (CII) operators; a cybersecurity review mechanism for network products and services that could put China's national security at risk; pre-sale certification requirements for critical network equipment and network security products; and requirements to protect data collected in the operation of networks. The law provides for a wide array of sanctions and penalties for non-compliant companies.
- The Personal Information Protection Law of the People's Republic of China (effective on November 1, 2021) imposes strict requirements for processing personal data and restricts transfers of such data out of China. Transfers are permitted only after certain requisite steps have been taken and regulatory approvals obtained (such as clearing a security assessment by the CAC, obtaining personal information protection certification from an institution designated by the CAC, or entering into a standard format data transfer agreement with the overseas recipient of the data). We wrote more about this here.
- The Data Security Law of the People's Republic of China (effective on September 1, 2021) provides that data stored within China must not be provided to foreign judicial or law enforcement authorities unless approved by the competent Chinese authorities. The restriction applies to all data and is not limited to the types of data already subject to transfer restrictions under other laws.
- The Internet Information Service Algorithmic Recommendation Management Provisions (effective on March 1, 2022) apply to any entity that uses algorithmic recommendation technology to provide Internet information services within China. The Provisions lay down technical and organizational standards for the fair use of algorithmic recommendation technologies and introduce a new set of user rights. We wrote more about this here.
Although AI is highly developed in China, there is no overarching AI law as yet.
Conclusion
CDR is a highly complex issue, not only with regard to its technical aspects but also concerning the regulatory landscape. At the global level, soft law and guidelines set the frame, which is further elaborated by EU and national legislation, including detailed provisions and harsh sanctions. The comparison with Chinese regulation shows the challenges of CDR compliance in transnational situations.
Please complete Dentons’ second global survey on the use of artificial intelligence. You can find the survey link HERE.