WORLDWIDE DEEPFAKE AI LAWS

TLDR: Deep synthesis, or deepfake, technology that produces artificial images and video carries risks that should be mitigated through regulation. In this overview, we look at major regulatory approaches to deepfakes, focusing on Canada, China, the EU, South Korea, the UK, and the US.

You might have come across viral videos called “deepfakes,” which superimpose the faces of politicians or celebrities on different bodies, making it seem like they are saying or doing something controversial. For instance, one video showed Facebook CEO Mark Zuckerberg apparently bragging about owning users’ stolen data, and another showed Game of Thrones’s Jon Snow apologizing for the disappointing end of the final season.

Deepfakes use AI to alter videos and images so that they look frighteningly real. But these videos are not genuine and can be used to spread misinformation with harmful consequences. In 2019, AI firm Deeptrace identified 15,000 deepfake videos online, a figure that had almost doubled in just nine months. Some experts anticipate that as much as 90 percent of digital content could be synthetically generated within a few years.

The industry has responded by aiming to create technology that can accurately detect and label deepfakes. So what are countries doing to regulate the use of this technology? Here’s a look at the approaches of a few countries:

China

In 2019, the Chinese government introduced laws requiring individuals and organizations to disclose when they have used deepfake technology in videos and other media. The regulations also prohibit the distribution of deepfakes that lack a clear disclaimer stating the content was artificially generated.

China also recently established provisions for deep synthesis providers, in effect as of 10 January 2023, through the Cyberspace Administration of China (CAC). These provisions apply to both providers and users of deepfake technology and establish procedures across the technology’s lifecycle, from creation to distribution.

These provisions require companies and people that use deep synthesis to create, duplicate, publish, or transfer information to obtain consent, verify identities, register records with the government, report illegal deepfakes, offer recourse mechanisms, provide watermark disclaimers, and more.

Canada

Canada’s approach to deepfake regulation features a three-pronged strategy that includes prevention, detection, and response. To prevent the creation and distribution of deepfakes, the Canadian government works to create public awareness about the technology and develop prevention tech. To detect deepfakes, the government has invested in research and development of deepfake detection technologies. In terms of response, the government is exploring new legislation that would make it illegal to create or distribute deepfakes with malicious intent.

Existing Canadian law bans the nonconsensual disclosure of intimate images.

Similar to California’s election law, the Canada Elections Act contains language that may apply to deepfakes. Canada has also made other efforts to curb the negative impacts of deepfakes, including its “plan to safeguard Canada’s 2019 election” and the Critical Election Incident Public Protocol, a panel process for investigating incidents such as deepfakes.

South Korea

Given its strong technology sector, South Korea was among the first countries to invest in AI research and regulatory exploration.

In January 2016, the South Korean government announced it would invest 1 trillion won (about USD 750 million) in AI research over five years. In December 2019, it announced its National Strategy for AI.

In 2020, South Korea passed a law making it illegal to distribute deepfakes that could “cause harm to public interest,” with offenders facing up to five years in prison or fines of up to 50 million won (approximately USD 43,000).

Advocates are pushing South Korea to tackle digital pornography and sex crimes through additional measures, such as education, civil remedies, and victim recourse.

United Kingdom

The UK government has introduced several initiatives to address the threat of deepfakes, including funding research into deepfake detection technologies and partnering with industry and academic institutions to develop best practices for detecting and responding to deepfakes.

The UK has funded research and development to raise awareness of the harms of revenge porn and deepfake pornography through its ENOUGH communications campaign. The UK has not yet passed horizontal legislation banning the creation or distribution of deepfakes with malicious intent. However, in November last year, the UK announced that deepfake regulation would be included in its much-anticipated, mammoth Online Safety Bill. This step came amid the release of police data indicating that roughly 1 in 14 adults in England and Wales has experienced threats to share intimate images.

Overview of Other Countries’ Approaches

Several other countries have also invested in AI and/or deepfake research and development (R&D).

Note that the 27 member states of the European Union (EU) are subject to the deepfake provisions of the strengthened Code of Practice on Disinformation and will be subject to the upcoming EU AI Act, which will govern deepfake technology.
