China’s cyberspace regulator is cracking down on deepfakes.
Starting tomorrow (Jan. 10), deep synthesis providers (content providers that alter text, audio, images, and video) in China will have to abide by a new set of rules, according to the Cyberspace Administration of China (CAC).
“In recent years, deep synthesis technology has developed rapidly. While serving user needs and improving user experience, it has also been used by some unscrupulous people to produce, copy, publish, and disseminate illegal and harmful information, to slander and belittle others’ reputation and honor, and to counterfeit others’ identities,” the CAC said.
Deepfakes are made by manipulating images and videos using artificial intelligence to create content that looks real, but isn’t. For instance, a film star or politician’s face could be superimposed on an existing video to misrepresent them.
At the start of 2020, China made it a criminal offense to distribute deepfakes without disclosure. Now, it’s adding more layers to the regulation, including consent and accountability, on top of transparency.
The newly established regulation, the first version of which was opened for public comment a year ago, goes a few steps further in trying to protect people's likenesses from being impersonated without their consent. It's being touted as a tool for social stability. The flip side of that coin is essentially censorship. Among other things, it likely means no more Xi Jinping-Winnie the Pooh deepfakes.
What are China’s new rules around deepfakes?
Laid out in a Dec. 11, 2022 document issued by the CAC, the rules, titled “Provisions on the Administration of Deep Synthesis of Internet-based Information Services,” broadly say:
🤝 Companies have to get consent from individuals before making a deepfake of them, and they must authenticate users’ real identities.
🥊 The service providers must establish and improve rumor refutation mechanisms.
⚖️ The deepfakes created can’t be used to engage in activities prohibited by laws and administrative regulations.
🕴️ Providers of deep synthesis services must add a signature or watermark to show the work is a synthetic one to avoid public confusion or misidentification.
Why not ban deepfakes entirely?
There are several positive applications of the technology. A Canadian company called Lyrebird is helping people with ALS, also known as Lou Gehrig’s disease, clone their voice to use it once the disease has claimed their ability to speak. There are other potential use cases to help those with impairments hear and see better.
A joint endeavor between UNICEF and MIT draws on the characteristics of Syrian neighborhoods affected by conflict to simulate how other cities would look amid a similar conflict. By creating synthetic war-torn images of Boston, London, and other major cities, the project aims to increase empathy for victims in disaster regions.
The cultural space has also found good uses for the technology: The Dalí Museum in St. Petersburg, Florida, created a digital Salvador Dalí for patrons to interact with, and Samsung's AI lab in Moscow breathed life into the Mona Lisa. It's also the tech that rapper Snoop Dogg used to digitally resurrect Tupac, who died at the age of 25 in 1996, for a music video.
In China, Xinhua, the country's state-run news agency, has experimented with digitally generated news anchors. Wang Xiaochuan, the head of the Chinese search engine Sogou, which helped Xinhua create the tech, has also described a future where "it could be your parents" telling a bedtime story.
Experts call for more collaborative deepfake monitoring
“Governments, in particular, could make it easier for social media platforms to share information about deepfakes with each other, news agencies, and nongovernmental watchdogs. A deepfakes information sharing act, akin to the US Cybersecurity Information Sharing Act of 2015, for example, could allow platforms to alert each other to a malicious deepfake before it spreads to other platforms and alert news agencies before the deepfake makes it into the mainstream news cycle.” —Charlotte Stanton, former fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace, a think tank
The reality of making deepfakes in Xi Jinping’s China
China’s internet censorship system, referred to as the Great Firewall, has been around for more than two decades. Under President Xi Jinping, restrictions have only gotten tighter. The ongoing tech crackdown and its fallout are proof of the state’s grip on the digital environment, and the latest rules appear to be yet another tool of control and coercion.
In particular, one clause demands that deepfake makers and distributors adhere to the law and politics of the country, automatically putting any content that goes against the ideals of Xi Jinping’s ruling party in tricky territory. The burden of complying with the transparency and disclosure requirements is herculean, and some apps may shut down rather than risk running afoul of the new rules.