In the first week of the new year, an edited clip suggesting that former US vice president Joe Biden had made racist remarks made the rounds on social media. As Iran launched missiles at two US military bases in Iraq on Tuesday evening (Jan. 7), BuzzFeed News found multiple outdated photos and videos circulating on social media that falsely claimed to show the strike, including a five-year-old YouTube video from Ukraine.
Under Facebook’s current policies, all of these videos would be allowed to remain on the platform. Experts in AI and misinformation are concerned that Facebook’s new policy on deepfakes, which the platform released on Monday, doesn’t address the majority of misleading video footage that thrives on social media.
Facebook stated in the blog post announcing the policy that it would remove footage from the platform if it met both of the following criteria:
“It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.
It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
Hany Farid, a computer science professor and digital forensics expert at the University of California, Berkeley, told Quartz that Facebook’s new policy was “too narrowly construed.” Neither the Biden video nor an earlier video that was slowed down to make House Speaker Nancy Pelosi seem drunk used the type of sophisticated AI technology targeted by Facebook’s new policy.
“These misleading videos were created using low-tech methods and did not rely on AI-based techniques, but were at least as misleading as a deepfake video of a leader purporting to say something that they didn’t. Why focus only on deepfakes and not the broader issue of intentionally misleading videos?” asked Farid.
In a Wednesday hearing before the House Energy and Commerce Committee, Facebook vice president of global policy management Monika Bickert confirmed that the altered Pelosi video would not have been removed from feeds under the new deepfake ban. The platform faced criticism last year after it allowed the altered Pelosi video to remain on its site, though it did include a disclaimer informing users that the footage was fake.
Critics argue that Facebook’s policies don’t address the problem of “cheapfakes,” or the type of misleading video that anyone with free software and basic editing tools can make on their laptop. Nor do they cover the litany of videos circulated on Facebook that are mislabeled, out of date, or presented out of context. For example, the Facebook page News World, a fake news source, shared video of a pro-migrant protest and falsely claimed it depicted Muslims attacking a church during mass.
In order to vet photos and videos, Facebook uses a combination of AI tools and human input from third-party fact-checking organizations. As the company explains in a blog post, its machine learning model uses a variety of clues, including feedback from Facebook users, to flag potentially false content. The flagged content is then reviewed by human fact-checkers.
In addition to fabricated videos, the human fact-checkers also regularly flag out-of-context videos, or those with misleading or false captions. But unless the videos violate Facebook’s community standards, they are allowed to remain on the site. Facebook will add a warning label, with a link to “additional reporting,” to any flagged videos.
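The two-stage process described above — automated flagging followed by human fact-checking, with false-but-allowed content getting a warning label rather than removal — can be sketched in a few lines. This is a purely hypothetical illustration; the names, signals, and thresholds are assumptions for clarity, not Facebook’s actual system.

```python
# Hypothetical sketch of a flag-then-review moderation pipeline,
# loosely modeled on the process described above. All names and
# thresholds are illustrative assumptions, not Facebook's real system.

from dataclasses import dataclass, field

@dataclass
class Post:
    id: str
    user_reports: int = 0                      # one feedback signal from users
    labels: list = field(default_factory=list)

def machine_flag(post: Post, report_threshold: int = 10) -> bool:
    """Stage 1: an automated model flags potentially false content.
    Here, raw user reports stand in for the model's many signals."""
    return post.user_reports >= report_threshold

def human_review(post: Post, fact_check_verdict: str) -> Post:
    """Stage 2: third-party fact-checkers review flagged content.
    Content rated false stays up but receives a warning label with
    a pointer to additional reporting (unless it breaks other rules)."""
    if fact_check_verdict == "false":
        post.labels.append("warning: disputed - see additional reporting")
    return post

def moderate(post: Post, fact_check_verdict: str) -> Post:
    if machine_flag(post):
        post = human_review(post, fact_check_verdict)
    return post

video = Post(id="v1", user_reports=25)
video = moderate(video, fact_check_verdict="false")
print(video.labels)
```

Note that in this sketch, as in the policy it caricatures, a video rated false is labeled but never taken down — which is exactly the gap critics point to.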
Those practices will still apply to low-tech visual misinformation—still far more common than actual deepfakes, which use machine learning technology to depict a situation that didn’t occur (like this deepfake video of Mark Zuckerberg that was created by an advertising agency). Sam Gregory, the program director of Witness, a group working to further human rights with the use of video, told Quartz that the vast majority of visual misinformation he and his clients around the world have encountered in the last decade consists of these recontextualized and lightly edited videos.
Experts once warned that bad actors would soon use deepfake videos to depict world leaders and other public officials. That hasn’t happened yet. While the number of deepfake videos has nearly doubled since 2018, according to a report from last year, virtually all of them are pornographic videos targeting female celebrities and other women.
“Facebook is responding here to a very technologically sophisticated problem that hasn’t yet become widespread—deepfakes—while ignoring the much larger issue of simply edited images and videos, which are already circulating rampantly,” wrote Samuel Woolley, digital propaganda expert and author of The Reality Game: How the Next Wave of Technology Will Break the Truth, in an email to Quartz.
But Woolley and other experts agreed that any step by Facebook to combat visual misinformation was promising, even though deepfakes have yet to become prevalent. Lawmakers at Wednesday’s House hearing echoed the need for a proactive approach by Facebook, but also agreed that a simple ban on deepfakes doesn’t go far enough.
Henry Ajder, the head of communications and research analysis at Deeptrace Labs, which released a study on deepfakes last year, applauded Facebook’s move. “Some people have criticized the move as responding to a ‘non-issue’ on the platform at the moment, but anticipatory policy is very important to ensuring we are prepared for deepfakes and can refine policy implementation proactively, not reactively when damage may already have been done,” wrote Ajder in an email to Quartz.
Gregory added that he believes Facebook’s “proactive” approach to deepfakes is superior to how it has handled other misinformation crises. “I do think it’s important to take a proactive response on deepfakes before it’s a problem,” he said.
In the case of deepfakes, Gregory believes the real tipping point will be when the technology is readily available for mass consumption. Which could come soon: TechCrunch reported earlier this month that ByteDance, the owner of TikTok, bought a deepfake maker. Snapchat last month unveiled a Cameos feature that lets users insert their faces into cartoon video footage.
While it’s unlikely that such tools could be used for nefarious purposes, it’s a sure sign that such watered-down “deepfake” videos, much like the photo filters of recent years, will soon become ubiquitous on social media. Which will make believing what you see, with your own eyes, all the more difficult.