To say that Facebook’s fact-checking efforts are going well would not pass muster with any good fact-checker. Its external partners have said the system is inefficient. Some of them are getting brutally attacked online. Partisan bickering has also been an issue. And most importantly, sketchy news sources and fake stories continue to thrive on the platform.
Facebook’s executives, on the other hand, keep praising the program. And on Thursday (Sept. 13), the company announced that it would be expanding its fact-checking work to photographs and videos. In a post, Antonia Woodford, a product manager at Facebook, said the company built a machine-learning model to identify potentially false images or clips. These get sent to one of Facebook’s 27 fact-checking partners, which are based in 17 countries. The fact-checkers are expected to use techniques “such as reverse image searching and analyzing image metadata” to determine whether the content has been falsified.
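Reverse image search often comes down to comparing perceptual hashes: fingerprints that stay stable when an image is merely resized or recompressed, but change when its content is altered. A toy sketch of the idea (not Facebook's actual system; images here are stand-ins represented as small grids of grayscale values):

```python
# Toy illustration of perceptual hashing, a common basis for reverse
# image search. Real systems hash downscaled versions of actual images;
# here, "images" are simply 2-D lists of grayscale pixel values (0-255).

def average_hash(pixels):
    """Return a bit string: 1 where a pixel exceeds the image mean, else 0."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if v > mean else "0" for v in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same source image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [30, 220]]
recompressed = [[12, 198], [28, 221]]   # near-duplicate: noise, not edits
doctored = [[200, 10], [220, 30]]       # content actually changed

h_orig = average_hash(original)
print(hamming_distance(h_orig, average_hash(recompressed)))  # 0: likely same image
print(hamming_distance(h_orig, average_hash(doctored)))      # 4: likely altered
```

A near-zero distance means the image probably circulated unmodified from a known original; a large one flags a candidate for closer human review.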
“As we get more ratings from fact-checkers on photos and videos, we will be able to improve the accuracy of our machine learning model,” Woodford writes. The company is also working on technological solutions to determine whether visual content has been manipulated (as is the case with “deepfake” videos like this, for example).
Manipulated images are a common vehicle for spreading misinformation, and hoaxers are getting more and more sophisticated in their methods. But text-based articles are hard enough to check. The current system is far from perfect, and now Facebook is piling on yet another difficult ask.