Facebook has the technological capability to fix fake news, according to its chief AI researcher

It’s a matter of ethics.
Image: Reuters/Dado Ruvic

Facebook’s head of AI research says that filtering fake news on Facebook isn’t a technological problem, but an ethical one.

In a meeting with reporters, Yann LeCun reportedly said, “The technology either exists or can be developed. But then the question is how does it make sense to deploy it? And this isn’t my department.”

LeCun did not elaborate on how such technology would be implemented or whether AI would be used. Industry experts have told Quartz that eradicating or verifying false information on the internet is a harder task than today’s AI can handle. Some third parties have begun building tools that filter out certain websites or mark Facebook posts with an “Unverified” tag.

The issue of fake news goes “way beyond whether we can develop AI technology that solves the problem,” LeCun said. “They’re more like trade-offs that I’m not particularly well placed to determine. Like, what is the trade-off between filtering and censorship and free expression and decency and all that stuff, right?”

LeCun’s ability to pass the buck to another team stems in part from Facebook’s organizational structure. LeCun is the director of Facebook AI Research (FAIR), a separate research arm that focuses on large, nebulous problems in AI. He reports directly to CTO Mike Schroepfer, and FAIR’s findings are funneled into Facebook’s various services through a separate group, the Applied Machine Learning (AML) team. The AML team would most likely be tasked with this challenge, assuming the ethical questions were settled.

This kind of research lab has become common among large internet companies: Alphabet’s DeepMind aims to “solve intelligence,” while Salesforce Research has been working on making machines understand human language.

Facebook’s, however, seems to have the vaguest mission statement and the broadest sprawl of research areas. In an interview with Popular Science last year, LeCun said Facebook CEO Mark Zuckerberg’s original pitch for the lab was simple: make it the best AI research department in the world. “Hmm, interesting challenge,” LeCun said.

A decision to develop a fake news filter would likely come from top-level officers at Facebook, or Zuckerberg himself. The founder has repeatedly downplayed Facebook’s responsibility to filter information that gets posted on the site.

“We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties,” Zuckerberg wrote in a Nov. 18 post.

Facebook spokespeople later clarified to reporters that the company does not have fake-news-fighting technology sitting on a shelf, ready to deploy.