Facebook is asking users to identify “misleading language” in posts to curb fake news and clickbait

What is truth?
Image: Stephen Lam / Reuters

To combat its “fake news” problem, Facebook is turning to the very people who consume it—its users.

The social network is testing a new feature that asks users to rate a news article based on whether it contains deceptive phrasing. When a piece appears in the News Feed, a prompt beneath it asks users to rate the headline for “misleading language” on a five-point scale, ranging from “Not at all” to “Completely.”

The feature was spotted by Chris Krewson of Billy Penn, a Philadelphia-based news startup.

It’s not clear how the feature might affect stories that are patently false versus ones with exaggerated headlines—both remain problematic for the social media company. A Facebook spokesperson confirmed to TechCrunch that the feature is an “official effort” but provided no additional details on how it works.

Facebook remains under fire from the public and the media for its perceived role in helping disseminate “fake news” in the run-up to the US presidential election last month. BuzzFeed News reported that scores of individuals in locations as remote as Macedonia created fake news sites and profited from those stories’ viral spread on the platform. At times, Facebook users engaged with fake articles more than real ones.

Facebook CEO Mark Zuckerberg initially argued the company ought to distance itself from curbing misinformation on the platform, and avoid acting as an “arbiter of truth.” But he has since announced the company is experimenting with features including third-party verification for reputable news sources and warnings on potentially fake stories.

These efforts, and the rating tool described above, complement Facebook’s existing approach to fake news, which relies mainly on users manually reporting a story as false by clicking at the top right of a post in the News Feed.

Some have suggested Facebook take bolder measures, such as letting third parties devise their own ranking algorithms that users could choose from. Meanwhile, researchers are trying to develop artificial intelligence tools that can detect trustworthiness—but that has proven difficult.

Human judgment and rating tools may seem like imperfect ways to combat fake news, but for now they are perhaps the best ones available.