Fake news, bots, and propaganda were hot topics at the World Economic Forum meeting in Davos last month, and Google executives there floated an intriguing idea to some fellow attendees—what if the company could tell users whether information is trustworthy before they shared it on social networks like Facebook and Twitter?
Representatives from Google and its parent company Alphabet eagerly discussed how the company can play a greater role in reducing misleading information online, several Davos attendees involved in and briefed on these conversations told Quartz. A notification system, perhaps via an optional extension for Google’s Chrome browser, was an idea that these people said was broached more than once. Such a browser-based system controlled by Google could alert users on Facebook’s or Twitter’s websites when they’re seeing or sharing a link deemed to be false or untrustworthy.
Right now, this appears to be merely an idea company executives are discussing, not a product in development. “We aren’t working on anything like this,” a Google spokeswoman told Quartz. But Alphabet did flag “misleading” information and “objectionable content” as risks to the company’s financial performance in its annual report this week, for the first time ever. And the fact that executives were focused on the topic at Davos indicates the tech company’s willingness to take a more active role in filtering out fake news and propaganda.
If successful, Google could become a bigger player in the fight against foreign influence in domestic political systems, battling campaigns like the Russian state-backed ones that targeted Americans ahead of the last US presidential election and British voters before the Brexit vote, or the fake news that circulated in Kenya before its general election. The US government has so far failed to make any meaningful progress in that fight, and the 2018 midterm congressional elections may be influenced as well.
A misinformation detector
The comments by Google executives highlight potential flashpoints that could develop between Google and social media platforms such as Facebook and Twitter. Tech companies have traditionally bristled at competitors interfering with how users access their sites via browser software and extensions, which is exactly the kind of system the Google executives were describing. Google built and aggressively marketed Chrome years ago in part to head off any attempt by Microsoft to use its Internet Explorer browser to restrict users’ access to Google services.
Google’s new ad blocker in Chrome is an example of how the company can override other sites’ content for users of its browser: starting Feb. 15, it will block ads that don’t adhere to standards set by the Coalition for Better Ads, an industry group Google belongs to. (Unlike the fake-news notifications discussed, that ad blocker will be turned on by default for Chrome users.)
Google executives at Davos were actively “thinking through how they can play a role in how they deal with fake news and bots,” said the CEO of one Silicon Valley company, who said he was involved in more than one conversation with them on the topic at Davos. Because hundreds of millions of people rely on its Chrome browser and Google’s search engine to find factual data, the company sits in a “unique position” to do so, the executive pointed out. “Their mission is to help organize the world’s information,” he said, so their involvement makes sense. (The Davos attendees who described the discussions with Google to Quartz requested anonymity because they were private conversations.)
Google executives talked about creating a browser extension that worked like a spell-checker, but was a “misinformation detector,” said another Davos attendee who participated in one such conversation.
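Neither Google nor the attendees offered implementation details, but the mechanics of such an extension are easy to imagine. The sketch below, in TypeScript, is purely illustrative and assumes a hypothetical, hard-coded table of domain ratings; it is not based on anything Google has built or announced. A content script scans the links on a page and, spell-checker style, marks those whose domains carry an “untrusted” rating.

```typescript
// Hypothetical content script for a "misinformation detector" browser extension.
// The ratings table is a stand-in for whatever vetted, human-curated source a
// real system would rely on; the domains and labels here are made up.
const TRUST_RATINGS: Record<string, "trusted" | "untrusted"> = {
  "example-reliable-news.com": "trusted",
  "example-fake-news.com": "untrusted",
};

// Resolve a link's hostname, ignoring anything malformed.
function domainOf(href: string): string | null {
  try {
    return new URL(href, window.location.href).hostname.replace(/^www\./, "");
  } catch {
    return null;
  }
}

// Mark untrusted links roughly the way a spell-checker underlines a misspelled word.
function annotateLinks(root: ParentNode): void {
  root.querySelectorAll<HTMLAnchorElement>("a[href]").forEach((link) => {
    const domain = domainOf(link.href);
    if (domain && TRUST_RATINGS[domain] === "untrusted" && !link.dataset.flagged) {
      link.dataset.flagged = "true";
      link.style.textDecoration = "underline wavy red";
      link.title = "This source has been rated untrustworthy (illustrative rating only).";
    }
  });
}

// Annotate the page once it loads, then keep watching as the feed adds new posts.
annotateLinks(document);
new MutationObserver(() => annotateLinks(document)).observe(document.body, {
  childList: true,
  subtree: true,
});
```

The plumbing is the easy part. As the discussions below make clear, the hard problem is who supplies the ratings and takes responsibility for them.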
The factuality issue
The idea presents some obvious hurdles—among them the question of who determines what is misinformation, which can involve individual judgment and political sensitivity.
“You’d have to take a stand about factuality,” said Timothy Snyder, a Yale history professor and author of On Tyranny, who attended the Davos conference and said he heard second-hand about Google executives’ fake news discussions. You can’t use a “protocol or an algorithm,” he said, you need “human beings who take responsibility for things.”
Last April, Google said it would try to improve its search results through efforts to demote low-quality content, such as “misleading information, unexpected offensive results, hoaxes, and unsupported conspiracy theories.” The company also recently revamped how it creates its “featured snippets” appearing at the top of search results to give more weight to high-quality information.
There are also some outside projects underway that Google could theoretically tap for assessments of links and sources. News Guard, a startup founded by publisher Steve Brill and former Wall Street Journal publisher Gordon Crovitz, has raised $6 million and is hiring dozens of journalists who will rate news content by trustworthiness. The Trust Project brings together a group of news organizations at Santa Clara University’s Markkula Center for Applied Ethics to create “trust indicators” that explain the “work behind a news story,” including journalists’ credentials and a publisher’s financial backing (Google already provides funding for this). Storyful and Moat have partnered with the City University of New York’s journalism school to create Open Brand Safety, which identifies and tracks web domains and video URLs that spread misleading content.
Reliance on users
Another major hurdle is that an optional desktop browser extension is unlikely to reach the vast majority of Facebook usage. While Google’s Chrome browser is by far the world’s most popular, most of Facebook’s two billion users access the social network through an app on their smartphones, as is common across social media platforms. And getting users to take the step of installing or activating an optional browser extension could prove difficult.
Perversely, identifying news as not trustworthy could also cause it to spread farther. Facebook has said that when it put red flags next to links to fake news, users clicked on them more—so Facebook discontinued that practice and started providing related links instead. “Academic research on correcting misinformation has shown that putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs—the opposite effect to what we intended,” Facebook explained. A Facebook spokesman said he couldn’t comment on any potential Google plans.
To get such a system to actually work, “you have to make people feel good about passing on factual information,” Snyder said. Perhaps that means giving people who are very careful about passing on reliable information a positive social media rating, he said.
A last-ditch approach?
A final question is more philosophical. Can, and should, Google (which some critics say is already too powerful) play the role of deciding what’s true and what’s not?
“Of course it can’t,” said Rachel Botsman, a lecturer at Oxford’s Saïd Business School and author of Who Can You Trust, who attended the Davos conference. And, she added, “we don’t want it to.”
“People are really grasping at straws because no one has the solution here” for how to tackle fake news, Botsman said. Technology companies are hoping to avoid more regulations like the ones cropping up in Europe, she said. Last summer, Germany passed a law that can fine social media companies up to €50 million for failing to quickly remove hate speech from their platforms.
One theme Botsman said she heard again and again from tech companies at Davos was “Do you really want what’s happening in Germany coming down on the rest of the world?”