The World Economic Forum is treating fake news as an urgent matter of global human rights

“We’re all bankers now.”
Image: Reuters/Ruben Sprich

On Jan. 17, more than 100 people crowded into the Kirchner Museum in Davos, Switzerland, for a panel on the ailing state of global trust. A just-released survey by communications firm Edelman documented the largest-ever drop of trust in business, government, the media, and NGOs. Richard Edelman, the firm’s president, compared the precipitous slide to the decline in trust seen in 2009, amid the depths of the financial crisis. “We’re all bankers now,” a moderator from the Financial Times said, to nervous laughter.

The Edelman panel was one of several discussions at last week’s annual meeting of the World Economic Forum on the loss of trust in institutions and the spread of misinformation and fake news. One panel on fake news later in the week included representatives from Google and Facebook; another discussion took place at an off-the-record roundtable held by Microsoft. Inevitably, the conversations turned to Facebook, the company most widely criticized for helping hoaxes and inflammatory content spread.

“There’s an atmosphere of propaganda, lies, and fraud,” said Jeff Jarvis, a journalism professor at City University of New York who ran two of the events on fake news. “It was a recognition that there’s a problem.”

While Davos concluded last week, WEF is set to continue these conversations through its Global Future Council on Human Rights. Over the next two years, the 20-plus person group will look at how large internet companies are governed, especially where privacy and free speech are concerned. The council will examine how algorithms can act as a “form of de facto governance,” co-chair Michael Posner told Quartz, as well as how best to limit the spread of misinformation, political propaganda, and extremist content online.

“There’s a growing concern about the gap between what’s true on the internet and what’s not, and how the internet and these social networks and platforms are affecting both political and social interactions,” said Posner, a former member of the Obama administration and professor at New York University’s Stern School of Business.

Internet companies are increasingly among the largest and most powerful in the world. For a brief moment last August, the five most valuable companies in the S&P 500 were all technology firms. As of last week, Apple still topped that list (market cap: $631 billion), followed by Alphabet ($522 billion) and Microsoft ($486 billion). Amazon ($384 billion) and Facebook ($299 billion) rounded out the top 10.

Members of the Global Future Council on Human Rights include Daniel Bross, former senior director of corporate citizenship at Microsoft, and Andrew McLaughlin, a former director of global public policy at Google. Posner said WEF has reached out to companies including Facebook, Google, and Microsoft. “We’re eager to work with them,” he said.

Facebook, Google, and Microsoft declined to comment.

For its 2016 meeting in Davos, WEF chose the so-called Fourth Industrial Revolution—“the possibilities of billions of people connected by mobile devices”—as its theme. These technologies have made life vastly more convenient for some (a taxi at the touch of a button!) while also creating plenty of problems. Middle-skill jobs are declining, inequality worsening, and misinformation and extreme ideas going viral more quickly. Governments and regulators are struggling to keep up.

WEF began talking about how to build on the theme last summer, and outlined a proposal during a mid-November meeting in Dubai. The mood at that gathering was uneasy. Attendees worried technology had worsened political polarization and made it easier for misinformation to flourish. Donald Trump’s upset in the US presidential election was viewed as the first “earth-shaking impact of this kind of bias,” Arun Sundararajan, a professor at NYU Stern who attended the meeting, told Quartz.

In mid-November, a report from BuzzFeed found that, on Facebook, hoaxes and posts on hyperpartisan blogs outperformed stories from mainstream news outlets in the three months leading up to the US election. Facebook has said it takes fake news seriously, but shouldn’t have to be an “arbiter of truth.” Last month, the company outlined steps it is taking to eliminate the “worst of the worst” bogus content from its platform.

In Davos, Jarvis said discussions of fake news would typically begin with sharp criticism of the social media platform—“this is awful, can’t Facebook do something about it”—before turning into a conversation about fake news and failing trust in media as “a larger issue.” He said awareness of the steps Facebook is already taking on hoaxes and propaganda was “limited.”

Meanwhile, Facebook CEO Mark Zuckerberg has hired the men who got Barack Obama and George W. Bush elected and plans to tour about 30 US states in 2017, raising questions about his own political ambitions. (Last week, he met with pastors in Waco, Texas.) A world in which Zuckerberg pursues public office while also retaining control of his company is one where matters of data privacy, free speech, and the governance of powerful tech companies become even more important.

Internet companies have generally recoiled at efforts to regulate them. Google has spent years fighting Europe’s 2014 “right to be forgotten” ruling, which lets European Union residents request the removal of “inadequate” or “irrelevant” links about themselves for searches done within an EU country. Google complies with the EU ruling, but in May 2016 appealed a decision in France that would require links delisted under the right to be forgotten to be removed from Google searches in every country in the world.

“This order could lead to a global race to the bottom, harming access to information that is perfectly lawful to view in one’s own country,” Google general counsel Kent Walker wrote that month.

WEF’s Posner said internet companies are “rightfully nervous” about governments deciding what information is available. His council will consider how companies could instead police themselves, and to what extent they already do, through algorithms and other content filters. “By virtue of the fact that governments are not dictating this, the governance falls to the companies,” Posner said.