Does YouTube favor radicalization? From outside YouTube, it’s hard to know

Algorithmic mysteries. Image: Reuters/Dado Ruvic

A new paper claims that contrary to countless anecdotal examples and experiments, YouTube’s algorithm does not lead users down a path to far-right political radicalization.

What’s more, the authors, Mark Ledwich, a software engineer in Australia, and Anna Zaitsev, a postdoctoral scholar at the University of California, Berkeley, say the algorithm instead directs users toward more “left-leaning” mainstream sources.

But there’s a big problem with these claims. The study, which was published on arXiv, a website where researchers can post their work before it has been peer-reviewed, immediately drew criticism from many experts on online propaganda. The paper, argues Princeton computer scientist Arvind Narayanan, is not just wrong: its design rests on a flawed premise.

The study traced generic recommendation trails that were not personalized to any specific user, and personalization is precisely what makes social media algorithms so powerful. Here’s how Narayanan explains the issue on Twitter:

Multiple well-established social media researchers retweeted Narayanan’s rebuttal, some using stronger words: “LOL to anyone pretending to study recommendation algorithms without being logged in,” tweeted Zeynep Tufekci, professor at the University of North Carolina, whose work the study criticizes.

The authors of the study acknowledge that their work is limited by not using specific user data, but they also write they do not believe “that there is a drastic difference in the behavior of the algorithm” between a logged-in account and an anonymous account. They add that their confidence in the similarity between the two “is due to the description of the algorithm provided by the developers of the YouTube algorithm.”
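The methodological dispute boils down to what an outsider can actually measure. The study crawled recommendations the way a logged-out visitor sees them, for example by following each video’s “up next” panel. The sketch below (in Python) shows what such an anonymous crawl might look like; parsing the undocumented ytInitialData blob embedded in the watch page is an assumption about YouTube’s internals that can break at any time, and the function names are illustrative, not taken from the study’s code.

```python
import json
import random
import re

import requests


def get_recommendations(video_id: str) -> list[str]:
    """Return "up next" video IDs for a watch page, fetched without logging in.

    Assumption: the watch page embeds its data in a `ytInitialData` JSON blob.
    That blob is undocumented and its structure changes over time, so the key
    path below is illustrative rather than guaranteed.
    """
    html = requests.get(
        f"https://www.youtube.com/watch?v={video_id}",
        headers={"Accept-Language": "en-US"},
        timeout=10,
    ).text
    match = re.search(r"ytInitialData\s*=\s*(\{.*?\});", html, re.DOTALL)
    if not match:
        return []
    try:
        data = json.loads(match.group(1))
    except json.JSONDecodeError:
        return []
    # Walk the nested structure defensively; return [] if the layout has changed.
    results = (
        data.get("contents", {})
        .get("twoColumnWatchNextResults", {})
        .get("secondaryResults", {})
        .get("secondaryResults", {})
        .get("results", [])
    )
    recs = []
    for item in results:
        renderer = item.get("compactVideoRenderer", {})
        if "videoId" in renderer:
            recs.append(renderer["videoId"])
    return recs


def random_walk(seed_video: str, steps: int = 10) -> list[str]:
    """Follow "up next" recommendations from a seed video, as an anonymous visitor."""
    path = [seed_video]
    for _ in range(steps):
        recs = get_recommendations(path[-1])
        if not recs:
            break
        path.append(random.choice(recs))
    return path
```

Even when a crawl like this works, the trail it produces reflects only the seed video and coarse signals such as region and language. It says nothing about what a logged-in account with months of watch history would be shown, which is the heart of Narayanan’s objection.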

Ledwich responded to some of the criticisms on Twitter, and both researchers expanded in a Medium post addressed to Narayanan.

In the post, Ledwich and Zaitsev write that they have “real-world feedback” that confirms their findings, citing conspiracy theorists complaining that the algorithm is funneling traffic to Fox News.

“We think the purely empirical quantification of YouTube’s recommendations is meaningful and useful. We believe that studying the algorithm might help inform more qualitative research on radicalization.”

Narayanan and others said such an exercise was ultimately pointless:

Ledwich and Zaitsev specifically criticize The New York Times’ reporting on YouTube radicalization. In one story, Times reporter Kevin Roose analyzed the YouTube history of one man who did become radicalized through the platform. Roose tweeted in response to the study:

And he pointed to a crucial problem: data on these personal YouTube paths are not accessible to researchers and journalists.

One way to help analyze YouTube radicalization more systematically would be for the platform to disclose how often its recommendation engine surfaces any given video. YouTube did not respond to a request for comment about whether it would make this kind of data available, and did not comment on the contents of the paper itself. In the past, however, it has said that it designed its systems to show content from authoritative sources.

In October we reported that a Chinese propaganda video about the Hong Kong protests was viewed more than the most-watched videos on the topic from the BBC, the New York Times, and the Wall Street Journal. What’s more, YouTube apparently recommended that video to people interested in “Hong Kong protests” six times more often than an average video in that search. That’s according to AlgoTransparency, a site built by a former Google engineer that uses multiple logins in an attempt to understand YouTube’s recommendations.
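AlgoTransparency’s exact methodology isn’t spelled out here, but a figure like “six times more often than an average video” is, in principle, a simple ratio over crawl counts. The numbers below are invented purely for illustration; only the shape of the calculation is the point.

```python
from statistics import mean

# Hypothetical counts of how many times each video appeared in the
# recommendation panels collected for one search topic. These values are
# made up for illustration and do not come from AlgoTransparency's data.
recommendation_counts = {
    "propaganda_video": 1200,
    "bbc_report": 310,
    "nyt_report": 180,
    "wsj_report": 150,
    "other_clip": 160,
}

average = mean(recommendation_counts.values())               # (1200+310+180+150+160) / 5 = 400
ratio = recommendation_counts["propaganda_video"] / average  # 1200 / 400 = 3.0

print(f"Recommended {ratio:.1f}x more often than the average video for this search")
```

The catch, as the rest of this piece argues, is that outsiders can only fill such a table with whatever their own crawlers happen to see; only YouTube holds the true recommendation counts.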

There are potentially other ways of looking at the problem, but it’s clear there’s simply not enough good data to support sweeping statements like the ones the authors of the arXiv study made.

And, as Becca Lewis, a Stanford researcher who studies political influencers, notes, often the content of the videos themselves is enough to understand how online rabbit holes work: