TikTok could be the next social platform plagued with disinformation

A man holding a phone walks past a sign of Chinese company ByteDance’s app TikTok, known locally as Douyin, at the International Artificial Products Expo…
Image: Reuters

TikTok is all fun and memes, until it isn’t. On Friday (Nov. 1), Reuters reported that the US government opened a probe into the merger that effectively landed the Chinese-owned app in the United States: ByteDance’s $1 billion acquisition of the American app Musical.ly. Earlier this month, US lawmakers called for such an investigation, worried about the country’s cybersecurity and about Chinese censorship on the platform.

Indeed, there are reasons for concern when it comes to political content on the app. Political videos are increasingly popular on TikTok, and they’re often harmless. But posts that promote mistruths, hate, and even violence do sneak in. And researchers say the way the platform is set up makes it fertile ground for those who’d want to spread that kind of content.

While no notable disinformation campaign geared toward the 2020 election has yet been found on TikTok, #Trump2020 content is thriving, peppered with falsehoods and with conspiracy theorists hoping to disseminate their views. In India, TikTok, wary of sketchy content, introduced a disclaimer on political posts warning against fake news. The Wall Street Journal recently reported that ISIS had been using the platform to promote its messages. Finally, there is the Chinese ownership question US lawmakers are fretting about: In Hong Kong, the company appeared to have censored content related to the pro-democracy, anti-Chinese-government protests on the ground.

TikTok says it is preparing to deal with the problem of disinformation, and is working on its policies and enforcement mechanisms. And it is adamant that it is independent of the Chinese government.

Meanwhile, researchers are starting to look at the platform, and see areas where the app may be vulnerable.

The app’s structure is one issue. For example, anyone can make a video based on the audio embedded in an existing clip—someone else made the soundtrack, you just add your footage.

“This is a pre-fab viral meme generation system,” said disinformation researcher Cameron Hickey. “If one person does something specific in their video in relation to the audio, other users can essentially remix the idea for their own messaging. A user can turn a benign meme based on a song/dance combo into a political message effortlessly.”

And finding the original memer can be tough, said Satnam Narang, senior research engineer at cybersecurity company Tenable, who has researched how TikTok is used for adult dating scams. Users can simply take the audio without crediting the original creator and start the meme chain all over again. With some exceptions, timestamps on videos are not visible to the average user. All of this obscures how content spreads on the platform.

One of the two main ways users interact with the app is through the “For You” feed, which is governed by an algorithm. Serendipitous discovery is a key way of finding content, Hickey said, unlike, for example, YouTube, where dangerous rabbit holes often start with a benign clip that the user sought out in the first place. If you interact with a certain kind of video, you’ll start seeing more and more similar ones, getting trapped in a filter bubble, or what Vice called “algorithmic hell.”

Narang says there’s a lot of luck involved in getting featured on the For You page, but users do try to game the system. They tag their content #foryou or #fyp, hoping the algorithm will place them in front of as many people as possible, a tactic the scammers he’s studied have used. Just make the content and wait for the algorithm to spread it, unlike Instagram or Facebook, where getting noticed requires engaging with others.

On these platforms, it’s easier to spot scammers, spammers, or spambots—they are the random commenters you see trying to direct users to their account or site.

On TikTok, they are more hidden—or hidden in plain sight.

Narang found that scammers create impersonator accounts. They find a popular creator, and copy their content. After a while they can use this replicated content to direct followers to outside links. This tactic mirrors one called “keyword squatting” used to spread disinformation on Facebook. Bad actors create Facebook groups related to a topic popular in a given moment, and then suddenly change the group’s purpose to a political one.

The TikTok audience skews younger, and users might place more trust in the platform because it hasn’t been overrun by bad content yet, Narang said. “The reason why you see so many people following and engaging with impersonation accounts is that they actually genuinely believe that they are talking to those people.”

The other issue with TikTok’s younger audience is that entertainment mixes with politics on the platform, and there’s a danger users will treat the two the same way, consuming political messages alongside dance memes.

“As a relatively new platform whose origins and soul revolve around fun and entertainment, we are focused on a thoughtful and consistent approach to our ever-expanding range of content,” the company told Quartz in a statement. “We are setting up an external committee, as well as further bolstering our internal capabilities to protect users from unsafe content and disinformation in a responsible way.”


Let us know if you know more.

This story is from Quartz’s investigations team, which is covering misinformation online ahead of the 2020 election. Here’s how you can reach us with feedback or tips:

Email (insecure):
Signal (secure): +1 929 202 9229
Secure Drop (secure & anonymous):
