Last month, the New York Times revealed (paywall) that inappropriate videos of popular children’s characters were finding their way onto the YouTube Kids platform. A few weeks later, BuzzFeed uncovered unsettling clips that appeared to endanger children. Now, the Times of London (paywall) has found that predators are using the streaming-video platform to exploit kids.
It’s a troubling end to a troubling year on the internet.
The UK publication found multiple accounts that posted disturbing videos of young girls to the Google-owned service. Videos from one channel in Brazil showed children “standing silently, licking their lips, or dancing,” the report said, and directed viewers to an email address. When contacted by an undercover reporter, the person claimed to have more than 300GB of material showing “naked” children.
Another account, operating under a username that had reportedly been flagged to US and Canadian child-abuse authorities, encouraged viewers in its profile to exchange explicit content via Telegram, an encrypted chat application, the publication wrote.
YouTube shut down these channels and others after they were flagged by The Times. It also reported them to the National Center for Missing and Exploited Children, which collaborates with law enforcement to help keep children safe from sexual predators. All but one of the flagged channels were created in the last two weeks.
“Content that endangers children is abhorrent and we have clear policies prohibiting it on YouTube,” a YouTube spokesperson told Quartz. “When we become aware of new and evolving patterns of abuse, we take swift action in line with our policies, terminating channels and reporting illegal child endangerment where applicable. These cases are then used to train our machine learning technology and teams to find bad actors more quickly in future. We’re wholly committed to getting this right.”
The company has been hiring more people to review this type of material on the site, too.
Inappropriate content like this has been prevalent throughout the internet since its inception. And it isn’t going away, as evidenced by this year’s scandals. The enormous scale of these platforms has made them difficult to police. People watch one billion hours of video on YouTube each day. And 400 hours of content (paywall) are reportedly uploaded to the site every minute.
And YouTube isn’t the only social-media site struggling with abuse on its platform. Facebook, which has been called out by child-protection groups in the past as well, is ground zero for the web’s fake-news crisis and has been battling it for over a year. It receives more than a million user reports of potentially objectionable content each day, the Wall Street Journal reported.
Humans are often called upon to review the material that gets flagged on these platforms. YouTube will have 10,000 moderators reviewing its content next year.
It’s grueling work. Former content moderators at tech companies like Facebook and Google told the Journal some people couldn’t last a day in the job. Some of the most brutal images and videos were of war victims and animal cruelty. And sexual abuse images involving children were the most taxing, said a former moderator at Google.
“The worst part is knowing some of this happened to real people,” he said.