Tech platforms once again proved slow to respond when the shootings that left at least 49 people dead at two mosques in New Zealand were broadcast live and then shared thousands of times.
Half a day after the gunman announced his intentions on Twitter and 8chan and then livestreamed his rampage on Facebook, the video and the shooter’s white-supremacist screed were still being uploaded, viewed and shared widely. (Extremism researchers caution against amplifying the gunman’s message and falling into the trap of generating coverage of his rantings.)
“Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter’s Facebook and Instagram accounts and the video. We’re also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware,” Facebook spokeswoman Mia Garlick said in a statement. (NBC News reports it took the company about an hour and a half to remove the initial video.)
Many copies of the footage remained on the platform as of late morning US Eastern time today (March 15). A spokesperson told Quartz that Facebook is working to remove every instance of the video, through its content moderators and artificial-intelligence systems.
Why some videos of the mosque shootings were viewable many hours later
Some of the videos that were still online many hours after the shooting were covered by a “sensitivity screen” that Facebook applies when content is graphic or disturbing but does not violate its policies. A Facebook spokesperson said this could be the result of an automated response to the video, or of individual decisions by moderators (who, it should be noted, are low-paid contract workers often traumatized by the barrage of disturbing content they see every day). The videos have since been removed.
Twitter, where the shooter posted his screed and where it subsequently spread, told Quartz that it was “proactively working to remove the video content.” The company, which would not comment on individual accounts, said its hate-speech policy prohibits “behavior that targets individuals based on protected categories including race, ethnicity, national origin or religious affiliation.” That includes “references to violent events where protected groups have been the primary targets or victims,” which would presumably cover the shooter’s manifesto.
YouTube, which said it was removing the video, has been taking hours to take down reposted versions.
The social platforms can create unique signatures, called “hashes,” for inappropriate videos, making it easier for their systems to find copies and take them down automatically.
But the algorithms can be easy to bypass by altering the images, as the Washington Post notes. The companies have been more successful at catching other kinds of content, like terrorist propaganda.
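To make the idea concrete, here is a minimal, purely illustrative sketch of a perceptual “average hash” in Python. It is not the system any platform actually runs (those are far more sophisticated), and the file names and matching threshold are placeholders; it simply shows why a straight re-upload of a frame stays a near-identical match while a cropped or heavily filtered copy can drift past the threshold and slip through.

```python
# Illustrative sketch only: a toy "average hash" showing how perceptual
# fingerprinting can match lightly re-encoded copies of an image or video
# frame but miss heavily altered ones. Not any platform's real system.
from PIL import Image


def average_hash(image_path: str, hash_size: int = 8) -> int:
    """Downscale to hash_size x hash_size grayscale, then set one bit per
    pixel depending on whether it is brighter than the mean pixel value."""
    img = Image.open(image_path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical usage: a frame from the original upload vs. a re-encoded copy
# vs. a cropped and filtered copy (file names are placeholders).
original = average_hash("frame_original.png")
reencoded = average_hash("frame_reencoded.png")          # small distance: caught
altered = average_hash("frame_cropped_filtered.png")     # may exceed the cutoff: missed

MATCH_THRESHOLD = 10  # illustrative cutoff, not any platform's real setting
print(hamming_distance(original, reencoded) <= MATCH_THRESHOLD)
print(hamming_distance(original, altered) <= MATCH_THRESHOLD)
```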
Violent, graphic content continues to be an issue. Sometimes it is difficult to tell whether the content is legitimate, perhaps from a news report or a user documenting a tragedy in real time. There’s also the question of whether the companies are devoting enough resources to building systems that can recognize and catch videos like the New Zealand one, and of what to do until those systems are sophisticated enough.
Facebook said it did hash the first video, as well as the subsequent copies it has been finding. It said it is using AI to detect new uploads: computer vision to flag gory images, and audio-matching technology where the images prove hard to identify.
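Facebook has not described how its audio matching works. As a rough illustration of the general idea only, the sketch below compares coarse mel-spectrogram fingerprints of a known clip’s audio against a new upload; the library choice, file names, and similarity cutoff are all assumptions made for the example, not anything the company has disclosed.

```python
# Illustrative sketch only: one way audio matching could work in principle,
# by comparing mel-spectrogram fingerprints of a known clip and a re-upload.
# This is not Facebook's system; file names and the threshold are placeholders.
import librosa
import numpy as np


def audio_fingerprint(path: str) -> np.ndarray:
    """Load audio and reduce it to a coarse, normalized log-mel spectrogram."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=32)
    log_mel = librosa.power_to_db(mel)
    # Normalize so volume changes and re-encoding matter less.
    return (log_mel - log_mel.mean()) / (log_mel.std() + 1e-9)


def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity over the overlapping portion of two fingerprints."""
    n = min(a.shape[1], b.shape[1])
    a, b = a[:, :n].ravel(), b[:, :n].ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


known = audio_fingerprint("known_video_audio.wav")    # hypothetical reference clip
upload = audio_fingerprint("new_upload_audio.wav")    # hypothetical new upload
print(similarity(known, upload) > 0.8)  # illustrative cutoff: flag for review
```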
Livestreaming is another issue—from the very beginning, Facebook has had problems with its Facebook Live feature, which has been used to broadcast crimes, including murders, in real time.
The bigger ethical questions
With every mass attack, like the Parkland shooting, social media fills up with content that boosts perpetrators’ visibility. And each time, the platforms disappoint with their response.
We also have to think about the role the platforms, and the internet at large, play before violent events. Misinformation and hate are allowed to spread easily, with algorithms and filter bubbles actively working to reinforce extreme beliefs.
“In some ways, it felt like a first—an internet-native mass shooting, conceived and produced entirely within the irony-soaked discourse of modern extremism,” writes Kevin Roose at The New York Times.
The shooter was extremely well-versed in internet culture and tactics; he crafted his “manifesto” so that it would spread as widely as possible and urged others to share it.
Which they did, both wittingly and not.