Updated: April 25, 2017
Facebook has touted video—in particular live video—as key to the company’s future. But that commitment has unleashed a creature Facebook can’t handle.
A gruesome video of a man killing himself and his 11-month-old daughter in Thailand was posted live on Facebook on Monday (April 24), rekindling the debate about the platform’s role in such incidents. Earlier this month a video showing the shooting of 74-year-old Robert Godwin Sr., which the killer himself posted on Facebook, caused an uproar loud enough to merit a mention by Mark Zuckerberg at F8 on April 18.
“We have a lot of work…we will keep doing all we can to prevent tragedies like this from happening,” the CEO said from the stage in a rare off-message aside at the annual developers conference in San Jose, California.
It’s a bold aspiration. So what can Facebook do to avoid giving criminals direct access to a massive audience?
The company’s first response was a short, generic statement. It was slammed both for the wording, which within a single sentence called the killing a “horrific crime” and “content,” and for how long it took to take the video down. Critics also said the case highlights the broader issue of Facebook’s responsibility for violent images.
According to Facebook’s longer, more detailed statement, the shooter, Steve Stephens, posted several videos about the murder. In the first he stated his intent to shoot someone; the second showed the killing itself; and in a third, broadcast using Facebook Live, he confessed to the crime. The third video was reported to Facebook shortly after it was posted on April 16, but the video of the shooting itself was not reported until 1.5 hours later. Twenty minutes after that report, Stephens’ account was disabled and the videos were no longer visible. After a manhunt, Stephens was found dead on April 18, an apparent suicide.
The company said it would review its reporting flows “to be sure people can report videos and other material that violates our standards as easily and quickly as possible.”
Some experts and privacy advocates say there’s not much Facebook can do, from a practical standpoint, to prevent violent videos from being posted or broadcast. But the company clearly has room to improve, particularly in considering the consequences of its innovations.
Daphne Keller, Director of Intermediary Liability at Stanford’s Center for Internet and Society, told Quartz that Facebook’s turnaround time was actually quite fast. Keller worked for years as an attorney at Google, and said that having been “on the other side,” she witnessed the massive volume of user reports these companies receive, and how many of the flags are simply wrong or not actionable. “I don’t think it’s realistic to do anything better.”
Last year, after a spate of violent incidents filmed on Facebook Live, Facebook told Quartz it would expand the staff that monitors live posts instead of relying largely on user reports. This approach has its limits, Keller says.
A live television broadcast, for instance, is controlled by producers who decide whether material is fit for air. Facebook, by contrast, says thousands of workers (many of them outsourced abroad) already review flagged content. But the staffing needed to effectively monitor live content would be enormous. “With Facebook Live, how many people would they have to hire?” Keller asks. “One per stream?”
What about automating some or all of the process through artificial intelligence? Facebook already uses the technology to take down child pornography, terrorist videos, and copyright-protected content.
But AI has its own issues, particularly with videos showing violence. For one, AI still has to work alongside humans to determine whether certain content is undesirable.
But there’s a larger issue: while images such as child pornography should be explicitly forbidden in all instances, there are times when violent videos are legitimately newsworthy or are crucial tools to raise awareness about abuse. Striking that balance on a systemic level is hard. Social media companies are put in a tough spot, Keller says. Users have concerns about disrespect for the dead and about seeing graphic, traumatizing images, along with worries about copycat behavior. But those issues can bump up against the use of Facebook as a platform for free expression. “Research shows us that when companies get in trouble for not taking something down, they overdo it,” Keller says.
Facebook was bashed last year for taking down live footage of the aftermath of the killing of Minnesota motorist Philando Castile by a police officer. Its willingness to cooperate with law enforcement to suspend the account of Korryn Gaines, an armed young mother who filmed her encounter with officers in Maryland’s Baltimore County and was shot, was also heavily criticized.
“On the internet there will always be issues of unsavory content—generally offensive one way or the other,” said Sophia Cope, a staff attorney at the Electronic Frontier Foundation, a digital-rights advocacy group.
Facebook is not legally responsible for violence depicted by its users, Cope says, and intervening to block content becomes a slippery slope. “These companies have established themselves as invaluable means of communication for anything from human rights to civil rights and liberties, or consumer rights,” she says. To make sure they are working in the public interest, social-media companies should always be completely transparent about their procedures for removing content, the EFF says. This, however, has not been Facebook’s strong suit.
Cope compares the debate to an age-old discussion about broadcast television. In the TV (and radio) world, live broadcasts are delayed so that producers can quickly intervene to cut profanity or violence. “There can be solutions like that, but is there anything lost in the process?” Aside from the technical and manpower difficulties, if Facebook were to use something like a broadcast-style delay, wouldn’t its Live function lose some of its cachet, and become less democratic, instantaneous, and “raw,” in the words of Zuckerberg himself?
The company was so focused on, and attracted to, the uncensored nature of its service that it did not stop to consider its darker consequences, argues Will Oremus at Slate:
That it would be extremely hard for Facebook to prevent people from abusing its hot new product is not an excuse. It’s a feature of the system that Facebook built. And if the company’s reputation suffers because it can’t find better ways to handle it, that’s no one’s fault but its own.