SPREADING TERROR

The father of a Paris attack victim is suing Facebook, Google, and Twitter

By Ananya Bhattacharya

Nohemi Gonzalez was one of over 130 victims in the November 2015 Paris massacre. The father of the 23-year-old, Reynaldo Gonzalez, wants to hold three tech giants accountable for her death.

Reynaldo is suing Twitter, Facebook, and Google for providing “material support” to the Islamic State and other extremist groups. In the lawsuit, filed on Tuesday (June 14) in the US District Court for the Northern District of California, the California student’s father alleges that these sites “knowingly permitted the terrorist group ISIS to use their social networks as a tool for spreading extremist propaganda, raising funds and attracting new recruits.”

Twitter was targeted in a similar lawsuit in January 2016 by the widow of an American killed in a November 2015 ISIS attack in Jordan. The social network saw a surge in ISIS-related accounts in 2014, with around 46,000 accounts created between September and December of that year. But it has been clamping down on questionable accounts, shutting down over 125,000 of them in February 2016. The company said it believes Gonzalez’s lawsuit is “without merit,” issuing the following statement:

“Twitter strongly condemns the ongoing acts of violence for which ISIS claims credit, and our sympathies go out to those impacted by these acts of terror. We have partnered with others in industry, NGOs and governments to find better ways to combat the online manifestations of the larger societal problem at the core of violent extremism.”

Under US law, internet companies are generally immune from liability for content posted by their users. Section 230 of the 1996 Communications Decency Act (CDA) states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

However, Keith Altman, the plaintiff’s lawyer, argues that the complaint is not about the content. Rather, it alleges a violation of the federal Anti-Terrorism Act. Altman says the companies give extremist organizations the infrastructure to have a voice on the internet. He also alleges that Google profited from ads placed before YouTube videos posted by ISIS-affiliated channels.

At the time of the ads controversy, Google said it does not have an algorithm that can stop ads from appearing next to objectionable YouTube videos. “If these companies spend so much time tracking each individual user’s information and targeting ads, why can’t they put a fraction of that effort into identifying terrorist usage of the network?” Altman asks.

Like Twitter, YouTube has been cracking down on ISIS channels. Users can also flag any YouTube video as content that “promotes terrorism.” Flagged videos are reviewed around the clock, and material that violates the site’s policies is deleted. A Google spokesperson said:

“Our hearts go out to the victims of terrorism and their families everywhere. While we cannot comment on pending litigation, YouTube has a strong track record of taking swift action against terrorist content. We have clear policies prohibiting terrorist recruitment and content intending to incite violence and quickly remove videos violating these policies when flagged by our users. We also terminate accounts run by terrorist organizations or those that repeatedly violate our policies.”

Facebook, too, relies on evaluating material flagged by users, but it has also started combing through news reports and asking authorities for the names of suspects in order to remove their accounts. Using terrorists’ profiles, Facebook is also trying to identify and delete associated accounts that show signs of supporting terrorism, the Wall Street Journal reported. The Menlo Park-based company issued the following statement:

“We extend our deepest sympathy to those affected by terror attacks. There is no place for terrorists or content that promotes or supports terrorism on Facebook, and we work aggressively to remove such content as soon as we become aware of it. Anyone can report terrorist accounts or content to us, and our global team responds to these reports quickly around the clock. If we see evidence of a threat of imminent harm or a terror attack, we reach out to law enforcement. This lawsuit is without merit and we will defend ourselves.”