Myanmar, Sri Lanka, the Philippines, and India are all countries where information spread on Facebook and the company’s other apps has recently been linked to deadly violence. There’s another one to add to the list: Nigeria.
A BBC investigation last week details how deceitful posts spread on Facebook have stoked ethnic hatred, leading to multiple deaths. The situation has gotten so bad in one area of the country, the central Plateau state, that authorities have had to resort to a solution surprising for the 21st century.
One Nigerian army officer told the BBC his team had set up a hotline for locals to report misinformation, and that “the army is now using radio broadcasts to debunk false stories.”
Meanwhile, police officers have had to use their personal Facebook accounts to help expose rumors, the BBC reported. But the scale of the problem is apparently overwhelming.
Facebook’s own efforts to counteract the misinformation are inadequate, locals say. The fact-checking program that the company continuously touts, and which it launched in Nigeria in October, is far from sufficient, with only four fact-checkers for a country where 24 million people log into the platform monthly. None of them speaks Hausa, the language spoken by many Nigerians (Facebook said the partner organizations “support” Hausa, meaning they can get help from speakers of the language, such as reporters within the same organization, when needed). Because of the lack of reliable information sources in the country, the fact-checking process often requires more time than it would elsewhere, the BBC reported.
A Facebook spokesperson told Quartz in a statement that “to suggest that we are not fighting abuse on Facebook in Nigeria is misleading.” The company said it’s using machine learning to find bad content, engaging with local organizations, and has launched an online literacy program with 140 Nigerian schools (which, the BBC points out, is a small fraction of all the schools in the country). Facebook announced earlier this year that it would remove misinformation from its platform if it had the potential to lead to real-world violence.