Instagram has a bullying problem, and it’s trying new ways to stop hurtful and harassing comments.
The platform announced July 8 that it was introducing two new features to curb the issue: one nips negative comments in the bud, and the other isolates the bullies.
The first tool uses artificial intelligence to recognize users attempting to post offensive comments, flashing a pop-up screen that asks them to “keep Instagram a supportive place” and to perhaps rethink their post. “From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect,” Adam Mosseri, the head of Facebook-owned Instagram, wrote in a blog post. He specifically noted that the issue of bullying affects teenagers the most.
But if the first feature fails, Instagram has another tactic. The second feature lets users "restrict" a follower so that the restricted person's abusive comments are visible only to the person who posted them; the user can also approve individual comments from the restricted person before anyone else sees them. This is, in effect, a practice called "shadow banning" (limiting the reach of someone's content without notifying them), which many users say social media platforms engage in routinely, but which the companies themselves deny.
“We’ve heard from young people in our community that they’re reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life,” Mosseri wrote. “Some of these actions also make it difficult for a target to keep track of their bully’s behavior.” Instagram already uses other AI-powered features to limit bullying on the platform, such as automatically blocking comments intended to harass users.