Facebook is facing down its latest crisis. On Tuesday (Oct. 5), whistleblower Frances Haugen told the US Senate during a hearing that Instagram is toxic for teenage girls. Her appearance follows disclosures by The Wall Street Journal this September, which revealed that Facebook’s internal research, leaked by Haugen, found Instagram had toxic effects on teenage girls, negatively affecting their body image. One presentation showed that 32% of teen girls said Instagram made them feel worse about their bodies, the Journal reported.
To combat the bad headlines, Facebook deployed executive Nick Clegg on Sunday morning news shows to defend the social media giant. Clegg told CNN’s State of the Union that Facebook is experimenting with a feature encouraging young Instagram users to “take a break.” Although the feature was first mentioned by Instagram chief Adam Mosseri in a blog post this September, the company has not yet begun testing it. A Facebook spokesperson told Quartz the company is also testing another feature that prompts teens to explore other topics if they have been scrolling on one for too long.
Features that discourage or interrupt engagement with social media are now seen by some critics as an essential check on algorithms that often amplify anger, hate, misinformation, and propaganda while optimizing engagement above all else. Facebook is still likely months away from rolling out such measures, and it will be even longer before the company learns whether they have any effect, but the announcement signals the company’s recognition that it can no longer ignore how its products harm some users, especially the most vulnerable.
Hooked on tech
Tech firms have started recognizing the need to help their users limit the use of their products. Apple introduced screen time limits in 2018, and Google followed suit soon after with its Digital Wellbeing feature on Android—tacit admissions that users need to foster healthier relationships with technology. Pinterest added “compassionate search” prompts in 2019 to provide mental health resources when users search for stress, anxiety, or related topics. And Pinterest executives have stressed that they want people to spend less time on the app, helping users find an idea or complete a task more quickly.
But perhaps Twitter has done the most to introduce friction in recent years. Last week, the company started warning users that certain conversations may be “intense.” It also prompts users to read an article before sharing it—an attempt to make sure they don’t merely rage-share sensational headlines. Twitter previously experimented with defaulting to quote-tweets over retweets to encourage conversation, but the change backfired and the company admitted defeat.
Facebook’s announcement marks a new strategy for the social media giant. Facebook insists it wants to build safe experiences for young people online, but critics allege that it is just trying to capture a younger audience at a time when rivals like TikTok, Snapchat, and Discord have become increasingly popular. Under pressure from regulators, however, Facebook is making grudging concessions about how engaged those young people should be.
Platforms, including Facebook and Twitter, have experimented with removing engagement metrics and reducing sharing capabilities as a middle ground. YouTube often demonetizes videos that break its rules but do not warrant full removal. But lies spread by former US president Donald Trump pushed social media companies to reconsider how they abet the spread of misinformation. Trump was banished by every major social platform for his role in encouraging the Jan. 6 insurrection.
Tech companies—now under pressure from global regulators—have had to rethink a once-sure strategy to promote engagement above all else. Even if that means telling users to log off once in a while.
Editor’s note: This story was updated to include a statement from Facebook.