Teens on Instagram will now see more warnings — and fewer predators
New direct messaging tools and default privacy settings aim to keep young users safer on Instagram and Facebook.

Meta is stepping up its efforts to protect young users on Instagram and Facebook — and not a moment too soon. On Wednesday, the company announced a batch of new safety features for teen accounts, along with a major crackdown on predatory behavior across its platforms.
In one of its largest enforcement actions to date, Meta said it removed more than 600,000 accounts linked to exploitative activity. That includes 135,000 Instagram accounts that were leaving sexualized comments or requesting explicit images, often targeting accounts run by adults on behalf of children.
New tools for a persistent problem
The company’s latest safety update is focused on preventing “unsafe or unwanted contact,” especially in direct messages. Teen users will now see more context about who they’re chatting with, such as when an account was created, along with a combined option to block and report someone in a single tap.
Meta says these tools are already making a difference. In June alone, teens blocked more than a million accounts and reported another million after seeing a “Safety Notice” alert.
There’s also a new “Location Notice” that lets users know if someone they are chatting with might be in another country — something scammers often try to hide. And Meta’s nudity protection tool, which automatically blurs suspected explicit images, is now on by default for teen accounts. So far, most users are keeping it that way: Meta says 99% have left the feature turned on, and nearly half of users who see a warning decide not to forward the flagged image.
Kids’ accounts get more protection too
Meta is also extending some of these protections to adult-run accounts that primarily feature children, such as those managed by parents or talent reps. These accounts will now default to stricter settings to limit messages from strangers and filter out offensive comments. Meta will also make it harder for flagged adults to find or interact with these profiles at all.
The move comes as concerns about child exploitation online continue to grow. Platforms like Meta and Snapchat have faced increasing pressure from lawmakers and advocacy groups to do more to protect young users. The Kids Online Safety Act, which would require social platforms to prioritize kids’ well-being, was reintroduced in Congress earlier this year.
Meta says it’s getting more aggressive
The company says it’s not just improving its tools — it’s also cracking down on bad actors at scale. Alongside the 135,000 takedowns for sexualized behavior, Meta removed another 500,000 Instagram and Facebook accounts linked to the original violators. And it shared that data with other tech companies through the Tech Coalition’s Lantern program to help prevent cross-platform abuse.
Meta has also been battling spam and impersonation this year, taking down around 10 million fake accounts that were pretending to be major content creators.
A step in the right direction?
While these new features and takedowns won’t instantly solve the platform’s safety challenges, they show Meta is paying closer attention, especially as public scrutiny heats up. Whether that’s enough to satisfy lawmakers or rebuild trust with users is still an open question.