A new app flags the old tweets that could get you fired

A one-stop shop.
Image: REUTERS/Dado Ruvic

It is entirely possible that the words that will end your career prospects for the foreseeable future have been written already, and by your own hand. Plenty of promising job candidates have had their hopes scuppered by a potential employer’s discovery of their foul, ignorant, or otherwise offensive social media posts—posts that may have been written during a younger, more foolish phase, but that the candidates must still bear responsibility for.

A wise job candidate is a vigilant curator of their own social media history, and one web developer has created a tool to help. For $2.99, the web-based application scans the user’s Twitter history and flags any tweets containing swear words, racially or sexually offensive terms, and other potentially problematic language. It was created by August Van De Ven, an 18-year-old Dutch web developer whose other projects include a generator of fictional client briefs that novice graphic designers can use as fodder when practicing logo design.

“I got the idea after I noticed that there were various people that got in trouble after someone else dug up some of their old tweets,” he told Quartz At Work. “The app makes you aware of these tweets so you don’t have to go through or delete all of your [Twitter history].”

The bot is simple in design—users concerned about their public profile could do a similar search for free using Twitter’s own search functions, though the app is faster.

Out of curiosity, I ran the 6,258 tweets I’ve posted since March 2011 through the service. Within seconds, it returned a surprisingly long list of tweets containing potentially objectionable words. Every tweet containing the word “rape” was highlighted, as well as those containing the words “idiot,” “asshole,” “fuck,” “penis,” and various colloquialisms referring to the latter. I was surprised to see that in 2014 I tweeted the words “Dick pics”—was there some late-night internet search I’d forgotten about?—but upon review, I stand by my statement.

A flagged tweet isn’t always problematic. The bot can spot questionable terms, but it can’t interpret their context, highlighting a tweet about Moby Dick alongside one describing the scheming butler on Downton Abbey by that pejorative. It also seemed concerned only with words I’d written myself, not with content I retweeted or linked to. If I wrote about campus rape in a tweet, for example, the bot flagged it; if I retweeted without comment a post mentioning that subject, it didn’t.
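The behavior described above amounts to simple keyword matching with no sense of context. A minimal sketch of that approach, with a hypothetical function name and a small sample word list drawn from terms mentioned in this article (not the developer’s actual code), shows why a tweet about Moby Dick gets flagged right alongside a genuinely offensive one:

```python
import re

# Sample of flagged terms mentioned in the article; the real app's
# word list is presumably much longer.
FLAGGED_WORDS = {"rape", "idiot", "asshole", "dick"}

def flag_tweets(tweets):
    """Return the tweets containing any flagged word, regardless of context."""
    pattern = re.compile(
        r"\b(" + "|".join(FLAGGED_WORDS) + r")\b", re.IGNORECASE
    )
    return [t for t in tweets if pattern.search(t)]

tweets = [
    "Just finished Moby Dick, what an ending",          # flagged: innocuous
    "Campus rape statistics deserve more attention",     # flagged: serious topic
    "Lovely weather today",                              # not flagged
]
print(flag_tweets(tweets))
```

Both of the first two tweets are returned, which illustrates the article’s point: a word match is not a judgment about meaning, so the human review step stays with the user.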

So if you are a person who doesn’t write bigoted, homophobic, or sexually offensive content yourself, but just likes to amplify the work of all of your many internet friends who do, the app cannot help you. And it probably shouldn’t.