Now, you might wonder why Microsoft would unleash a bot upon the world that was so unhinged. Well, it looks like the company simply underestimated how unpleasant many people are on social media.

It’s unclear how much Tay “learned” from these hateful attitudes—many of its offensive remarks were the result of other users goading it. In some instances, people simply commanded the bot to repeat racist slurs verbatim.

Microsoft has since removed many of the offensive tweets and blocked the users who prompted them.

The bot is also apparently being reprogrammed. It signed off Twitter shortly after midnight on Thursday, and the company has not said when it will return.


A Microsoft spokesperson declined to confirm the legitimacy of any tweets, but offered Quartz this comment:

The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.

The debacle is a prime example of how humans can corrupt technology, a truth that grows more disconcerting as artificial intelligence advances. Talking to artificially intelligent systems is like speaking to children—even inappropriate comments made in jest can have a profound influence.

The bulk of Tay’s non-hateful tweets were actually pretty funny, albeit confusing and often irrelevant to the topic of conversation. The bot repeatedly asked people to send it selfies, professed its love for everyone, and showed off its command of decade-old slang.
