Snapchat’s My AI feature, launched on May 31, left millions of its users anxious after it appeared to go rogue on Aug. 15. The bot, powered by OpenAI’s ChatGPT, posted a story of its own and then stopped responding to users’ messages, prompting speculation that the AI was “hallucinating.”
The My AI feature lets Snapchat users send the bot snaps and texts and receive replies generated by the AI. In this way, the bot can converse with users and offer them recommendations.
Earlier this week, though, the bot posted a flat, two-toned image to its story, which many users mistook for a photo of the walls and ceilings of their own homes. When, anxious about their privacy, they questioned the bot, the only response they received was: “Sorry, I encountered a technical issue.”
The story, which was later deleted, left users confused about how Snapchat’s integrated AI feature actually operates. Snap officials were quick to allay concerns, calling the incident a glitch rather than a case of the bot acting on its own. “My AI experienced a temporary outage, that’s now resolved,” a spokesperson told CNN.
When an AI spontaneously produces aberrant information or behavior, it is said to be “hallucinating.” Sundar Pichai, the CEO of Google, called this side of AI a “black box” after one of the company’s models appeared to teach itself a new language. For businesses deploying generative AI, this unpredictability poses a problem, as Snap found out this week.
Snap has a history of data protection lapses
As a social platform, Snapchat has never been averse to introducing new features discreetly or, at times, without users’ knowledge, and its record on protecting data is similarly spotty. In 2016, a cyber attacker posing as Snap CEO Evan Spiegel tricked an employee into handing over payroll data on around 700 of the company’s staff. The following year, Snap inadvertently revealed that it could install image-recognition AI on users’ devices without compromising the app’s size or functionality. In 2019, former Snap staff anonymously revealed that employees could access user details and content through an internal tool called SnapLion.
This week’s “glitch” drew furious reactions from users, many of whom called on Snapchat to scrap the My AI feature, which remains pinned to the top of the chat feed; only paying Snapchat+ subscribers can remove it. The bot had already sparked safety concerns days after its launch, when reviewers found that it sometimes responded inappropriately to messages. In response, Snap added more safeguards and parental controls.
In a review released in June, barely a month after the bot’s launch, Snapchat said that more than 150 million people had already sent 10 billion messages to My AI, making it one of the most trafficked consumer chatbots in the world. But the AI “glitch” only underscores the data safety concerns around the bot, especially over how it uses the data it collects every day from millions of teenage users.