Google’s latest experimental language chatbot is getting a heavily guarded release.
At its annual I/O conference, Google announced an Android app that will give users access to Google’s latest artificial intelligence (AI) language model—LaMDA 2. The app, called AI Test Kitchen, will be rolling out in the US in the coming months, but not for everyone. After testing with thousands of Googlers, the next step is to open the app up on an invitation-only basis to select academics, researchers, and policymakers.
The hesitance to open the product up to the public is understandable. After all, big tech’s experiments with AI tools have drawn flak on several occasions. In 2015, Google’s image-tagging algorithm was found to be labeling Black people as “gorillas.” In 2016, Twitter users taught Microsoft’s chatbot Tay to be a sex-crazed neo-Nazi; Zo, a second Microsoft chatbot launched months after Tay’s disastrous debut, leaned too far in the opposite direction, into censorship. In 2018, Amazon scrapped an internal AI recruitment tool because it was biased against women.
Even now, several years later, Google expects some “offensive” text to slip through during testing, CEO Sundar Pichai said at the event. But while the backlash from mishaps that play out in public can be immense, restricting the flow of feedback could hamper the product itself.
“Because they are completely controlling what they are sharing, it’s only possible to get a skewed understanding of how the system works, since there is an over-reliance on the company to gatekeep what prompts are allowed and how the model is interacted with,” Deborah Raji, an AI researcher who specializes in audits and evaluations of AI models, told The Verge.
Eventually, it will have to debut in the public sphere. Though currently touted as a showcase for AI research, the conversational product has the potential to revolutionize Google’s core offering: search.