On March 31, Italy became the first Western country to take action against an AI chatbot, ordering a temporary ban on ChatGPT. Italy’s data protection authority said that OpenAI, the creator of ChatGPT, collects personal data unlawfully.
“[T]here appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,” the agency wrote. It also cited ChatGPT’s tendency to make factual errors and the app’s lack of age verification.
Launched just five months ago, ChatGPT uses artificial intelligence to generate content in natural-sounding language. People are using it to compose emails, draft essays, and write code. The frenzy around ChatGPT has prompted other companies to launch similar chatbots or to provide services built on such technology.
Italy’s deputy prime minister criticized the data protection agency’s decision. “I find the decision of the Privacy Guarantor disproportionate,” Matteo Salvini wrote on Instagram. “[W]e don’t need to be hypocrites: privacy issues affect almost all online services, common sense is needed.”
The ban comes amid growing worries about the risks of AI. Tech leaders including Elon Musk and Apple co-founder Steve Wozniak have signed a letter calling for a pause in training AI models more powerful than GPT-4, the model underlying ChatGPT, in order to develop and implement safety protocols.
It’s not clear whether Italy’s ban will have the intended effect. Generative AI feeds into Microsoft Bing’s chat as well as digital services like Slack and Zoom. Meanwhile, Google’s Bard, a rival to ChatGPT, doesn’t fall under the ban.
AI experts have urged lawmakers to ask rigorous questions about how the technology is being deployed and by whom. Aleksander Mądry, a professor of computing at the Massachusetts Institute of Technology, told a Congressional panel in March that even experts don’t understand exactly why ChatGPT generates what it does.
As a result, when it comes to regulating such systems, governments can’t design policies as they would for humans, according to Mądry. He also told the panel that because businesses often layer other AI on top of a base technology like ChatGPT, it’s tough to know who’s responsible when a problem results from the use of an AI service.
In a statement, OpenAI said that it is working to reduce the amount of personal data used to train AI systems like ChatGPT and that AI regulation is necessary. “We look forward to working closely with [the Italian data agency] and educating them on how our systems are built and used,” the company said.