Just a few days ago, Sam Altman asked US lawmakers to regulate artificial intelligence.
“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he told the Senate Judiciary Committee on May 16.
During his testimony, Altman, the CEO of OpenAI, proposed that Congress pass legislation to create a new federal agency to regulate AI. He suggested a licensing regime to determine who can develop and use large language models—like GPT-4, the one his company created—as well as independent auditing and testing requirements for this software.
GPT-4 and its applications—namely, the chatbot ChatGPT and image-generator DALL-E—have made OpenAI the torchbearer in AI and Altman a posterboy for the technology. So, his call for regulation carried weight.
Altman enchanted lawmakers in Washington, schmoozing with them at a private event hours before his testimony, which was warmly received by typically tech-skeptical legislators.
But Altman’s charm offensive didn’t translate across the Atlantic. And now his call for regulation has been muddled by the reality that, well, he didn’t really want regulation after all. At least, not the kind European Union lawmakers have in mind.
The European Union is much closer to passing AI legislation than the US, but Altman is not a fan of its proposed law. He told Reuters on Wednesday (May 24) that the current draft of the EU’s AI Act would be “over-regulating,” and suggested that OpenAI could “cease operating” in Europe if it couldn’t comply.
The AI Act, which has been in the works for two years, would categorize AI tools by risk—including banning those deemed to pose the highest risk. The higher a tool’s assessed risk, the more compliance work its maker must do to assure the EU that it’s safe. In recent months, as ChatGPT has become a global sensation, lawmakers have added new requirements to the bill, including liability for the creators of large language models like OpenAI’s GPT-4 or Google’s PaLM for how people use their models, the Financial Times reported. The bill would also force companies to publish lists of the copyrighted materials they train their models on.
Kim van Sparrentak, a Dutch lawmaker in the European Parliament, said she and her colleagues “shouldn’t let ourselves be blackmailed by American companies,” Reuters reported.
Altman, who has been on a whirlwind trip across Europe that started in Lisbon, Portugal, for a secretive Bilderberg Meeting, and took him across Germany, Poland, the UK, Spain, and France, backtracked from his threat to leave Europe on Friday (May 26). Reflecting on the “very productive week of conversations,” Altman downplayed his previous statements, saying he has “no plans to leave” Europe.
Altman wasn’t the only AI leader on a European tour this week. Google CEO Sundar Pichai was also visiting the continent for the same reason: to win favor with EU lawmakers over AI.
Pichai has taken a different approach to courting regulator support in Europe, pushing for a stopgap measure called the AI Pact—a voluntary agreement for companies developing AI to bridge the gap between now and when the EU’s new law is passed and goes into effect. It’s not yet clear what the pact would entail, but it shows that Pichai and Altman are taking different paths to quelling the EU’s concerns.
The reality, however, is simple. As much as technology companies plead with lawmakers to regulate them, they don’t really want to be regulated. Why would they? The only time a company asks to be regulated is when it knows that regulation is coming—it just wants to be the one to shape that regulation so it advantages itself over competitors.