Scroll through Sam Altman’s Twitter page, and you’ll see a feed filled with photos of the OpenAI CEO posing with world leaders.
Altman has met with Indian prime minister Narendra Modi and sat down with South Korean president Yoon Suk Yeol. He’s traveled to Israel, Jordan, Qatar, and the United Arab Emirates. And that’s all just this week.
These meetings come on the heels of Altman’s European tour last month, during which he met with French president Emmanuel Macron and European Commission president Ursula von der Leyen.
Why all this schmoozing with world leaders? For one, the breadth of Altman’s tour shows he is determined to shape the debate on regulating AI following the release of OpenAI’s ChatGPT late last year. There’s also a real need to educate national leaders and lawmakers about the technology, and Altman, as the head of a leading artificial intelligence company that is helping usher in a new era of AI, is well placed to do it.
Of course, Altman isn’t the only CEO holding AI-related meetings with lawmakers. Google CEO Sundar Pichai met with EU regulators in Brussels in May to discuss the technology, and Anthropic CEO Dario Amodei met with US president Joe Biden the same month to discuss AI’s potential dangers. Clearly, AI CEOs want a hand in shaping the rules.
It’s a reversal from an earlier era of Big Tech, when CEOs such as Pichai and Meta’s Mark Zuckerberg tended to stay on the sidelines rather than proactively engage with regulators.
The EU’s legislation would be the first in the world to regulate the use of AI. The proposed AI Act would sort AI systems into categories by level of risk. High-risk systems, which include recruitment tools and medical devices, would face compliance obligations such as data-governance requirements. Systems posing an “unacceptable risk,” such as social scoring (building risk profiles of individuals based on surveillance), would be banned outright. Even lower-risk systems would have to notify users that they are interacting with an AI unless that is evident, and deepfakes would have to be labeled.
It’s a fine balance: Regulate the technology too narrowly and you may fail to catch certain harms; regulate it too broadly and you could stifle innovation, as Johann Laux, who studies the legal implications of AI at the Oxford Internet Institute, told Euronews. That debate is likely to grow louder as AI’s capabilities and influence expand. In the meantime, Altman is determined to shape regulation before it shapes him.