The OpenAI team responsible for AI's existential dangers has disbanded

With Ilya Sutskever's and Jan Leike's departures, the "superalignment" team is no more

Ilya Sutskever, co-founder and chief scientist of OpenAI, speaks at Tel Aviv University in Tel Aviv on June 5, 2023.
Photo: JACK GUEZ / AFP (Getty Images)

The OpenAI team responsible for artificial intelligence’s existential dangers is no more, after co-founder and chief scientist Ilya Sutskever and Jan Leike, his co-lead of the company’s “superalignment” team, resigned on Tuesday.

OpenAI told Wired that it has disbanded the team, adding that its work will be absorbed by other research efforts across the company. The firm announced the superalignment team last July, with Sutskever and Leike at the helm.

“Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems,” OpenAI said at the time. “But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”

The team was tasked with working on this problem, and OpenAI said it was dedicating 20% of its compute to the effort over the next four years.

Sutskever, who played a role in the brief ouster of OpenAI chief executive Sam Altman in November, wrote on X Tuesday that he made the decision to leave after almost a decade at the company. “I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time,” he wrote.

On Friday, Leike shared a thread on X about his decision to leave the company, which he called “one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us.”

Leike wrote that he joined OpenAI because he thought it “would be the best place in the world to do this research,” but that he has “been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.” He continued the thread, writing that OpenAI should prioritize safety as it pursues artificial general intelligence, or AGI.

In a response to Leike’s thread, Altman wrote on X he is “super appreciative” of Leike’s “contributions to openai’s alignment research and safety culture, and very sad to see him leave. he’s right we have a lot more to do; we are committed to doing it. i’ll have a longer post in the next couple of days.”

Sutskever and Leike join a list of other OpenAI employees who have departed the company recently, including others from the superalignment team and researchers working on AI policy and governance. The announcements came a day after OpenAI revealed its newest multimodal AI model, GPT-4o.