OpenAI has a new safety committee — and of course it includes Sam Altman

The announcement comes just weeks after the company's AI existential dangers team was shuttered

Sam Altman. Photo: Chris Ratcliffe/Bloomberg (Getty Images)

OpenAI has formed a new safety and security committee after its existential dangers team was disbanded earlier this month.

The new oversight team will be led by board chair Bret Taylor, directors Adam D’Angelo and Nicole Seligman, and OpenAI’s chief executive Sam Altman, the artificial intelligence company announced Tuesday. The committee’s first task will be to evaluate and develop OpenAI’s processes and safeguards over the next 90 days, the company said.

At the end of the three-month period, the committee will share its recommendations with the full board and, following review, the company will publicly share an update on adopted recommendations.

The company also confirmed that it has begun training its “next frontier” AI model, which will succeed the GPT-4 model behind its chatbot, ChatGPT, and help it advance toward artificial general intelligence (AGI).

The safety committee will also include OpenAI technical and policy experts, including head of preparedness Aleksander Madry, head of safety systems Lilian Weng, head of alignment science John Schulman, head of security Matt Knight, and chief scientist Jakub Pachocki.

The announcement comes after the company shuttered the team responsible for guarding against AI’s existential dangers, following the May 14 resignations of the two co-leads of its so-called “superalignment” team: co-founder and then-chief scientist Ilya Sutskever and Jan Leike.

Upon his departure, Leike wrote in a thread on X that he had reached a “breaking point” over disagreements with the company’s core priorities, adding that “safety culture and processes have taken a backseat to shiny products” in the last few years.

Sutskever, who helped briefly oust Altman from OpenAI’s leadership late last year, and Leike are the latest of several OpenAI employees to exit in recent months. Other departures include members of the superalignment team and researchers working on AI policy and governance who felt the company had strayed from its original mission.

In response to the high-profile departures, Altman and OpenAI President Greg Brockman said in a co-signed post on X that they are continuing to lay the groundwork for safely deploying increasingly capable AI models. “We need to keep elevating our safety work to match the stakes of each new model,” they said.

That includes a “very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities,” the pair wrote.