OpenAI didn't keep promises made to its AI safety team, report says

OpenAI didn't fulfill its commitment to provide 20% of its computing power to its Superalignment team, sources told Fortune

OpenAI chief executive Sam Altman and chief scientist Ilya Sutskever speak at Tel Aviv University in Tel Aviv on June 5, 2023.
Photo: JACK GUEZ / AFP (Getty Images)

Last July, OpenAI announced its “Superalignment” team, a unit dedicated to controlling the existential risks posed by artificial intelligence, to which the company said it would dedicate 20% of its computing power over the next four years. But the company reportedly did not keep its commitment.

According to the report, the Superalignment team’s requests for access to GPUs (graphics processing units, the chips used to train and run AI models) were repeatedly turned down by OpenAI leadership. The team’s promised computing budget also never came close to reaching 20%, Fortune reported, citing unnamed people familiar with the team. OpenAI disbanded the team last week after its co-leads, Ilya Sutskever and Jan Leike, resigned from the company. OpenAI did not immediately respond to a request for comment.


Sources told Fortune that the Superalignment team was never allocated the promised computing power, and that what it did receive was a much smaller share of the company’s regular computing budget. One person said OpenAI never set clear terms for how the promised 20% of compute was to be allocated, such as whether it meant “20% each year for four years” or “5% a year for four years.”


Meanwhile, Bob McGrew, OpenAI’s vice president of research, repeatedly told the Superalignment team that its requests for GPUs beyond what had already been budgeted were denied. Other OpenAI leaders, including chief technology officer Mira Murati, were also involved in rejecting those requests, sources told Fortune.

The shortfall in computing power didn’t bode well for the safety team. Without its allocated compute, the Superalignment team was unable to pursue more ambitious research, one person told Fortune. Sources also backed up the account of the team’s experience that Leike shared on X after announcing his resignation from the company. Leike wrote that “safety culture and processes have taken a backseat to shiny products,” and that the team was sometimes “struggling for compute and it was getting harder and harder to get this crucial research done.”


Some sources told Fortune that the team’s access to computing power worsened after OpenAI chief executive Sam Altman was temporarily ousted in November, an effort led by Sutskever, who was a co-founder and the company’s chief scientist. However, one person said the problems predated the ouster. Sutskever no longer had access to the Superalignment team after Thanksgiving, sources said.

Sutskever and Leike joined a growing list of OpenAI employees who have recently departed the company, including other members of the Superalignment team and researchers working on AI policy and governance.