OpenAI's former chief scientist is launching a new AI startup focused on safe 'superintelligence'

Ilya Sutskever used to lead OpenAI's team focused on curbing the dangers of advanced AI

Ilya Sutskever, who co-led OpenAI’s Superalignment team, left the company in May.
Photo: Jack Guez/AFP (Getty Images)

After almost a decade at artificial intelligence startup OpenAI, most recently as its chief scientist, Ilya Sutskever left in May. One month later, he has started his own AI company: Safe Superintelligence Inc.

“We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence,” Sutskever wrote Wednesday on X, formerly Twitter. “We will do it through revolutionary breakthroughs produced by a small cracked team.”

Sutskever is joined in his new venture by Daniel Gross, who formerly directed Apple’s AI efforts, and Daniel Levy, another ex-OpenAI researcher. The startup has offices in Tel Aviv and Palo Alto, California.

Alongside Jan Leike — who also left OpenAI in May and now works at Anthropic, an AI firm started by former OpenAI employees — Sutskever led OpenAI’s Superalignment team. The team was focused on controlling AI systems and ensuring that advanced AI wouldn’t pose a danger to humanity. It was dissolved shortly after both leaders departed.

Safe Superintelligence, as its name implies, will focus on safety efforts similar to those of Sutskever’s former team at OpenAI.

“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” the company’s co-founders wrote in a public letter. “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

The startup’s launch comes amid broader turnover at OpenAI. Several members, including founding member Andrej Karpathy, have left the company in recent months, and a group of former staffers last month signed an open letter raising the alarm over “serious risks” at OpenAI stemming from oversight and transparency concerns.

Sutskever was one of the OpenAI board members who attempted to oust fellow co-founder and CEO Sam Altman in November; Altman was quickly reinstated. The directors cited concerns over Altman’s handling of AI safety and allegations of abusive behavior. Former board member Helen Toner has since said that Altman’s manipulative behavior and lies created a culture some executives described as “toxic abuse.”