Boss moves at OpenAI

A week is a long time in AI startup politics

Hi Quartz members!

After it became clear on Tuesday (Nov. 21) that Sam Altman would return as the CEO of OpenAI, the company’s employees threw a party at their San Francisco headquarters. At some point, a fire alarm went off, and two fire trucks showed up, ready to meet the inferno. The staff poured out of the building and into the courtyard. It turned out that a smoke machine, handily procured for the party, had triggered the alarm. The fire trucks left. The employees went back inside. The party continued.

As metaphors go, this was an excellent one for the kind of week OpenAI has had: a brief spell of panic as Altman was fired by his board; the arrival of Microsoft, as a first responder, offering Altman a research lab of his own; the employees’ threats to leave OpenAI and follow Altman; and the eventual replacement of the board and the reinstatement of Altman. Just a rogue smoke machine, folks—move along now, nothing to see here.

But let’s tarry a little, and ask a question that is in danger of being forgotten. Why did the smoke alarm go off? Why was Altman even fired?

In its statement last week, the OpenAI board complained that Altman “was not consistently candid in his communications with the board,” which seems like a roundabout way of saying that he misrepresented something, if not multiple somethings. But the board offered no clarity on what this was all about. It wasn’t just the public that was kept in the dark. Emmett Shear, who was briefly CEO during Altman’s absence, couldn’t find any written record of why the board dismissed Altman. The Wall Street Journal reported that during an all-hands meeting after Altman was fired, employees asked whether they’d ever know the reason. “No,” a board member replied.

Enter speculation.


ONE BIG NUMBER

747: The number of OpenAI employees, out of the total headcount of 770, who threatened to resign unless Altman was brought back as CEO under a new board. Which makes one wonder: What happens to the 23 employees who didn’t support Altman’s cause, now that he has returned?


FOLLOW THE LEADER

In the dribbles and rivulets of information that have leaked since the coup-that-wasn’t, a few hypotheses emerge:

  • There was no single inciting incident that led to Altman’s firing, per some accounts. Rather, Wall Street Journal sources described it as a slow erosion of trust. “Also complicating matters were Altman’s mounting list of outside AI-related ventures, which raised questions for the board about how OpenAI’s technology or intellectual property could be used,” the Journal noted. (Altman was, for instance, trying to raise billions in funds for a new AI chip venture, code-named Tigris, as well as for another AI hardware project.)
  • Philosophical differences had simmered between the board and its CEO for a while. Altman is an AI accelerationist, believing that the development of AI must happen at a fast clip. On the board, though, were decelerationists: people worried about AI’s numerous safety issues, and wishing to moderate the pace of progress while building guardrails into its use. Altman, naturally, would have regarded decelerationists as obstacles to any full-blown commercialization of AI. The board, for its part, seemed to be so genuinely worried about safety, a professor of ethics told the Guardian, that it was “willing to burn the company down.”
  • Or maybe there was a single inciting incident? A Reuters report revealed the existence of a letter, written by several staff researchers to the board, about an internal project called Q*. According to Reuters’ sources, the new algorithm marked a step toward an artificial general intelligence so powerful that it could pose severe threats to humanity.

The argument could be made that the shenanigans at OpenAI should engage us only at the level of soap opera. OpenAI is not a company that touches the lives of everyone, or even most people, and the promise of its technology is as yet unproven. But the question of AI safety is a burning one. (In fact, OpenAI’s tortured nonprofit-cum-subsidiary corporate structure comes from its original aim to prioritize safety over profit.) It will eventually matter to all of us who wins the tussle between accelerationists and decelerationists. For those keeping score at home, with the return of Altman and the dissolution of the board that fired him, we have: accelerationists 1, decelerationists 0.


QUOTABLE

“Most of the CEO job (and the majority of most executive jobs) are very automatable. There are of course the occasional key decisions you can’t replace.”

— Emmett Shear, who served briefly as interim CEO of OpenAI before Altman’s return, posting on X just days before he was appointed to the role


ONE 🪑 THING

When the dust settled somewhat, as of Wednesday (Nov. 22), OpenAI had a new board made up of three wealthy white men. One of them, the former US Treasury secretary Larry Summers, called ChatGPT “the most important general purpose technology since the wheel or fire,” and warned that it was “coming for the cognitive class” and their jobs. The second, Adam D’Angelo, is the CEO of Quora, the question-and-answer platform that launched its own ChatGPT-driven bot, named Poe, last December. D’Angelo was on the old board as well.

Which leaves the chair, Bret Taylor, co-creator of Google Maps and Facebook’s “Like” button, and the former co-CEO of Salesforce. Taylor had announced his own AI startup back in February, but it’s difficult to think of his role on the board as anything other than that of an enforcer, a man who Lays Down The Rules. Last year, as chair of the Twitter board, he led negotiations with Elon Musk, who had offered $44 billion for the company. Then, when Musk tried to back out, Taylor led the lawsuit that forced him to honor the deal. It may not have worked out very well for Twitter’s users, but it was a windfall for shareholders.

This new board is a first draft; its function, among others, is to find more members for itself. Taylor, whom the tech journalist Kara Swisher described as “a very pleasant and anodyne fella,” is well-connected in the Valley. (He’s also a “calmer guy in a sea of frantic techies,” she wrote.) But he’ll need the enforcer part of his character to install a proper board: one that isn’t too tiny, or made up entirely of people with no financial stake in OpenAI, as the former board was; one that can weigh in with broader, deeper, and more diverse experience on the big decisions. After all, if ChatGPT is like the wheel or fire, there may be no bigger, more consequential decisions in tech than the development of AI.


Thanks for reading! Sunday Reads is taking tomorrow off but will be back next weekend. Meanwhile, don’t hesitate to reach out to us with comments, questions, or topics you want to know more about.

Have a weekend free of internecine power struggles!

—Samanth Subramanian, Weekend Brief editor