Matt Levine, in response to:
If you lie to board members about other board members in an attempt to gain control over the board, I assert that the board should fire you, pretty much no matter what
No! Wrong! Not no matter what! In a normal company with good governance, absolutely. Lying to the board is the main bad thing that the CEO can do, from a certain perspective. But there are definitely some companies — Elon Musk runs like eight of them, but also OpenAI — where, if you lie to board members about other board members in an attempt to gain control over the board, the board members you lie about should probably say “I’m sure that deep down this is our fault, we’re sorry we made you lie about us, we’ll see ourselves out.”
To be clear, I am very sympathetic to the OpenAI board’s confusion. This was not a simple dumb mistake. They did not think “we are the normal board of a normal public company, and we have to supervise our CEO to make sure that he pursues shareholder value effectively.” This was a much weirder and more reasonable mistake. They thought “we are the board of a nonprofit set up to pursue the difficult and risky mission of achieving artificial general intelligence for the benefit of humanity, and we have to supervise our CEO to make sure he does that.” Lying to the board seems quite bad as a matter of, you know, AI misalignment.
Back when the OpenAI board attempted and failed to fire Sam Altman, we faced a highly hostile information environment. The battle was fought largely through control of the public narrative, and the above was my attempt to put together what happened.
My conclusion, which I still believe, was that Sam Altman had engaged in a variety of unacceptable conduct that merited his firing.
In particular, he had very much ‘not been consistently candid’ with the board on several important occasions. Among other things, he lied to board members about what other board members had said, with the goal of forcing out a board member he disliked. There were also other instances in which he misled and was otherwise toxic to employees, and he played fast and loose with the investment fund and other outside opportunities.
I concluded that the story that this was about ‘AI safety’ or ‘EA (effective altruism)’ or existential risk concerns, other than as Altman’s motivation to attempt to remove a board member, was a false narrative. It was spread largely by Altman’s allies and by those determined to hate on anyone concerned that future AI might get out of control or kill everyone, often using EA’s bad press or vibes as a point of leverage.
A few weeks later, I felt that leaks confirmed the bulk of the story I told at that first link, and since then anonymous sources have confirmed my account was centrally true.
The Big Picture Going Forward
Thanks to Keach Hagey at the Wall Street Journal, we now have by far the most well-researched and complete piece on what happened: The Secrets and Misdirection Behind Sam Altman’s Firing From OpenAI. Most, although not all, of the important remaining questions are now definitively answered, and the story I put together has been confirmed.
The key now is to Focus Only On What Matters. What matters going forward are:
Also important is what happened afterwards, which was likely caused in large part by the events themselves, by the way they were framed, and by Altman’s consolidated power.
In particular, Sam Altman and OpenAI, whose explicit mission is building AGI and who plan to do so within Trump’s second term, started increasingly talking and acting like AGI was No Big Deal, except for the amazing particular benefits.
Their statements don’t feel the AGI. They no longer tell us our lives will change that much. They do not even bother to tell us it is important to protect against the key downside risks of building machines smarter and more capable than humans, such as the risk that those machines effectively take over, or perhaps end up killing everyone.
And if you disagreed with that, or opposed Sam Altman? You were shown the door.
Beware being distracted by the shiny. In particular:
This is all quite bad, but things could be far worse. OpenAI still has many excellent people working on alignment, security and safety, and they have put out a number of strong documents. By that standard, and in terms of how responsibly they have actually handled their releases, OpenAI has outperformed many other industry actors, although it remains less responsible than Anthropic. Companies like DeepSeek, Meta and xAI, and at times Google, work hard to make OpenAI look good on these fronts.
Now, on to what we learned this week.
Hagey Verifies Out the Story
Hagey’s story paints a clear picture of what actually happened.
It is especially clear about why this happened. The firing wasn’t about EA, ‘the safety people’ or existential risk. What was this about?
Altman repeatedly lied to, misled and mistreated employees of OpenAI. He repeatedly lied about and withheld importantly material facts, including directly to the board. There was a large litany of complaints.
The big new fact is that the board was counting on Murati’s support. But partly because of this, they felt they couldn’t disclose that their information came largely from Murati. That doesn’t explain why they couldn’t say this to Murati herself.
If the facts asserted in the WSJ article are true, I would say that any responsible board would have voted for Altman’s removal. As OpenAI’s products got more impactful, and the stakes got higher, Altman’s behaviors left no choice.
Claude agreed. This was one shot; I pasted in the full article and asked:
I agree, on both counts. There are only two choices here; at least one must be true:
That doesn’t excuse the board’s botched execution, especially its failure to disclose information in a timely manner.
Key Facts From the Story
The key facts cited here are:
Murati was clearly a key source on many of these firing offenses (and presumably for this article, given its content and timing, although I don’t know anything nonpublic). Despite this, even after Altman was fired, the board didn’t even tell Murati why they had fired him while asking her to become interim CEO, and in general stayed quiet largely (in this post’s narrative) to protect Murati. But then, largely because of the board’s communication failures, Murati turned on the board and the employees backed Altman.
Dangers of False Narratives
This section reiterates and expands on my warnings above.
The important narrative here is that Altman engaged in various shenanigans and made various unforced errors that together rightfully got him fired. But the board botched the execution, and Altman was willing to burn down OpenAI in response and the board wasn’t. Thus, Altman got power back and did an ideological purge.
The first key distracting narrative, the one I’m seeing many fall into, is to treat this primarily as a story about board incompetence. Look at those losers, who lost, because they were stupid losers in over their heads with no business playing at this level. Many people seem to think the ‘real story’ is that a now defunct group of people were bad at corporate politics and should get mocked.
Yes, that group was bad at corporate politics. We should update on that, and be sure that the next time we have to Do Corporate Politics we don’t act like that, and especially that we explain why we are doing things. But the group that dropped this ball is defunct, whereas Altman is still CEO. And this is not a sporting event.
The board is now irrelevant. Altman isn’t. What matters is the behavior of Altman, and what he did to earn getting fired. Don’t be distracted by the shiny.
A second key narrative spun by Altman’s allies is that Altman is an excellent player of corporate politics. He has certainly pulled off some rather impressive (and some would say nasty) tricks. But the picture painted here is rife with unforced errors. Altman won because the opposition played badly, not because he played so well.
Most importantly, as I noted at the time, the board started out with nine members, five of whom at the time were loyal to Altman even if you don’t count Ilya Sutskever. Altman could easily have used this opportunity to elect new loyal board members. Instead, he allowed three of his allies to leave the board without replacement, leading to a deadlock over control, which then led to the power struggle. Given that Altman knows so many well-qualified allies, this seems like a truly epic level of incompetence to me.
The third key distracting narrative, the one Altman’s allies have centrally told since day one and which is entirely false, is that this firing (which they misleadingly call a ‘coup’) was ‘the safety people’ or ‘the EAs’ trying to ‘destroy’ OpenAI.
My worry is that many will see that this false framing is presented early in the post, and not read far enough to realize the post is pointing out that the framing is entirely false. Thus, many or even most readers might get exactly the wrong idea.
In particular, the piece opens with an irrelevant story echoing this false narrative: Peter Thiel at dinner, telling his friend Sam Altman a frankly false and paranoid story about Effective Altruism and Eliezer Yudkowsky.
Thiel says that ‘half the company believes this stuff’ (if only!), claims that ‘the EAs’ had ‘taken over’ OpenAI (if only again!), and predicts that ‘the safety people,’ whom Thiel has on various occasions literally and at length described as the biblical Antichrist, would ‘destroy’ OpenAI. Instead, the board in the end fell on its sword to prevent Altman and his allies from destroying OpenAI.
And it gets presented in ways like this:
We are told to focus on the nice people eating dinner while other dastardly people held ‘secret video meetings.’ How is this what is important here?
Then if you keep reading, Hagey makes it clear: The board’s firing of Altman had nothing to do with that. And we get on with the actual excellent article.
I don’t doubt Thiel told that to Altman, and I find it likely Thiel even believed it. The thing is, it isn’t true, and it’s rather important that people know it isn’t true.
A Full Reference and Reading List
If you want to read more about what has happened at OpenAI, I have covered this extensively, and my posts contain links to the best primary and other secondary sources I could find. Here are the posts in this sequence.
The write-ups will doubtless continue, as this is one of the most important companies in the world.