Helen Toner was recently interviewed on the TED AI Show. In the first segment, she explains why the OpenAI board decided to fire Sam Altman (video, transcript).
What should we make of Helen's account?
In this post, I'll go through the interview and examine each of the claims made.
For the tl;dr, skip straight to the "reviewing the claims" section.
Claim 1. Altman withheld information, misrepresented things, and in some cases “outright lied” to the board.
After a bit of setup, the interview begins as follows:
Helen then lists five examples. Let's take them one-by-one.
1.1 The board was not informed in advance about ChatGPT
Remarking on Toner's comments in a subsequent interview, Altman appears to accept this claim. He explains his action by suggesting that the ChatGPT release was not ex ante above the bar to report to the board:
GPT-3.5 was indeed available via the API from March 2022.
It has previously been reported that the ChatGPT release was not expected to be a big deal. For example, The Atlantic:
Similarly, commenting on Toner's interview, Andrew Mayne remarked:
So: it is not clear that “the board found out on Twitter” implies misconduct on Altman's part.
The intuitive force of this claim comes from the explosive popularity of ChatGPT. That's a bit unfair on Sam, as everyone agrees it was unexpected.
To make this a clear example of misconduct, we'd need evidence that the board set clear expectations which Sam then broke.
My takeaway: This claim is true. But it's not clearly a big deal. It would be a big deal if it violated a specific rule or expectation set by the board, but no such rule or expectation has been made public.
1.2 Sam didn't inform the board that he owned the OpenAI Startup Fund, even though he “constantly” claimed to have no financial interest in the company
Sam is widely known to be an active VC investor, so everyone knows that he has a bunch of indirect financial interests associated with running OpenAI. Presumably, the claim here is about direct financial interests.
Altman has no equity in OpenAI, and has often mentioned this in interviews. A typical example, reported by Fortune:
Has Sam ever publicly said he has no financial interest in the company? I haven't found an example on Google. Perplexity, GPT-4o and Claude 3 Opus could not find an example either.
Has Sam ever strongly suggested that he has no financial interest in the company? I've seen some claims he did this at a Senate hearing, so here's the transcript (and video):
Does this look like a deliberate attempt to mislead the Senate about his financial interests in OpenAI?
A charitable read:
An uncharitable read:
My take: equivocal.
Can anyone find more compelling examples of Sam directly saying, or deliberately suggesting, that he has no financial interest in OpenAI?
If we can't, then it seems like Toner's claim that Altman "constantly was claiming to be an independent board member with no financial interest in the company" is not a fair representation of his public statements.
Sam may, however, have been saying different things to the board in private. Perhaps Helen is referring to private statements. If so, ideally she would make these public, to substantiate the claim.
So far we've just been discussing what Sam did or didn't say about his financial interests in OpenAI.
Next: did Sam have direct financial interests in OpenAI?
OpenAI claims that while Sam owned the OpenAI Startup Fund, there was “no personal investment or financial interest from Sam”.
Huh?
Well, in February 2024, OpenAI said: “We wanted to get started quickly and the easiest way to do that due to our structure was to put it in Sam's name. We have always intended for this to be temporary.” In April 2024 it was announced that Sam no longer owns the fund.
If we assume that OpenAI's story is true, we might nonetheless expect Sam to have flagged this situation to the board. The charitable interpretation is: his failure to do so was a mistake. The uncharitable interpretation is: this is an example of Sam's tendency to negligently or deliberately withhold information from the board.
Might OpenAI's story be false? "They would say that", right?
Well—maybe. I'd guess there are internal documents (e.g. emails) that'd clearly support or contradict OpenAI's statement. The statement was issued in February 2024, during an ongoing SEC investigation. So, it'd be quite a big risk to lie here.
My takeaway: Sam has not directly claimed he has no financial interest in the company, at least in public. OpenAI claims that his ownership of the Startup Fund did not entail financial interest, anyway. Overall: shrug.
1.3 Sam gave inaccurate information about formal safety processes
This is consistent with the board's "not consistently candid" statement. No further detail, or supporting non-testimonial evidence, has been provided.
My takeaway: Could be a big deal, but we've no details or non-testimonial evidence.
1.4 Sam lied to other board members while trying to remove Helen from the board
What lie(s) is Helen referring to? She does not specify, so let's assume she's talking about the following incident, as reported by the New York Times:
The incident was also reported in Helen's December 2023 interview with the Wall Street Journal:
And also in the New Yorker:
So, the claim is: Sam lied to OpenAI board members to try to get Helen Toner removed from the board. Specifically, Sam told several board members that Tasha McCauley wanted Helen Toner removed from the board, and he knew this was untrue.
(Even more specifically: the WSJ says that Sam "left a misleading perception", while the New Yorker says that Sam "misrepresented" the situation. This is more ambiguous than alleging an "outright lie", but here I'm going to summarise the claim of all three accounts as "Sam lied".)
What evidence do we have to support this claim? In the quote above, the NYT cites "people with knowledge of the conversations". The WSJ cites "people familiar with the situation" and the New Yorker quotes “a person familiar with the board's discussions”.
So: we know that two or more people have anonymously given this account of events to a journalist.
Is it possible that the anonymous accounts come from just two people, and those people are Helen Toner and Tasha McCauley? Yes[1]. Is it likely? Dunno.
We can at least say: Helen is the only non-anonymous source who has said that Sam lied to the board while trying to get her removed from the board[2].
(Reminder: the four board members who signed the November statement stated that Sam was "not consistently candid" with the board, giving no further detail.)
Do we have any non-testimonial evidence (e.g. documentary evidence) to support this claim? In short: no.
Notable: the WSJ and New Yorker reports mention that the board members compared notes. So far, these notes have not been made public.
To sum up: we have testimony of Helen Toner and at least one other anonymous source. We don't have supporting non-testimonial evidence.
What is Altman's story? Sam responded to Toner in an interview on May 29th:
Presumably, Altman wants us to infer that he denies lying to the board.
Do we have evidence to support his denial?
Well, we know that the arguments made by the November board were not sufficient to convince other key stakeholders that Sam should go. What should we make of this?
Probably the most powerful stakeholder was Satya Nadella, who has an enormous financial interest in OpenAI. One might think that if Sam had been caught lying to the board, Nadella would not want to work with him. In fact, Nadella strongly supported Sam—offering Sam and the entire OpenAI team jobs at Microsoft in case OpenAI collapsed.
On the other hand, one might think that Nadella saw evidence of Sam lying to the board, but nevertheless decided that his interests were best served by keeping Sam as CEO.
Either scenario seems possible.
Shortly after the November weekend, OpenAI formed a "Special Committee" to investigate the events. In the words of Bret Taylor and Larry Summers:
OpenAI's March 2024 summary of the WilmerHale report reads as follows:
So, according to WilmerHale, Altman's conduct "did not mandate removal". What does that mean, exactly?
In this context, "mandate" probably means "legally mandate". If Sam had been found to have lied in the way that's alleged, would that legally mandate his removal? After several conversations with ChatGPT, my IANAL conclusion is: maybe, maybe not. So: the "his conduct did not mandate removal" statement doesn't help me settle claim (1.4). Perhaps an expert in this kind of thing could read more into it.
Some people read "it was a consequence of a breakdown in the relationship and loss of trust between the prior Board and Mr. Altman" as a euphemism for "Sam Altman lied". But these words do not specify the reason for the loss of trust, nor whether it was justified.
Some people read the absence of an evaluative judgement from WilmerHale (e.g. "the loss of trust was / was not justified", or "his conduct did not warrant removal") as telling. My impression (shared by ChatGPT) is that law firm investigations usually just report facts and legal judgements, unless the client explicitly requests otherwise. Typically, the non-legal judgements are left to the client.
The full WilmerHale report was not made public. Is that suspicious? In The Economist, Toner and McCauley suggest that it is:
My impression is that internal investigations are typically not released in public, and that the OpenAI summary was typical in its level of detail.
There's an irony to Toner and McCauley's criticism—the November board's communications were also criticised for lacking detail and failing to justify their actions.
How credible is the WilmerHale report? Did Altman—and/or other stakeholders with an interest in keeping Sam as CEO—have their thumb on the scale?
I've not found much to go on here. WilmerHale were appointed by the "Special Committee", namely Larry Summers and Bret Taylor:
WilmerHale appear to be a reputable law firm, although their actual name is "Wilmer Cutler Pickering Hale and Dorr", which is pretty ridiculous.
But yeah—does the outside view say that these things are usually a stitch-up? I don't know. I briefly searched for stats on the fraction of "independent investigations" that lead to CEOs getting fired, but couldn't find anything useful.
One might also wonder: can we trust OpenAI's March 2024 board to write an honest summary of the WilmerHale report? "Honest" in the sense of "no literal falsehoods"—my guess is "yes". "Honest" in the sense of "not deliberately misleading"—no. We should expect the March 2024 board to craft their summary of the WilmerHale report according to their own aims (much as any board would do).
So, what might have been omitted from the summary? If the WilmerHale report documented behaviour from Sam that the new board thought egregious, then they would have fired Sam. So we need to constrain our speculation to things which are bad, but not bad enough to undermine the board's support for Sam.
Who was on the OpenAI board when the conclusion of the WilmerHale investigation was announced?
So: Bret Taylor and Larry Summers read the report and concluded that Sam is the right CEO for OpenAI. Adam D'Angelo may or may not have agreed (he'd have lost 2-1 on a vote).
On the same day, Sam Altman rejoined the board, and the following new board members were added:
Presumably the three new arrivals also read the WilmerHale report. So we have at least five people who read the report and concluded that Sam is the right CEO for OpenAI. Probably we should count Satya Nadella as a sixth, even though Microsoft has an observer-only role.
So let's recap. The claim at stake is: Sam lied to OpenAI board members to try to get Helen Toner removed from the board. Specifically, Sam told several board members that Tasha McCauley wanted Helen Toner removed from the board, and he knew this was untrue.
The claim is asserted by Helen Toner and at least one other anonymous source. We don't have non-testimonial evidence to support the claim.
The claim is indirectly denied—or accepted yet seen as insufficiently damning—by power players in the November shenanigans (e.g. Satya Nadella), WilmerHale ("conduct did not mandate removal"), and at least five of the current board members. It's also indirectly denied by Sam ("very significantly disagree with her recollection").
My takeaway: Equivocal. It's a big deal if true, but the evidence is far from decisive. If anything, the balance of public evidence suggests that Sam did not make an egregious attempt to mislead the board.
1.5 There were more examples
The board members who voted to fire Sam Altman are: Ilya Sutskever, Adam D'Angelo, Tasha McCauley and Helen Toner.
Here, Helen claims that all four of these people "came to the conclusion that [they] just couldn't believe the things that Sam was telling us".
This claim is consistent with the board's original statement that Sam was not "consistently candid". It is consistent with the WilmerHale report ("breakdown of trust"). And Helen's account has not been disputed by any of the other three board members.
My takeaway: Shrug. I believe that the board members reached this conclusion—the question is whether it was justified.
Claim 2. Two executives said that Sam should not be CEO
They used the phrase "psychological abuse," telling us they didn't think he was the right person to lead the company to AGI, telling us they had no belief that he could or would change, no point in giving him feedback, no point in trying to work through these issues. They've since tried to minimize what they told us, but these were not casual conversations. They're really serious to the point where they actually sent us screenshots and documentation of some of the instances they were telling us about of him lying and being manipulative in different situations.
Several reports have suggested that the two executives were Mira Murati and Ilya Sutskever. Let's assume that is correct. Here's the New York Times:
So: Murati (via her lawyer) denies that she "approached the board in an effort to get Mr Altman fired". She also denies that she supported the board's decision to fire Sam. She confirms that she gave the board feedback about Sam, but notes that this was "all feedback Sam already knew".
Should we believe Murati? Well—it appears that her decisions were important for Sam's reinstatement as CEO. At one point, while Murati was interim CEO, anonymous sources told Bloomberg that she was planning to rehire Altman. Since November, Mira has continued in her role as CTO. She also remains a prominent figure (e.g. she hosted the GPT-4o demo). It seems unlikely, but not inconceivable, that this would happen if she had tried to get Sam fired.
Sutskever (via his lawyer) also denies that he approached the board (in the same NYT article, quoted above). He does not deny that he shared concerns about Sam with the board. And, of course, he voted for Sam's removal.
Sutskever's role in the November weekend—and his thoughts on what happened—remain unclear. He voted for Sam's removal, but on November 20th tweeted:
Then, on December 5th, he tweeted:
The tweet was deleted a few hours later.
Sutskever then disappeared from the public eye. He made no further public statements until May 15th, when he announced his resignation:
The announcement was followed by a picture of him with his arms around Sam Altman and Greg Brockman.
This is all a bit weird.
Actually it's so weird, and so hard to read, that I'm just going to shrug and move on.
To wrap this section, let's get back to Helen's remarks. I'll quote them again here:
There's nothing from Sutskever to contradict Helen's account. However, Mira's account does appear to contradict Helen's: Mira claims she didn't want Sam fired, while Helen claims that both execs told her "they didn't think [Sam] was the right person to lead the company to AGI". Either Helen or Mira is not being candid here, or Helen is referring to a conversation with some other executive, not with Mira.
If Helen wanted to defend her account, she could release the "screenshots and documentation" provided by the execs, and any other meeting notes she took. Mira could do something similar. Of course, both of them may be constrained by legal or moral obligations. For now, we just have their testimony.
My takeaway: He said, she said. Overall, I'm equivocal.
Sutskever's behaviour worries me though. A man under intense pressure, for sure. A victim of Altman's "psychological abuse"? Maybe…?
I've focussed on Helen's claim that two execs wanted Sam gone. I've ignored her claims that they reported "they couldn't trust him", a "toxic atmosphere" and "psychological abuse". The main reason is that I don't think we should update much on general claims of this kind. I explain why below.
Aside: how should we update on reports that staff are "scared of their CEO", that a CEO "creates a toxic culture", that a CEO "can't be trusted", etc?
Are OpenAI staff unusually afraid of Sam? Does Sam create an unusually toxic culture of fear, manipulation and lies?
My guess is that, for roughly all companies on the S&P 500, a journalist could easily find 5+ employees willing to anonymously testify that they are scared of their CEO, that the CEO is manipulative, that the CEO creates a "toxic culture", and so on. I think we should be basically unmoved by general claims like this. The things we should take seriously are (a) specific claims of misbehaviour supported by evidence, and (b) non-anonymous testimony from credible sources.
It would be great if someone wrote a post like this to review all the cases of (a) and (b). I started trying this myself, but I just don't have time. For now I'll just flag the most concerning instance of (b) that I've seen, namely this tweet by Geoffrey Irving:
Presumably, now that the NDA and non-disparagement paperwork has been relaxed, we'll see more people sharing their stories.
Claim 3. The board were afraid that Sam would undermine them if they tried to fire him
This is a plausible description of the beliefs of the four board members who decided to fire Sam.
It fits with many reported details of the November events, for example the fact that Altman had no advance warning, that Satya Nadella was not informed in advance, and that Murati was only informed the evening before.
Events proved that the board were justified in their belief. Is this damning for Sam? Do Sam's actions contradict his previous statements that the board should be able to fire him?
In short: no.
If a board tries to fire the CEO, the CEO doesn't have to just say "oh ok, I'm done". There are a bunch of acceptable ways in which the CEO can fight their board.
The November board had the ability to fire Sam. It didn't work out because they didn't make a persuasive case to key OpenAI staff and stakeholders.
In Sam's mind, the board was trying to fire him without good reason. So, he concludes, the problem is the board, not him. He might have been right about this.
All this is compatible with Sam sincerely believing that OpenAI should have a board that can fire him. The old board had that ability, and so does the new one. It's just an ability that is constrained by the need to avoid angering key staff and stakeholders so much that they threaten to destroy the company. That constraint on a board's power seems... normal?
The confusing thing here is that the effective power of a board to fire the CEO depends on the views of key stakeholders. If everyone except the board wants to keep the CEO, then the board has much less power than if everyone, including the board, wants the CEO gone. They'll have to talk some people around to their view.
It would be interesting to investigate what we know about what happened during the days when Sam was not CEO of OpenAI, with a view to evaluating Sam's behaviour during that period. I'm not going to do that here (except insofar as required to discuss claim 4, below).
My takeaway: yes, people normally resist attempts to fire them. The board was right to worry about this. I'd like to know more about what Sam did during the days between his firing and his reinstatement.
Claim 4. Why did OpenAI employees support Sam?
In the final part of her remarks on the November events, Toner offers her account of why OpenAI employees protested the board's decision so forcefully.
As a reminder: in response to the board's actions, some 95% of employees signed a letter threatening to quit unless Sam Altman and Greg Brockman were reinstated. The letter read:
On the face of it, the employee response makes Sam look like an unusually loved CEO, with a high level of trust and support within the company. That seems to put the views of the employees in stark opposition to those of the board.
Helen needs to explain the employee response in a different light. In particular, she needs to say that the employees were not sincerely expressing a belief that Sam Altman is the right person to lead the company to safe AGI. Or, alternatively, she needs to concede that the employees were expressing this belief, but explain why they were wrong.
1. The situation was incorrectly portrayed to employees
Presumably Sam and his allies tried to shape the narrative to their advantage. It would be weird if they didn't.
Let's assume Helen is suggesting foul play here, rather than just describing what happened. So the claim is that employees were told a misleading "you have two options" story. I'm not sure what to make of this, because the two-options framing seems to have mainly been underwritten by the fact that so many employees threatened to quit.
Were employees misled in order to create support for this framing? For example, they might have been told that, whether or not they threatened to quit, major stakeholders would pull the plug on a "no Sam OpenAI" regardless. If a claim like that were made—and it were untrue—we'd have grounds for a charge of foul play. Helen doesn't specify, and I've not seen public evidence on this, so I'll just move on.
My take: Shrug.
2. Employees were acting out of sentiment, self-interest and fear
So: many employees were motivated by sentiment ("they loved their team", "they cared about the work they were doing") and/or self-interest ("about to make a lot of money", "they didn't want to lose their job") and/or fear ("they were really afraid of what might happen to them").
Sounds plausible! Humans gonna human. The key claim here is that the employees' support for Sam was so dictated by these motives that it gives us little or no signal about the merits of the board's decision (by the lights of the board's mission to ensure that OpenAI creates AI that benefits all of humanity). This is a strong claim, and Helen does not establish it.
My take: Surely true in part. Employee support for Sam is only modest evidence that the board's decision was wrong.
Aside: my friends are persuasive, my enemies are manipulative
The boundary between persuasion and manipulation is blurry. When we're mad at someone, we're much more likely to interpret an attempt at persuasion as an attempt at manipulation (thanks to Kat Woods for this point).
Did Sam get himself reinstated via ethical persuasion or unethical manipulation? Have you seriously considered the first possibility?
Claim 5. Sam was fired from Y Combinator
Helen seems to be referring to an article in the Washington Post, published on November 22, 2023, titled "Sam Altman’s been fired before. The polarizing past of OpenAI’s reinstated CEO." The article cites three anonymous sources claiming that Altman was "asked to leave".
Helen seems to have missed The Wall Street Journal's article, published on December 26, 2023, in which Paul Graham said "it would be wrong to use the word 'fired'".
In response to Helen's claim, Paul Graham issued a clarification:
Should we trust Paul and Jessica on this? I say: "yes". If Sam was fired from YC, it's hard to see why they would want to go to bat for him. Is Sam coercing them somehow? This seems unlikely… they are rich and powerful, and both Paul and Jessica seem like the kind of people who would react very aggressively to such an attempt. Sam would need to have something very good on them.
Paul Graham's December statement was not prominently reported, so it's understandable that Helen could have missed it. However—a Google search for "altman y combinator fired" would have surfaced the article, so it seems like she didn't fact-check this point before the interview.
My takeaway: The claim is false. Mistakes happen, but this is a big one, in the circumstances.
Claim 6. Senior managers at Loopt asked the board to fire Altman (twice).
Helen continues:
The source for this claim appears to be a December 2023 article in the Wall Street Journal, which reports:
This article appears to be the first time that the story about events at Loopt entered the public domain.
My takeaway: I'd like to know more. But this is anonymous testimony provided more than 10 years after the fact. The most common cause of startup failure is: people falling out with each other. And I generally don't update much on claims like these. I won't read much into this until further details emerge.
Claim 7. This wasn't a problem specific to the personalities on the board
On priors, I have quite a high credence on this theory. Personality differences partly or fully explain a lot of disagreements.
One reason for insisting on hard evidence of serious misconduct is that such evidence can be persuasive to a wide range of personality types.
My takeaway: I'm at 2/5 that personality differences do explain most of the “breakdown in relationship” between Sam and the board. My credence on this is not mostly based on the fact that Sam has made this claim. Helen didn't update me much either way on this.
Reviewing the claims
Here's a review:
(1.1) The claim is true, but not clearly a big deal. It would be a big deal if it violated a specific rule or expectation set by the board, but no such rule or expectation has been made public.
(1.2) Shrug. Sam has not directly claimed, at least in public, that he has no financial interest in the company; and OpenAI claims that his ownership of the Startup Fund did not entail financial interest anyway.
(1.3) Could be a big deal, but we've no details or non-testimonial evidence.
(1.4) Equivocal. It's a big deal if true, but the evidence is far from decisive. If anything, the balance of public evidence suggests that Sam did not make an egregious attempt to mislead the board.
(1.5) Shrug. I believe that the board members reached this conclusion; the question is whether it was justified.
(2) He said, she said. Overall, I'm equivocal. Sutskever's behaviour worries me though. A man under intense pressure, for sure. A victim of Altman's "psychological abuse"? Maybe…?
(3) Yes, people normally resist attempts to fire them. The board was right to worry about this. I'd like to know more about what Sam did during the days between his firing and his reinstatement.
(4.1) Shrug.
(4.2) Surely true in part. Employee support for Sam is only modest evidence that the board's decision was wrong.
(5) The claim is false. Mistakes happen, but this is a big one, in the circumstances.
(6) I'd like to know more, but this is anonymous testimony provided more than 10 years after the fact. I won't read much into it until further details emerge.
(7) Not established. I'm at 2/5 that personality differences do explain most of the "breakdown in relationship" between Sam and the board; Helen didn't update me much either way.
So, how did I update on Helen's interview?
In short: not much!
She didn't show me a smoking gun.
Claims (1.1) to (1.5) could become smoking guns if further evidence comes out. For now I'm agnostic.
Claim (2) is disputed, without evidence to settle it. Claim (3) doesn't tell me much.
The aggregate of (1.1) to (3) gives me a modest "where there's smoke, there's fire" update against Sam.
Claims (4.1) and (4.2) are plausible, but I already believed something along those lines.
Claim (5) is false.
Claim (6) doesn't tell me much.
Claim (7) was not established.
Overall, the interview somewhat reduced my confidence in Helen's assessment of Sam. The main reasons are: claim (5) is false and was easy to fact-check; and the examples she was able to share fall short of a smoking gun.
My modest negative update against Helen's assessment of Sam was larger than my "where there's smoke, there's fire" update against Sam. So, on net, Helen's interview gave me a small positive update on Sam.
My overall view on Sam Altman & x-risk
Arguing for my overall view on Sam would take ages, so I'll just share it for context.
From an AI x-risk perspective, I currently think that having Sam as CEO of a frontier AI lab is something like: 1/3 chance net positive, 1/3 chance neutral, 1/3 chance net negative. My error bars are wide.
My view is based almost entirely on public information. Weigh it accordingly.
A major limitation of this post is that there are very many other things you should consider in order to form your overall view on Sam.
I take a significantly dimmer view of Sam Altman than I did a month ago, partly due to Kelsey Piper's revelations about the NDAs, non-disparagement and equity threats, and partly due to revelations from Jan Leike and Leopold Aschenbrenner.
Views are my own. I have no direct financial interest in OpenAI, sadly. I own some MSFT and NVDA.
Thanks to Rob Bensinger for comments on a draft of this post. Note that he disagrees with many parts of it.
Appendix 1. Some ways I might be getting this wrong
1. Helen and the board emphasise a "pattern of behaviour", so I should not be looking for a smoking gun.
I'm sympathetic to this. But I'd be much more sympathetic if there were lots of solid evidence of medium-grade misconduct. There's some, but much of it is frustratingly ambiguous. The negative evidence we do have contributes to my 2/3 credence on Sam's leadership of a frontier lab being neutral or net negative for x-risk.
2. Many examples of OpenAI's macro-strategy over the past few years support the board's decision to fire Sam, e.g. because the actual strategy is incongruous with the stated aims.
We can give a bunch of examples either way on this. That's a huge topic, and I won't try to tackle it here.
FWIW, though: I think that some people give Sam insufficient credit for his—seemingly deliberate—contribution to the huge Overton Window shift that happened during 2023.
3. This is all based on public information. Private information may paint a very different story.
Could be! Some definitely think so.
Personally, I have very little private information—that may partly or mostly explain why my views on Sam are more equivocal than others.
4. This whole post is a distraction from more important events.
My main reservation about this post is that maybe the stuff I've been looking at is relatively small potatoes. That is, there are far more significant public events that we can assess and update on (e.g. the NDA stuff, the 20% compute thing, etc etc etc…).
So, the worry goes, perhaps I've just written something that's basically a distraction. If you want your life back after reading this post, I apologise.
Appendix 2. Transcript of Helen Toner's TED podcast
Interviewer: Welcome to the show.
Helen Toner: Hey, good to be here.
Interviewer: So Helen, a few weeks back at TED in Vancouver, I got the short version of what happened at OpenAI last year. I'm wondering, can you give us the long version?
Toner: As a quick refresher on the context here, the OpenAI board was not a normal board. It's not a normal company. The board is a nonprofit board that was set up explicitly for the purpose of making sure that the company's public good mission was primary, was coming first over profits, investor interests, and other things. But for years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board.
Interviewer: At this point, everyone always says, "Like what? Give me some examples."
Toner: And I can't share all the examples, but to give a sense of the kind of thing that I'm talking about, it's things like when ChatGPT came out November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter. Sam didn't inform the board that he owned the OpenAI startup fund, even though he constantly was claiming to be an independent board member with no financial interest in the company. On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change. And then a last example that I can share, because it's been very widely reported, relates to this paper that I wrote, which has been, I think, way overplayed in the press.
Interviewer: For listeners who didn't follow this in the press, Helen had co-written a research paper last fall intended for policymakers. I'm not going to get into the details, but what you need to know is that Sam Altman wasn't happy about it. It seemed like Helen's paper was critical of OpenAI and more positive about one of their competitors, Anthropic. It was also published right when the Federal Trade Commission was investigating OpenAI about the data used to build its generative AI products. Essentially, OpenAI was getting a lot of heat and scrutiny all at once.
Toner: The way that played into what happened in November is pretty simple. It had nothing to do with the substance of this paper. The problem was that after the paper came out, Sam started lying to other board members in order to try and push me off the board. It was another example that really damaged our ability to trust him. It actually only happened in late October last year when we were already talking pretty seriously about whether we needed to fire him.
There's more individual examples. For any individual case, Sam could always come up with some kind of innocuous sounding explanation of why it wasn't a big deal or misinterpreted or whatever. The end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn't believe things that Sam was telling us. That's a completely unworkable place to be in as a board, especially a board that is supposed to be providing independent oversight over the company, not just helping the CEO to raise more money. Not trusting the word of the CEO who is your main conduit to the company, your main source of information about the company is just totally impossible.
Toner: That was kind of the background, the state of affairs coming into last fall. We had been working at the board level as best we could to set up better structures, processes, all that kind of thing to try and improve these issues that we had been having at the board level. Then mostly in October of last year, we had this series of conversations with these executives where the two of them suddenly started telling us about their own experiences with Sam, which they hadn't felt comfortable sharing before, but telling us how they couldn't trust him, about the toxic atmosphere he was creating. They used the phrase "psychological abuse," telling us they didn't think he was the right person to lead the company to AGI, telling us they had no belief that he could or would change, no point in giving him feedback, no point in trying to work through these issues. They've since tried to minimize what they told us, but these were not casual conversations. They're really serious to the point where they actually sent us screenshots and documentation of some of the instances they were telling us about of him lying and being manipulative in different situations. This was a huge deal. This was a lot.
Toner: We talked it all over very intensively over the course of several weeks and ultimately just came to the conclusion that the best thing for OpenAI's mission and for OpenAI as an organization would be to bring on a different CEO. Once we reached that conclusion, it was very clear to all of us that as soon as Sam had any inkling that we might do something that went against him, he would pull out all the stops, do everything in his power to undermine the board, to prevent us from even getting to the point of being able to fire him. We were very careful, very deliberate about who we told, which was essentially almost no one in advance other than obviously our legal team. That's what took us to November 17th.
Interviewer: Thank you for sharing that. Now, Sam was eventually reinstated as CEO with most of the staff supporting his return. What exactly happened there? Why was there so much pressure to bring him back?
Toner: Yeah, this is obviously the elephant in the room. Unfortunately, I think there's been a lot of misreporting on this. I think there were three big things going on that helped make sense of what happened here. The first is that really pretty early on, the way the situation was being portrayed to people inside the company was you have two options. Either Sam comes back immediately with no accountability, totally new board of his choosing, or the company will be destroyed. Those weren't actually the only two options, and the outcome that we eventually landed on was neither of those two options. But I get why not wanting the company to be destroyed got a lot of people to fall in line, whether because they were in some cases about to make a lot of money from this upcoming tender offer, or just because they loved their team, they didn't want to lose their job, they cared about the work they were doing. Of course, a lot of people didn't want the company to fall apart, us included.
Toner: The second thing I think it's really important to know that has really gone underreported is how scared people are to go against Sam. They had experienced him retaliating against people, retaliating against them for past instances of being critical. They were really afraid of what might happen to them. When some employees started to say, "Wait, I don't want the company to fall apart, let's bring back Sam," it was very hard for those people who had had terrible experiences to actually say that for fear that if Sam did stay in power as he ultimately did, that would make their lives miserable.
Toner: I guess the last thing I would say about this is that this actually isn't a new problem for Sam. If you look at some of the reporting that has come out since November, it's come out that he was actually fired from his previous job at Y Combinator, which was hushed up at the time. And then at his job before that, which was his only other job in Silicon Valley, his startup Loopt, apparently the management team went to the board there twice and asked the board to fire him for what they called deceptive and chaotic behavior. If you actually look at his track record, he doesn't exactly have a glowing trail of references. This wasn't a problem specific to the personalities on the board as much as he would love to portray it that way.
[Interview continues on other topics.]
[1] I wondered if I could rule this out based on the Wall Street Journal article. My thought was: it'd be weird to cite Helen anonymously as "people familiar with the situation" in an article based on an interview with Helen. I'm not familiar with journalistic norms here, but I guess an interviewee can opt to give particular statements anonymously, and these can be reported in the same article?
[2] So far as I can tell, Tasha is not on the public record making the specific claim "Sam misrepresented my perspective to other board members". In case you're wondering: Helen and Tasha's co-authored article in The Economist does not include the claim that Sam misrepresented Tasha's perspective.