Helen Toner was recently interviewed on the TED AI Show. In the first segment, she explains why the OpenAI board decided to fire Sam Altman (video, transcript).

What should we make of Helen's account?

In this post, I'll go through the interview and examine each of the claims made.

For the tl;dr, skip straight to the "reviewing the claims" section.


Claim 1. Altman withheld information, misrepresented things, and in some cases “outright lied” to the board.

After a bit of setup, the interview begins as follows:

Toner: For years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board.

Helen then lists five examples. Let's take them one-by-one.

1.1 The board was not informed in advance about ChatGPT

Toner: When ChatGPT came out November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter.

Remarking on Toner's comments in a subsequent interview, Altman appears to accept this claim. He explains his action by suggesting that the ChatGPT release was not ex ante above the bar to report to the board:

Altman: When we released ChatGPT, it was at the time called a low-key research preview. We did not expect what happened to happen, but we had of course talked a lot with our board about a release plan that we were moving towards. We had at this point had GPT-3.5, which ChatGPT was based on, available for about eight months. We had long since finished training GPT-4 and we were figuring out a sort of gradual release plan for that.

GPT-3.5 was indeed available via the API from March 2022.

It has previously been reported that the ChatGPT release was not expected to be a big deal. For example, The Atlantic:

The company pressed forward and launched ChatGPT on November 30. It was such a low-key event that many employees who weren’t directly involved, including those in safety functions, didn’t even realize it had happened. Some of those who were aware, according to one employee, had started a betting pool, wagering how many people might use the tool during its first week. The highest guess was 100,000 users.

Similarly, commenting on Toner's interview, Andrew Mayne remarked:

The base model for ChatGPT (GPT-3.5) had been publicly available since March 2022. ChatGPT was a much more aligned and "safe" version. I'm confused as to why this was such a big deal.

So: it is not clear that “the board found out on Twitter” implies misconduct on Altman's part.

The intuitive force of this claim comes from the explosive popularity of ChatGPT. That's a bit unfair on Sam, as everyone agrees that popularity was unexpected.

To make this a clear example of misconduct, we'd need evidence that the board set clear expectations which Sam then broke.

My takeaway: This claim is true. But, it's not clearly a big deal. It would be a big deal if it violated a specific rule or expectation set by the board, but the existence of something like that has not been made public.

1.2 Sam didn't inform the board that he owned the OpenAI Startup Fund, even though he “constantly” claimed to have no financial interest in the company

Toner: Sam didn't inform the board that he owned the OpenAI Startup Fund, even though he constantly was claiming to be an independent board member with no financial interest in the company.

Sam is widely known to be an active VC investor, so everyone knows that he has a bunch of indirect financial interests associated with running OpenAI. Presumably, the claim here is about direct financial interests.

Altman has no equity in OpenAI, and has often mentioned this in interviews. A typical example, reported by Fortune:

Sam Altman said his lack of equity in OpenAI, the $27 billion company he cofounded and helms as CEO, doesn't bother him because he already has "enough money." But the 38-year old techie acknowledged that being the world's unofficial A.I. kingpin comes with plenty of other perks.

"I still get a lot of selfish benefit from this," Altman said Thursday at the Bloomberg Tech Summit in San Francisco, in response to a question about having no ownership stake in the artificial intelligence startup he helped establish. Altman said that leading OpenAI provides advantages like having "impact," having access that puts him "in the room for interesting conversations," and having an "interesting life."

“This concept of having enough money is not something that is easy to get across to other people," Altman said.

Has Sam ever publicly said he has no financial interest in the company? I haven't found an example on Google. Perplexity, GPT-4o and Claude 3 Opus could not find an example either.

Has Sam ever strongly suggested that he has no financial interest in the company? I've seen some claims he did this at a Senate hearing, so here's the transcript (and video):

Sen. John Kennedy (R-LA): Please tell me in plain English, two or three reforms or regulations, if any, that you would implement if you were queen or king for a day.

[Other witnesses respond before Sam.]

Sam Altman:

Number one, I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards. Number two, I would create a set of safety standards focused on what you said in your third hypothesis as the dangerous capability evaluations. One example that we've used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a longer list of the things that we think are important there, but specific tests that a model has to pass before it can be deployed into the world. And then third, I would require independent audits. So not just from the company or the agency, but experts who can say the model is or is not in compliance with these stated safety thresholds and these percentages of performance on question X or Y.

Sen. John Kennedy (R-LA):

Would you be qualified to, to if we promulgated those rules, to administer those rules?

Sam Altman:

I love my current job.

Sen. John Kennedy (R-LA):

Cool. Are there people out there that would be qualified?

Sam Altman:

We'd be happy to send you recommendations for people out there. Yes.

Sen. John Kennedy (R-LA):

Okay. You make a lot of money, do you?

Sam Altman:

I make no… I get paid enough for health insurance. I have no equity in OpenAI.

Sen. John Kennedy (R-LA):

Really? Yeah. That's interesting. You need a lawyer.

Sam Altman:

I need a what?

Sen. John Kennedy (R-LA):

You need a lawyer or an agent.

Sam Altman:

I'm doing this cuz I love it.

Sen. John Kennedy (R-LA):

Thank you Mr. Chairman.

Does this look like a deliberate attempt to mislead the Senate about his financial interests in OpenAI?

A charitable read:

  1. The words Sam says are true. If we accept OpenAI's account of his relationship to the Startup Fund, they describe the full extent of his direct financial interests in OpenAI.
  2. The Senator's question is casual—a joke, even—riffing on the fact that Altman just declined his invitation to show interest in leaving OpenAI to become a regulator (see clip).
  3. Had Sam been asked a more formal question (e.g. "Mr Altman, please describe your financial interests in OpenAI") we could expect a more detailed answer—in particular, a mention of his indirect financial interests—but in the time-limited context of a Senator's question round, Sam's two-sentence reply seems fine.
  4. Altman is responding to an unexpected question, not raising the topic himself to make a show of it. 

An uncharitable read:

  1. Altman will have known there was a good chance that he'd be asked about his financial interest in OpenAI—he must have prepared a response.
  2. He should have said something like "I get paid enough for health insurance. I have no equity in OpenAI. But I am an active investor, so I have many indirect interests in the company."
  3. Given that his reply could be read as suggesting he has no direct or indirect financial interest in OpenAI, he should have clarified this at the time, or in writing afterwards.
  4. His manner while replying to the Senator's question is a bit odd (see clip), especially the head shaking. A tell?

My take: equivocal.

Can anyone find more compelling examples of Sam directly saying, or deliberately suggesting, that he has no financial interest in OpenAI?

If we can't, then it seems like Toner's claim that Altman "constantly was claiming to be an independent board member with no financial interest in the company" is not a fair representation of his public statements.

Sam may, however, have been saying different things to the board in private. Perhaps Helen is referring to private statements. If so, ideally she would make these public, to substantiate the claim.

So far we've just been discussing what Sam did or didn't say about his financial interests in OpenAI.

Next: did Sam have direct financial interests in OpenAI? 

OpenAI claims that while Sam owned the OpenAI Startup Fund, there was “no personal investment or financial interest from Sam”. 

Huh?

Well, in February 2024, OpenAI said: “We wanted to get started quickly and the easiest way to do that due to our structure was to put it in Sam's name. We have always intended for this to be temporary.” In April 2024 it was announced that Sam no longer owns the fund.

If we assume that OpenAI's story is true, we might nonetheless expect Sam to have flagged this situation to the board. The charitable interpretation is: his failure to do so was a mistake. The uncharitable interpretation is: this is an example of Sam's tendency to negligently or deliberately withhold information from the board.

Might OpenAI's story be false? "They would say that", right?

Well—maybe. I'd guess there are internal documents (e.g. emails) that'd clearly support or contradict OpenAI's statement. The statement was issued in February 2024, during an ongoing SEC investigation. So, it'd be quite a big risk to lie here.

My takeaway: Sam has not directly claimed he has no financial interest in the company, at least in public. OpenAI claims that his ownership of the Startup Fund did not entail financial interest, anyway. Overall: shrug.

1.3 Sam gave inaccurate information about formal safety processes

Toner: On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.

This is consistent with the board's "not consistently candid" statement. No further detail, or supporting non-testimonial evidence, has been provided.

My takeaway: Could be a big deal, but we've no details or non-testimonial evidence.

1.4 Sam lied to other board members while trying to remove Helen from the board

Toner: After the paper came out, Sam started lying to other board members in order to try and push me off the board. It was another example that really damaged our ability to trust him. It actually only happened in late October last year when we were already talking pretty seriously about whether we needed to fire him.

What lie(s) is Helen referring to? She does not specify, so let's assume she's talking about the following incident, as reported by the New York Times:

Mr. Altman called other board members and said Ms. McCauley wanted Ms. Toner removed from the board, people with knowledge of the conversations said. When board members later asked Ms. McCauley if that was true [that she wanted Ms. Toner removed], she said that was “absolutely false.”

The incident was also reported in Helen's December 2023 interview with the Wall Street Journal:

After publication, Altman confronted Toner, saying she had harmed OpenAI by criticizing the company so publicly. Then he went behind her back, people familiar with the situation said.

Altman approached other board members, trying to convince each to fire Toner. Later, some board members swapped notes on their individual discussions with Altman. The group concluded that in one discussion with a board member, Altman left a misleading perception that another member thought Toner should leave, the people said.

And also in the New Yorker:

Altman began approaching other board members, individually, about replacing [Toner]. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought,” the person familiar with the board’s discussions told me. “Things like that had been happening for years.”

So, the claim is: Sam lied to OpenAI board members to try to get Helen Toner removed from the board. Specifically, Sam told several board members that Tasha McCauley wanted Helen Toner removed from the board, and he knew this was untrue.

(Even more specifically: the WSJ says that Sam "left a misleading perception", while the New Yorker says that Sam "misrepresented" the situation. This is more ambiguous than alleging an "outright lie", but here I'm going to summarise the claim of all three accounts as "Sam lied".)

What evidence do we have to support this claim? In the quote above, the NYT cites "people with knowledge of the conversations". The WSJ cites "people familiar with the situation" and the New Yorker quotes “a person familiar with the board's discussions”.

So: we know that two or more people have anonymously given this account of events to a journalist.

Is it possible that the anonymous accounts come from just two people, and those people are Helen Toner and Tasha McCauley? Yes[1]. Is it likely? Dunno.

We can at least say: Helen is the only non-anonymous source who has said that Sam lied to the board while trying to get her removed from the board[2].

(Reminder: the four board members who signed the November statement stated that Sam was "not consistently candid" with the board, giving no further detail.)

Do we have any non-testimonial evidence (e.g. documentary evidence) to support this claim? In short: no.

Notable: the WSJ and New Yorker reports mention that the board members compared notes. So far, these notes have not been made public.

To sum up: we have testimony of Helen Toner and at least one other anonymous source. We don't have supporting non-testimonial evidence.

What is Altman's story? Sam responded to Toner in an interview on May 29th:

I respectfully, but very significantly disagree with her recollection of events. 

Presumably, Altman wants us to infer that he denies lying to the board.

Do we have evidence to support his denial?

Well, we know that the arguments made by the November board were not sufficient to convince other key stakeholders that Sam should go. What should we make of this?

Probably the most powerful stakeholder was Satya Nadella, who has an enormous financial interest in OpenAI. One might think that if Sam had been caught lying to the board, Nadella would not want to work with him. In fact, Nadella strongly supported Sam—offering Sam and the entire OpenAI team jobs at Microsoft in case OpenAI collapsed.

On the other hand, one might think that Nadella saw evidence of Sam lying to the board, but nevertheless decided that his interests were best served by keeping Sam as CEO.

Either scenario seems possible.

Shortly after the November weekend, OpenAI formed a "Special Committee" to investigate the events. In the words of Bret Taylor and Larry Summers:

Upon being asked by the former board (including Ms Toner and Ms McCauley) to serve on the new board, the first step we took was to commission an external review of events leading up to Mr Altman’s forced resignation. We chaired a special committee set up by the board, and WilmerHale, a prestigious law firm, led the review.

OpenAI's March 2024 summary of the WilmerHale report reads as follows:

On December 8, 2023, the Special Committee retained WilmerHale to conduct a review of the events concerning the November 17, 2023 removal of Sam Altman and Greg Brockman from the OpenAI Board of Directors and Mr. Altman’s termination as CEO. WilmerHale reviewed more than 30,000 documents; conducted dozens of interviews, including of members of OpenAI’s prior Board, OpenAI executives, advisors to the prior Board, and other pertinent witnesses; and evaluated various corporate actions.  

The Special Committee provided WilmerHale with the resources and authority necessary to conduct a comprehensive review. Many OpenAI employees, as well as current and former Board members, cooperated with the review process. WilmerHale briefed the Special Committee several times on the progress and conclusions of the review.  

WilmerHale evaluated management and governance issues that had been brought to the prior Board’s attention, as well as additional issues that WilmerHale identified in the course of its review. WilmerHale found there was a breakdown in trust between the prior Board and Mr. Altman that precipitated the events of November 17.  

WilmerHale reviewed the public post issued by the prior Board on November 17 and concluded that the statement accurately recounted the prior Board’s decision and rationales. WilmerHale found that the prior Board believed at the time that its actions would mitigate internal management challenges and did not anticipate that its actions would destabilize the Company. WilmerHale also found that the prior Board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners. Instead, it was a consequence of a breakdown in the relationship and loss of trust between the prior Board and Mr. Altman. WilmerHale found the prior Board implemented its decision on an abridged timeframe, without advance notice to key stakeholders, and without a full inquiry or an opportunity for Mr. Altman to address the prior Board’s concerns. WilmerHale found that the prior Board acted within its broad discretion to terminate Mr. Altman, but also found that his conduct did not mandate removal.  

So, according to WilmerHale, Altman's conduct "did not mandate removal". What does that mean, exactly?

In this context, "mandate" probably means "legally mandate". If Sam had been found to have lied in the way that's alleged, would that legally mandate his removal? After several conversations with ChatGPT, my IANAL conclusion is: maybe, maybe not. So: the "his conduct did not mandate removal" statement doesn't help me settle claim (1.4). Perhaps an expert in these kinds of things could read more into it.

Some people read "it was a consequence of a breakdown in the relationship and loss of trust between the prior Board and Mr. Altman" as a euphemism for "Sam Altman lied". But these words do not specify the reason for the loss of trust, nor whether it was justified.

Some people read the absence of an evaluative judgement from WilmerHale (e.g. "the loss of trust was / was not justified", or "his conduct did not warrant removal") as telling. My impression (shared by ChatGPT) is that law firm investigations usually just report facts and legal judgements, unless the client explicitly requests otherwise. Typically, the non-legal judgements are left to the client.

The full WilmerHale report was not made public. Is that suspicious? In The Economist, Toner and McCauley suggest that it is:

OpenAI relayed few specifics justifying this conclusion, and it did not make the investigation report available to employees, the press or the public.

My impression is that internal investigations are typically not released in public, and that the OpenAI summary was typical in its level of detail.

There's an irony to Toner and McCauley's criticism—the November board's communications were also criticised for lacking detail and failing to justify their actions.

How credible is the WilmerHale report? Did Altman—and/or other stakeholders with an interest in keeping Sam as CEO—have their thumb on the scale?

I've not found much to go on here. WilmerHale were appointed by the "Special Committee", namely Larry Summers and Bret Taylor:

The OpenAI Board convened a committee consisting of Bret Taylor and Larry Summers to oversee the review of recent events. The committee interviewed several leading law firms to conduct the review, and ultimately selected Anjan Sahni and Hallie B. Levin from WilmerHale.

WilmerHale appear to be a reputable law firm, although their actual name is "Wilmer Cutler Pickering Hale and Dorr", which is pretty ridiculous.

But yeah—does the outside view say that these things are usually a stitch-up? I don't know. I briefly searched for stats on the fraction of "independent investigations" that lead to CEOs getting fired, but couldn't find anything useful.

One might also wonder: can we trust OpenAI's March 2024 board to write an honest summary of the WilmerHale report? "Honest" in the sense of "no literal falsehoods"—my guess is "yes". "Honest" in the sense of "not deliberately misleading"—no. We should expect the March 2024 board to craft their summary of the WilmerHale report according to their own aims (much as any board would do).

So, what might have been omitted from the summary? If the WilmerHale report documented behaviour from Sam that the new board thought egregious, then they would have fired Sam. So we need to constrain our speculation to things which are bad, but not bad enough to undermine the board's support for Sam.

Who was on the OpenAI board when the conclusion of the WilmerHale investigation was announced?

  • Bret Taylor (chairman)
  • Lawrence Summers
  • Adam D'Angelo
  • Anonymous Microsoft employee (observer, no voting rights)

So: Bret Taylor and Larry Summers read the report and concluded that Sam is the right CEO for OpenAI. Adam D'Angelo may or may not have agreed (he'd have lost 2-1 on a vote).

On the same day, Sam Altman rejoined the board, and the following new board members were added:

  • Sue Desmond-Hellmann
  • Nicole Seligman
  • Fidji Simo

Presumably the three new arrivals also read the WilmerHale report. So we have at least five people who read the report and concluded that Sam is the right CEO for OpenAI. Probably we should count Satya Nadella as a sixth, even though Microsoft has an observer-only role.

So let's recap. The claim at stake is: Sam lied to OpenAI board members to try to get Helen Toner removed from the board. Specifically, Sam told several board members that Tasha McCauley wanted Helen Toner removed from the board, and he knew this was untrue.

The claim is asserted by Helen Toner and at least one other anonymous source. We don't have non-testimonial evidence to support the claim.

The claim is indirectly denied—or accepted yet seen as insufficiently damning—by power players in the November shenanigans (e.g. Satya Nadella), WilmerHale ("conduct did not mandate removal"), and at least five of the current board members. It's also indirectly denied by Sam ("very significantly disagree with her recollection").

My takeaway: Equivocal. It's a big deal if true, but the evidence is far from decisive. If anything, the balance of public evidence suggests that Sam did not make an egregious attempt to mislead the board.

1.5 There were more examples

Toner: There's more individual examples. For any individual case, Sam could always come up with some kind of innocuous sounding explanation of why it wasn't a big deal or misinterpreted or whatever. The end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn't believe things that Sam was telling us.

The board members who voted to fire Sam Altman are: Ilya Sutskever, Adam D'Angelo, Tasha McCauley and Helen Toner.

Here, Helen claims that all four of these people "came to the conclusion that [they] just couldn't believe things that Sam was telling [them]".

This claim is consistent with the board's original statement that Sam was not "consistently candid". It is consistent with the WilmerHale report ("breakdown of trust"). And Helen's account has not been disputed by any of the other three board members.

My takeaway: Shrug. I believe that the board members reached this conclusion—the question is whether it was justified.

Claim 2. Two executives said that Sam should not be CEO

Toner: Then mostly in October of last year, we had this series of conversations with these executives where the two of them suddenly started telling us about their own experiences with Sam, which they hadn't felt comfortable sharing before, but telling us how they couldn't trust him, about the toxic atmosphere he was creating.

They used the phrase "psychological abuse," telling us they didn't think he was the right person to lead the company to AGI, telling us they had no belief that he could or would change, no point in giving him feedback, no point in trying to work through these issues. They've since tried to minimize what they told us, but these were not casual conversations. They're really serious to the point where they actually sent us screenshots and documentation of some of the instances they were telling us about of him lying and being manipulative in different situations.

Several reports have suggested that the two executives were Mira Murati and Ilya Sutskever. Let's assume that is correct. Here's the New York Times:

Ms. Murati wrote a private memo to Mr. Altman raising questions about his management and also shared her concerns with the board. That move helped to propel the board’s decision to force him out, according to people with knowledge of the board’s discussions who asked for anonymity because of the sensitive nature of a personnel issue.

Around the same time, Ilya Sutskever, a co-founder and chief scientist of OpenAI, expressed similar worries, citing what he characterized as Mr. Altman’s history of manipulative behavior, the people said. Both executives described a hot-and-cold relationship with Mr. Altman. Though it was not clear whether they offered specific examples, the executives said he sometimes created a toxic work environment by freezing out executives who did not support his decisions, the people said.

Mr. Sutskever’s lawyer, Alex Weingarten, said claims that he had approached the board were “categorically false.”

Marc H. Axelbaum, a lawyer for Ms. Murati, said in a statement: “The claims that she approached the board in an effort to get Mr. Altman fired last year or supported the board’s actions are flat wrong. She was perplexed at the board’s decision then, but is not surprised that some former board members are now attempting to shift the blame to her.”

In a message to OpenAI employees after publication of this article, Ms. Murati said she and Mr. Altman “have a strong and productive partnership and I have not been shy about sharing feedback with him directly.”

She added that she did not reach out to the board but “when individual board members reached out directly to me for feedback about Sam, I provided it — all feedback Sam already knew,” and that did not mean she was “responsible for or supported the old board’s actions.”

So: Murati (via her lawyer) denies that she "approached the board in an effort to get Mr Altman fired". She also denies that she supported the board's decision to fire Sam. She confirms that she gave the board feedback about Sam, but notes that this was "all feedback Sam already knew".

Should we believe Murati? Well—it appears that her decisions were important for Sam's reinstatement as CEO. At one point, while Murati was interim CEO, anonymous sources told Bloomberg that she was planning to rehire Altman. Since November, Mira has continued in her role as CTO. She also remains a prominent figure (e.g. she hosted the GPT-4o demo). It seems unlikely, but not inconceivable, that this would happen if she had tried to get Sam fired.

Sutskever (via his lawyer) also denies that he approached the board (in the same NYT article, quoted above). He does not deny that he shared concerns about Sam with the board. And, of course, he voted for Sam's removal.

Sutskever's role in the November weekend—and his thoughts on what happened—remain unclear. He voted for Sam's removal, but on November 20th tweeted:

I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.

Then, on December 5th, he tweeted:

I learned many lessons this past month. One such lesson is that the phrase “the beatings will continue until morale improves” applies more often than it has any right to.

The tweet was deleted a few hours later.

Sutskever then disappeared from the public eye. He made no further public statements until May 15th, when he announced his resignation:

After almost a decade, I have made the decision to leave OpenAI.  The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm.  It was an honor and a privilege to have worked together, and I will miss everyone dearly.   So long, and thanks for everything.  I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.

The announcement was followed by a picture of him with his arms around Sam Altman and Greg Brockman.

This is all a bit weird.

Actually it's so weird, and so hard to read, that I'm just going to shrug and move on.

To wrap this section, let's get back to Helen's remarks. I'll quote them again here:

Toner: Then mostly in October of last year, we had this series of conversations with these executives where the two of them suddenly started telling us about their own experiences with Sam, which they hadn't felt comfortable sharing before, but telling us how they couldn't trust him, about the toxic atmosphere he was creating.

They used the phrase "psychological abuse," telling us they didn't think he was the right person to lead the company to AGI, telling us they had no belief that he could or would change, no point in giving him feedback, no point in trying to work through these issues.

They've since tried to minimize what they told us, but these were not casual conversations. They're really serious to the point where they actually sent us screenshots and documentation of some of the instances they were telling us about of him lying and being manipulative in different situations.

There's nothing from Sutskever to contradict Helen's account. However, Mira's account does appear to contradict Helen's: Mira claims she didn't want Sam fired, while Helen claims that both execs told her "they didn't think [Sam] was the right person to lead the company to AGI". Either Helen or Mira is not being candid here, or else Helen is referring to a conversation with some other executive, not Mira.

If Helen wanted to defend her account, she could release the "screenshots and documentation" provided by the execs, and any other meeting notes she took. Mira could do something similar. Of course, both of them may be constrained by legal or moral obligations. For now, we just have their testimony.

My takeaway: He said, she said. Overall, I'm equivocal.

Sutskever's behaviour worries me though. A man under intense pressure, for sure. A victim of Altman's "psychological abuse"? Maybe…? 

I've focussed on Helen's claim that two execs wanted Sam gone. I've ignored her claims that they reported "they couldn't trust him", a "toxic atmosphere" and "psychological abuse". The main reason is that I don't think we should update much on general claims of this kind. I explain why below.

Aside: how should we update on reports that staff are "scared of their CEO", that a CEO "creates a toxic culture", that a CEO "can't be trusted", etc?

Are OpenAI staff unusually afraid of Sam? Does Sam create an unusually toxic culture of fear, manipulation and lies?

My guess is that, for roughly all companies on the S&P 500, a journalist could easily find 5+ employees willing to anonymously testify that they are scared of their CEO, that the CEO is manipulative, that the CEO creates a "toxic culture", and so on. I think we should be basically unmoved by general claims like this. The things we should take seriously are (a) specific claims of misbehaviour supported by evidence, and (b) non-anonymous testimony from credible sources.

It would be great if someone wrote a post like this to review all the cases of (a) and (b). I started trying this myself, but I just don't have time. For now I'll just flag the most concerning instance of (b) that I've seen, namely this tweet by Geoffrey Irving:

my prior is strongly against Sam after working for him for two years at OpenAI:

1. He was always nice to me.

2. He lied to me on various occasions

3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)

Presumably, now that the NDA and non-disparagement paperwork has been relaxed, we'll see more people sharing their stories.

Claim 3. The board were afraid that Sam would undermine them if they tried to fire him

Toner: Once we reached that conclusion, it was very clear to all of us that as soon as Sam had any inkling that we might do something that went against him, he would pull out all the stops, do everything in his power to undermine the board, to prevent us from even getting to the point of being able to fire him. We were very careful, very deliberate about who we told, which was essentially almost no one in advance other than obviously our legal team. That's what took us to November 17th.

This is a plausible description of the beliefs of the four board members who decided to fire Sam.

It fits with many reported details of the November events, for example the fact that Altman had no advance warning, that Satya Nadella was not informed in advance, and that Murati was only informed the evening before.

Events proved that the board were justified in their belief. Is this damning for Sam? Do Sam's actions contradict his previous statements that the board should be able to fire him?

In short: no.

If a board tries to fire the CEO, the CEO doesn't have to just say "oh ok, I'm done". There are a bunch of acceptable ways in which the CEO can fight their board.

The November board had the ability to fire Sam. It didn't work out because they didn't make a persuasive case to key OpenAI staff and stakeholders.

In Sam's mind, the board was trying to fire him without good reason. So, he concludes, the problem is the board, not him. He might have been right about this.

All this is compatible with Sam sincerely believing that OpenAI should have a board that can fire him. The old board had that ability, and so does the new one. It's just an ability that is constrained by the need to avoid angering key staff and stakeholders so much that they threaten to destroy the company. That constraint on a board's power seems... normal?

The confusing thing here is that the effective power of a board to fire the CEO depends on the views of key stakeholders. If everyone except the board wants to keep the CEO, then the board has much less power than if everyone including the board wants the CEO gone. They'll have to talk some people around to their view.

It would be interesting to investigate what we know about what happened during the days when Sam was not CEO of OpenAI, with a view to evaluating Sam's behaviour during that period. I'm not going to do that here (except insofar as required to discuss claim 4, below).

My takeaway: yes, people normally resist attempts to fire them. The board was right to worry about this. I'd like to know more about what Sam did during the days between his firing and his reinstatement.

Claim 4. Why did OpenAI employees support Sam?

In the final part of her remarks on the November events, Toner offers her account of why OpenAI employees protested the board's decision so forcefully.

As a reminder: in response to the board's actions, some 95% of employees signed a letter threatening to quit unless Sam Altman and Greg Brockman were reinstated. The letter read:

To the Board of Directors at OpenAI,

OpenAI is the world’s leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.

The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.

When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.

The leadership team suggested that the most stabilizing path forward - the one that would best serve our mission, company, stakeholders, employees and the public - would be for you to resign and put in place a qualified board that could lead the company forward in stability.

Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.

On the face of it, the employee response makes Sam look like an unusually loved CEO, with a high level of trust and support within the company. That seems to put the views of the employees in stark opposition to those of the board.

Helen needs to explain the employee response in a different light. In particular, she needs to say that the employees were not sincerely expressing a belief that Sam Altman is the right person to lead the company to safe AGI. Or, alternatively, to concede that the employees were expressing this belief, but explain why they were wrong. 

1. The situation was incorrectly portrayed to employees

Toner: really pretty early on, the way the situation was being portrayed to people inside the company was you have two options. Either Sam comes back immediately with no accountability, totally new board of his choosing, or the company will be destroyed. Those weren't actually the only two options, and the outcome that we eventually landed on was neither of those two options. 

Presumably Sam and his allies tried to shape the narrative to their advantage. It would be weird if they didn't.

Let's assume Helen is suggesting foul play here, rather than just describing what happened. So the claim is that employees were told a misleading "you have two options" story. I'm not sure what to make of this, because the disjunct seems to have mainly been underwritten by the fact that so many employees threatened to quit.

Were employees misled in order to create support for this disjunct? For example, they might have been told that whether or not they threatened to quit, major stakeholders would pull the plug on a "no Sam OpenAI" regardless. If a claim like that were made—and it were untrue—we'd have grounds for a charge of foul play. Helen doesn't specify, and I've not seen public evidence on this, so I'll just move on.

My take: Shrug.

2. Employees were acting out of sentiment, self-interest and fear

Toner: But I get why not wanting the company to be destroyed got a lot of people to fall in line, whether because they were in some cases about to make a lot of money from this upcoming tender offer, or just because they loved their team, they didn't want to lose their job, they cared about the work they were doing. Of course, a lot of people didn't want the company to fall apart, us included.

The second thing I think it's really important to know that has really gone underreported is how scared people are to go against Sam. They had experienced him retaliating against people, retaliating against them for past instances of being critical. They were really afraid of what might happen to them. When some employees started to say, "Wait, I don't want the company to fall apart, let's bring back Sam," it was very hard for those people who had had terrible experiences to actually say that for fear that if Sam did stay in power as he ultimately did, that would make their lives miserable.

So: many employees were motivated by sentiment ("they loved their team", "they cared about the work they were doing") and/or self-interest ("about to make a lot of money", "they didn't want to lose their job") and/or fear ("they were really afraid of what might happen to them").

Sounds plausible! Humans gonna human. The key claim here is that the employees' support for Sam was so dictated by these motives that it gives us little or no signal about the merits of the board's decision (by the lights of the board's mission to ensure that OpenAI creates AI that benefits all of humanity). This is a strong claim, and Helen does not establish it.

My take: Surely true in part. Employee support for Sam is only modest evidence that the board's decision was wrong.

Aside: my friends are persuasive, my enemies are manipulative

The boundary between persuasion and manipulation is blurry. When we're mad at someone, we're much more likely to interpret an attempt at persuasion as an attempt at manipulation (thanks to Kat Woods for this point).

Did Sam get himself reinstated via ethical persuasion or unethical manipulation? Have you seriously considered the first possibility?

Claim 5. Sam was fired from Y Combinator

Toner: This actually isn't a new problem for Sam. If you look at some of the reporting that has come out since November, it's come out that he was actually fired from his previous job at Y Combinator, which was hushed up at the time.

Helen seems to be referring to an article in the Washington Post, which was published November 22 2023. The article cites three anonymous sources claiming that Altman was "asked to leave". The article was titled "Sam Altman’s been fired before. The polarizing past of OpenAI’s reinstated CEO."

Helen seems to have missed The Wall Street Journal's article, published December 26 2023, in which Paul Graham said "it would be wrong to use the word 'fired'".

In response to Helen's claim, Paul Graham issued a clarification:

I got tired of hearing that YC fired Sam, so here's what actually happened:

People have been claiming YC fired Sam Altman. That's not true. Here's what actually happened. For several years he was running both YC and OpenAI, but when OpenAI announced that it was going to have a for-profit subsidiary and that Sam was going to be the CEO, we (specifically Jessica) told him that if he was going to work full-time on OpenAI, we should find someone else to run YC, and he agreed. If he'd said that he was going to find someone else to be CEO of OpenAI so that he could focus 100% on YC, we'd have been fine with that too. We didn't want him to leave, just to choose one or the other.

Should we trust Paul and Jessica on this? I say: "yes". If Sam was fired from YC, it's hard to see why they would want to go to bat for him. Is Sam coercing them somehow? This seems unlikely… they are rich and powerful, and both Paul and Jessica seem like the kind of people who would react very aggressively to such an attempt. Sam would need to have something very good on them.

Paul Graham's December statement was not prominently reported, so it's understandable that Helen could have missed it. However—a Google search for "altman y combinator fired" would have surfaced the article, so it seems like she didn't fact-check this point before the interview.

My takeaway: The claim is false. Mistakes happen, but this is a big one, in the circumstances.

Claim 6. Senior managers at Loopt asked the board to fire Altman (twice).

Helen continues:

And then at his job before that, which was his only other job in Silicon Valley, his startup Loopt, apparently the management team went to the board there twice and asked the board to fire him for what they called deceptive and chaotic behavior.

The source for this claim appears to be a December 2023 article in the Wall Street Journal, which reports:

A group of senior employees at Altman’s first startup, Loopt—a location-based social-media network started in the flip-phone era—twice urged board members to fire him as CEO over what they described as deceptive and chaotic behavior, said people familiar with the matter. But the board, with support from investors at venture-capital firm Sequoia, kept Altman until Loopt was sold in 2012.

This article appears to be the first time that the story about events at Loopt entered the public domain.

My takeaway: I'd like to know more. But this is anonymous testimony provided more than 10 years after the fact. The most common cause of startup failure is: people falling out with each other. And I generally don't update much on claims like these. I won't read much into this until further details emerge.

Claim 7. This wasn't a problem specific to the personalities on the board

Toner: This wasn't a problem specific to the personalities on the board as much as he would love to portray it that way.

On priors, I have quite a high credence on the personality theory: personality differences partly or fully explain a lot of disagreements.

One reason for insisting on hard evidence of serious misconduct is that such evidence can be persuasive to a wide range of personality types.

My takeaway: I'm at 2/5 that personality differences do explain most of the “breakdown in relationship” between Sam and the board. My credence on this is not mostly based on the fact that Sam has made this claim. Helen didn't update me much either way on this. 

Reviewing the claims

Here's a review:

| Claim | My takeaway |
| --- | --- |
| 1. Altman withheld information, misrepresented things, and in some cases "outright lied" to the board. | |
| 1.1 The board was not informed in advance about ChatGPT. | This claim is true. But, it's not clearly a big deal. It would be a big deal if it violated a specific rule or expectation set by the board, but the existence of something like that has not been made public. |
| 1.2 Sam didn't inform the board that he owned the OpenAI Startup Fund even though he “constantly” claimed to have no financial interest in the company. | Sam has not directly claimed he has no financial interest in the company, at least in public. OpenAI claims that his ownership of the Startup Fund did not entail financial interest, anyway. Overall: shrug. |
| 1.3 Sam gave inaccurate information about formal safety processes. | Could be a big deal, but we've no details or non-testimonial evidence. |
| 1.4 Sam lied to other board members while trying to remove Helen from the board. | Equivocal. It's a big deal if true, but the evidence is far from decisive. If anything, the balance of public evidence suggests that Sam did not make an egregious attempt to mislead the board. |
| 1.5 There were more examples. | Shrug. I believe that the board members reached this conclusion—the question is whether it was justified. |
| 2. Two executives said that Sam should not be CEO | He said, she said. Overall, I'm equivocal. Sutskever's behaviour worries me though. A man under intense pressure, for sure. A victim of Altman's "psychological abuse"? Maybe…? |
| 3. The board were afraid that Sam would undermine them if they tried to fire him. | Yes, people normally resist attempts to fire them. The board was right to worry about this. I'd like to know more about what Sam did during the days between his firing and his reinstatement. |
| 4. Why did OpenAI employees support Sam? | |
| 4.1. The situation was incorrectly portrayed to employees. | Shrug. |
| 4.2. Employees were acting out of sentiment, self-interest and fear. | Surely true in part. Employee support for Sam is only modest evidence that the board's decision was wrong. |
| 5. Sam was fired from Y Combinator. | The claim is false. Mistakes happen, but this is a big one, in the circumstances. |
| 6. Senior managers at Loopt asked the board to fire Altman (twice). | I'd like to know more. But this is anonymous testimony provided more than 10 years after the fact. The most common cause of startup failure is: people falling out with each other. And I generally don't update much on claims like these. I won't read much into this until further details emerge. |
| 7. This wasn't a problem specific to the personalities on the board | I'm at 2/5 that personality differences do explain most of the "breakdown in relationship" between Sam and the board. My credence on this is not mostly based on the fact that Sam has made this claim. Helen didn't update me much either way on this. |

So, how did I update on Helen's interview?

In short: not much!

She didn't show me a smoking gun.

Claims (1.1) to (1.5) could become smoking guns if further evidence comes out. For now I'm agnostic.

Claim (2) is disputed, without evidence to settle it. Claim (3) doesn't tell me much.

The aggregate of (1.1) to (3) gives me a modest "where there's smoke, there's fire" update against Sam.

Claims (4.1) and (4.2) are plausible, but I already believed something along those lines.

Claim (5) is false.

Claim (6) doesn't tell me much.

Claim (7) was not established.

Overall, the interview somewhat reduced my confidence in Helen's assessment of Sam. The main reasons are:

(a) Still no smoking gun. 

(b) Helen incorrectly claimed that Sam was fired from Y Combinator. 

(c) Helen presented claims (1.1) and (1.2) as a big deal, but they may not have been.

(d) Helen presented the Loopt story with more confidence than the public evidence supports.

(e) Overall, this "come out swinging" interview was unimpressive, despite unlimited prep time.

My modest negative update against Helen's assessment of Sam was larger than my "where there's smoke, there's fire" update against Sam. So, on net, Helen's interview gave me a small positive update on Sam.

My overall view on Sam Altman & x-risk

Arguing for my overall view on Sam would take ages, so I'll just share it for context.

From an AI x-risk perspective, I currently think that having Sam as CEO of a frontier AI lab is something like: 1/3 chance net positive, 1/3 chance neutral, 1/3 chance net negative. My error bars are wide.

My view is based almost entirely on public information. Weigh it accordingly.

A major limitation of this post is that there are very many other things you should consider in order to form your overall view on Sam.

I take a significantly dimmer view of Sam Altman than I did a month ago, partly due to Kelsey Piper's revelations about the NDAs, non-disparagement and equity threats, and partly due to revelations from Jan Leike and Leopold Aschenbrenner.


Views are my own. I have no direct financial interest in OpenAI, sadly. I own some MSFT and NVDA.

Thanks to Rob Bensinger for comments on a draft of this post. Note that he disagrees with many parts of it.


Appendix 1. Some ways I might be getting this wrong

1. Helen and the board emphasise a "pattern of behaviour", so I should not be looking for a smoking gun.

I'm sympathetic to this. But I'd be much more sympathetic if there were lots of solid evidence of medium-grade misconduct. There's some, but much of it is frustratingly ambiguous. The negative evidence we do have contributes to my 2/3 credence on Sam's leadership of a frontier lab being neutral or net negative for x-risk.

2. Many examples of OpenAI's macro-strategy over the past few years support the board's decision to fire Sam, e.g. because the actual strategy is incongruous with the stated aims.

We can give a bunch of examples either way on this. That's a huge topic, and I won't try to tackle it here.

FWIW, though: I think that some people give Sam insufficient credit for his—seemingly deliberate—contribution to the huge Overton Window shift that happened during 2023.

3. This is all based on public information. Private information may paint a very different story.

Could be! Some definitely think so.

Personally, I have very little private information—that may partly or mostly explain why my views on Sam are more equivocal than others.

4. This whole post is a distraction from more important events.

My main reservation about this post is that maybe the stuff I've been looking at is relatively small potatoes. That is, there are far more significant public events that we can assess and update on (e.g. the NDA stuff, the 20% compute thing, etc etc etc…).

So, the worry goes, perhaps I've just written something that's basically a distraction. If you want your life back after reading this post, I apologise.

Appendix 2. Transcript of Helen Toner's TED podcast

Interviewer: Welcome to the show.

Helen Toner: Hey, good to be here.

Interviewer: So Helen, a few weeks back at TED in Vancouver, I got the short version of what happened at OpenAI last year. I'm wondering, can you give us the long version?

Toner: As a quick refresher on the context here, the OpenAI board was not a normal board. It's not a normal company. The board is a nonprofit board that was set up explicitly for the purpose of making sure that the company's public good mission was primary, was coming first over profits, investor interests, and other things. But for years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board.

Interviewer: At this point, everyone always says, "Like what? Give me some examples."

Toner: And I can't share all the examples, but to give a sense of the kind of thing that I'm talking about, it's things like when ChatGPT came out November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter. Sam didn't inform the board that he owned the OpenAI startup fund, even though he constantly was claiming to be an independent board member with no financial interest in the company. On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change. And then a last example that I can share, because it's been very widely reported, relates to this paper that I wrote, which has been, I think, way overplayed in the press.

Interviewer: For listeners who didn't follow this in the press, Helen had co-written a research paper last fall intended for policymakers. I'm not going to get into the details, but what you need to know is that Sam Altman wasn't happy about it. It seemed like Helen's paper was critical of OpenAI and more positive about one of their competitors, Anthropic. It was also published right when the Federal Trade Commission was investigating OpenAI about the data used to build its generative AI products. Essentially, OpenAI was getting a lot of heat and scrutiny all at once.

Toner: The way that played into what happened in November is pretty simple. It had nothing to do with the substance of this paper. The problem was that after the paper came out, Sam started lying to other board members in order to try and push me off the board. It was another example that really damaged our ability to trust him. It actually only happened in late October last year when we were already talking pretty seriously about whether we needed to fire him.

Toner: There's more individual examples. For any individual case, Sam could always come up with some kind of innocuous sounding explanation of why it wasn't a big deal or misinterpreted or whatever. The end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn't believe things that Sam was telling us. That's a completely unworkable place to be in as a board, especially a board that is supposed to be providing independent oversight over the company, not just helping the CEO to raise more money. Not trusting the word of the CEO who is your main conduit to the company, your main source of information about the company is just totally impossible.

Toner: That was kind of the background, the state of affairs coming into last fall. We had been working at the board level as best we could to set up better structures, processes, all that kind of thing to try and improve these issues that we had been having at the board level. Then mostly in October of last year, we had this series of conversations with these executives where the two of them suddenly started telling us about their own experiences with Sam, which they hadn't felt comfortable sharing before, but telling us how they couldn't trust him, about the toxic atmosphere he was creating. They used the phrase "psychological abuse," telling us they didn't think he was the right person to lead the company to AGI, telling us they had no belief that he could or would change, no point in giving him feedback, no point in trying to work through these issues. They've since tried to minimize what they told us, but these were not casual conversations. They're really serious to the point where they actually sent us screenshots and documentation of some of the instances they were telling us about of him lying and being manipulative in different situations. This was a huge deal. This was a lot.

Toner: We talked it all over very intensively over the course of several weeks and ultimately just came to the conclusion that the best thing for OpenAI's mission and for OpenAI as an organization would be to bring on a different CEO. Once we reached that conclusion, it was very clear to all of us that as soon as Sam had any inkling that we might do something that went against him, he would pull out all the stops, do everything in his power to undermine the board, to prevent us from even getting to the point of being able to fire him. We were very careful, very deliberate about who we told, which was essentially almost no one in advance other than obviously our legal team. That's what took us to November 17th.

Interviewer: Thank you for sharing that. Now, Sam was eventually reinstated as CEO with most of the staff supporting his return. What exactly happened there? Why was there so much pressure to bring him back?

Toner: Yeah, this is obviously the elephant in the room. Unfortunately, I think there's been a lot of misreporting on this. I think there were three big things going on that helped make sense of what happened here. The first is that really pretty early on, the way the situation was being portrayed to people inside the company was you have two options. Either Sam comes back immediately with no accountability, totally new board of his choosing, or the company will be destroyed. Those weren't actually the only two options, and the outcome that we eventually landed on was neither of those two options. But I get why not wanting the company to be destroyed got a lot of people to fall in line, whether because they were in some cases about to make a lot of money from this upcoming tender offer, or just because they loved their team, they didn't want to lose their job, they cared about the work they were doing. Of course, a lot of people didn't want the company to fall apart, us included.

Toner: The second thing I think it's really important to know that has really gone underreported is how scared people are to go against Sam. They had experienced him retaliating against people, retaliating against them for past instances of being critical. They were really afraid of what might happen to them. When some employees started to say, "Wait, I don't want the company to fall apart, let's bring back Sam," it was very hard for those people who had had terrible experiences to actually say that for fear that if Sam did stay in power as he ultimately did, that would make their lives miserable.

Toner: I guess the last thing I would say about this is that this actually isn't a new problem for Sam. If you look at some of the reporting that has come out since November, it's come out that he was actually fired from his previous job at Y Combinator, which was hushed up at the time. And then at his job before that, which was his only other job in Silicon Valley, his startup Loopt, apparently the management team went to the board there twice and asked the board to fire him for what they called deceptive and chaotic behavior. If you actually look at his track record, he doesn't exactly have a glowing trail of references. This wasn't a problem specific to the personalities on the board as much as he would love to portray it that way.

[Interview continues on other topics.]


  1. ^

      I wondered if I could rule this out based on the Wall Street Journal article. My thought was: it'd be weird to cite Helen anonymously as "people familiar with the situation" in an article based on an interview with Helen. I'm not familiar with journalistic norms here, but I guess an interviewee can opt to give particular statements anonymously, and these can be reported in the same article?
     

  2. ^

     So far as I can tell, Tasha is not on the public record making the specific claim “Sam misrepresented my perspective to other board members”. In case you're wondering: Helen and Tasha's co-authored article in The Economist does not include the claim that Sam misrepresented Tasha's perspective.

Comments

The WSJ article says the following:

The increasing amount of time Altman spent at OpenAI riled longtime partners at Y Combinator, who began losing faith in him as a leader. The firm’s leaders asked him to resign, and he left as president in March 2019.

Graham said it was his wife’s doing. “If anyone ‘fired’ Sam, it was Jessica, not me,” he said. “But it would be wrong to use the word ‘fired’ because he agreed immediately.” 

I don't think it's fair to say that claim 5 was knowably, obviously false at the time it was made, based on this.  The above two paragraphs really sound like "Sam Altman was fired from YCombinator".  Now, it's possible that the journalist who wrote this was engaging in selective quotation and the non-quoted sections are deliberately misleading.  This is compatible with PG's recent clarification on Twitter.  But I think it'd be stranger to read those two paragraphs and then believe that he wasn't fired, than to believe that he was fired.  In isolation, PG's rejection of the word "fired" because "he agreed immediately" is nonsensical.  Agreeing to be fired is still being fired.

I still have substantial uncertainty about what happened here.  "The firm’s leaders asked him to resign" is a pretty straightforward claim about reality written in the journalist's voice, and I would be somewhat surprised if the journalist knew that Paul & Jessica had (claimed to have) presented Sam with the "choose one" option and decided to describe that as "asked him to resign".  That's less "trying to give people a misleading impression" and more "lying about an obvious matter of fact".

In isolation, PG's rejection of the word "fired" because "he agreed immediately" is nonsensical. Agreeing to be fired is still being fired.

It is nonsensical to read it as not being fired even with pg's logic-chopping "clarification". They issued an ultimatum: step down from OA or be fired as YC CEO. He did not step down. Then he was fired as YC CEO. (And pulled shenanigans on the way out with the 'YC Chair' and 'advisor' business, further emphasizing that it was a firing.)