It is largely over.

The investigation into events has concluded, finding no wrongdoing anywhere.

The board has added four new members, including Sam Altman. There will still be further additions.

Sam Altman now appears firmly back in control of OpenAI.

None of the new board members had previously been mentioned on this blog, or were known to me at all.

They are mysteries with respect to AI. As far as I can tell, all three lack technical understanding of AI, and have no known prior opinions on or engagement with AI, AGI or AI safety of any kind, including existential risk.

Microsoft and investors indeed have so far come away without a seat. The new members also, however, lack known strong bonds to Altman, so this is not obviously a board fully under his control if there were to be another crisis. They now have the gravitas the old board lacked. One could reasonably expect the new board to be concerned with ‘AI Ethics’ broadly construed, in a way that could conflict with Altman, or with diversity, equity and inclusion.

One must also remember that the public is very concerned about AI existential risk when the topic is brought up, so ‘hire people with other expertise who have not looked at AI in detail yet’ does not mean the new board members will dismiss such concerns, although it could also be that they were picked because they don’t care. We will see.

Prior to the report summary and board expansion announcements, The New York Times put out an article leaking potentially key information, in ways that looked like an advance leak from at least one former board member, claiming that Mira Murati and Ilya Sutskever were both major sources of information driving the board to fire Sam Altman, while not mentioning other concerns. Mira Murati has strongly denied these claims and has the publicly expressed confidence and thanks of Sam Altman.

I continue to believe that my previous assessments of what happened were broadly accurate, with new events providing additional clarity. My assessments were centrally offered in OpenAI: The Battle of the Board, which outlines my view of what happened. Other information is also in OpenAI: Facts From a Weekend and OpenAI: Altman Returns.

This post covers recent events, completing the story arc for now. There remain unanswered questions, in particular what will ultimately happen with Ilya Sutskever, and the views and actions of the new board members. We will wait and see.

The New Board

The important question, as I have said from the beginning, is: Who is the new board?

We have the original three members, plus four more. Sam Altman is one very solid vote for Sam Altman. Who are the other three?

We’re announcing three new members to our Board of Directors as a first step towards our commitment to expansion: Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation, Nicole Seligman, former EVP and General Counsel at Sony Corporation and Fidji Simo, CEO and Chair of Instacart. Additionally, Sam Altman, CEO, will rejoin the OpenAI Board of Directors. 

Sue, Nicole and Fidji have experience in leading global organizations and navigating complex regulatory environments, including backgrounds in technology, nonprofit and board governance. They will work closely with current board members Adam D’Angelo, Larry Summers and Bret Taylor as well as Sam and OpenAI’s senior management. 

Bret Taylor, Chair of the OpenAI board, stated, “I am excited to welcome Sue, Nicole, and Fidji to the OpenAI Board of Directors. Their experience and leadership will enable the Board to oversee OpenAI’s growth, and to ensure that we pursue OpenAI’s mission of ensuring artificial general intelligence benefits all of humanity.”

Dr. Sue Desmond-Hellmann is a non-profit leader and physician. Dr. Desmond-Hellmann currently serves on the Boards of Pfizer and the President’s Council of Advisors on Science and Technology. She previously was a Director at Procter & Gamble, Meta (Facebook), and the Bill & Melinda Gates Medical Research Institute. She served as the Chief Executive Officer of the Bill & Melinda Gates Foundation from 2014 to 2020. From 2009-2014 she was Professor and Chancellor of the University of California, San Francisco (UCSF), the first woman to hold the position. She also previously served as President of Product Development at Genentech, where she played a leadership role in the development of the first gene-targeted cancer drugs. 

Nicole Seligman is a globally recognized corporate and civic leader and lawyer. She currently serves on three public company corporate boards – Paramount Global, MeiraGTx Holdings PLC, and Intuitive Machines, Inc. Seligman held several senior leadership positions at Sony entities, including EVP and General Counsel at Sony Corporation, where she oversaw functions including global legal and compliance matters. She also served as President of Sony Entertainment, Inc., and simultaneously served as President of Sony Corporation of America. Seligman also currently holds nonprofit leadership roles at the Schwarzman Animal Medical Center and The Doe Fund in New York City. Previously, Seligman was a partner in the litigation practice at Williams & Connolly LLP in Washington, D.C., working on complex civil and criminal matters and counseling a wide range of clients, including President William Jefferson Clinton and Hillary Clinton. She served as a law clerk to Justice Thurgood Marshall on the Supreme Court of the United States.

Fidji Simo is a consumer technology industry veteran, having spent more than 15 years leading the operations, strategy and product development for some of the world’s leading businesses. She is the Chief Executive Officer and Chair of Instacart. She also serves as a member of the Board of Directors at Shopify. Prior to joining Instacart, Simo was Vice President and Head of the Facebook App. Over the last decade at Facebook, she oversaw the Facebook App, including News Feed, Stories, Groups, Video, Marketplace, Gaming, News, Dating, Ads and more. Simo founded the Metrodora Institute, a multidisciplinary medical clinic and research foundation dedicated to the care and cure of neuroimmune axis disorders and serves as President of the Metrodora Foundation.

This tells us who they are in some senses, and nothing in other important senses.

I did some quick investigation, including asking multiple LLMs and asking Twitter, about the new board members and the implications.

It is not good news.

The new board clearly looks like it represents an attempt to pivot:

  1. Towards legitimacy, legibility and credibility. Gravitas to outsiders.
  2. Towards legal and regulatory expertise, especially via Seligman.
  3. Towards traditional corporate concerns and profit maximization.
  4. Towards ‘AI Ethics’ broadly construed, with an emphasis on social impact, perhaps DEI.
  5. Away from people who understand the technology behind AI.
  6. Away from people concerned about existential risk.

I have nothing against any of these new board members. But neither do I have anything for them. I have perhaps never previously heard their names.

We do have this tiny indirect link to work with, I suppose, although it is rather generic praise indeed.


Otherwise, this was the most positive thing anyone had to say; no one had anything more detailed than this in any direction:

Nathan Helm-Burger: Well, they all sound competent, charitably inclined, and agentic.

Hopefully they are also canny, imaginative, cautious, forward-thinking and able to extrapolate into the future… Those attributes are harder to judge from their bios.

The pivot away from any technological domain knowledge, towards people who know other areas instead, is the most striking. There is deep expertise in non-profits and big corporations, with legal and regulatory issues, with major technologies and so on. But these people (presumably) don’t know AI, and seem rather busy in terms of getting up to speed on what they may or may not realize is the most important job they will ever have. I don’t know their views on existential risk because none of them have, as far as I know, expressed such views at all. That seems not to have been a concern here at all.

Contrast this with a board that included Toner, Sutskever and Brockman along with Altman. The board will not have anyone on it who can act as a sanity check on technical claims or risk assessments from Altman; perhaps D’Angelo will be the closest thing left to that. No one will be able to evaluate claims, express concerns, and ensure everyone has the necessary facts. It is not only Microsoft that got shut out.

So this seems quite bad. In a normal situation, if trouble does not find them, they will likely let Altman do whatever he wants. However, with only Altman as a true insider, if trouble does happen then the results will be harder to predict or control. Altman has in key senses won, but from his perspective he should worry he has perhaps unleashed a rather different set of monsters.

This is the negative case in a nutshell:

Jaeson Booker: My impression is they’re a bunch of corporate shills, with no knowledge of AI, but just there to secure business ties and use political/legal leverage.

And there was also this in response to the query in question:

Day to day, Altman has a free hand; these are a bunch of busy business people.

However, that is not the important purpose of the board. The board is not there to give you prestige or connections. Or rather, it is partly for that, but that is the trap that prestige and connections lay for us.

The purpose of the board is to control the company. The purpose of the board is to decide whether to fire the CEO, and to choose future iterations of the board.

The failure to properly understand this before is part of how things got to this point. If the same mistake is repeating itself, then so be it.

The board previously intended to have a final size of nine members. Early indications are that the board is likely to further expand this year.

I would like to see at least one person with strong technical expertise other than Sam Altman, and at least one strong advocate for existential risk concerns.

Altman would no doubt like to see Brockman come back, and to secure those slots for his loyalists generally as soon as possible. A key question is whether this board lets him do that.

One also notes that they appointed three women on International Women’s Day.

The Investigation Probably Was Not Real

Everyone is relieved to put this formality behind them, Sam Altman most of all. Even if the investigation was never meant to go anywhere, it forced everyone involved to be careful. That danger has now passed.

Washington Post: In a summary OpenAI released of the findings from an investigation by the law firm WilmerHale into Altman’s ouster, the law firm found that the company’s previous board fired Altman because of a “breakdown in the relationship and loss of trust between the prior board and Mr. Altman.” Brockman, Altman’s close deputy, was removed from OpenAI’s board when the decision to fire the CEO was announced.

The firm did not find any problems when it came to OpenAI’s product safety, finances or its statements to investors, OpenAI said. The Securities and Exchange Commission is probing whether OpenAI misled its investors.

As part of the board announcement, Altman and Taylor held a short conference call with reporters. The two sat side by side against a red brick wall as Taylor explained the law firm’s review and how it found no evidence of financial or safety wrongdoing at the company. He referred to the CEO sitting next to him as “Mr. Altman,” then joked about the formality of the term.

“I’m pleased this whole thing is over,” Altman said. He said he was sorry for how he handled parts of his relationship with a prior board member. “I could have handled that situation with more grace and care. I apologize for that.”

Indeed, here is their own description of the investigation, in full, bold is mine:

On December 8, 2023, the Special Committee retained WilmerHale to conduct a review of the events concerning the November 17, 2023 removal of Sam Altman and Greg Brockman from the OpenAI Board of Directors and Mr. Altman’s termination as CEO. WilmerHale reviewed more than 30,000 documents; conducted dozens of interviews, including of members of OpenAI’s prior Board, OpenAI executives, advisors to the prior Board, and other pertinent witnesses; and evaluated various corporate actions.

The Special Committee provided WilmerHale with the resources and authority necessary to conduct a comprehensive review. Many OpenAI employees, as well as current and former Board members, cooperated with the review process. WilmerHale briefed the Special Committee several times on the progress and conclusions of the review.

WilmerHale evaluated management and governance issues that had been brought to the prior Board’s attention, as well as additional issues that WilmerHale identified in the course of its review. WilmerHale found there was a breakdown in trust between the prior Board and Mr. Altman that precipitated the events of November 17.

WilmerHale reviewed the public post issued by the prior Board on November 17 and concluded that the statement accurately recounted the prior Board’s decision and rationales. WilmerHale found that the prior Board believed at the time that its actions would mitigate internal management challenges and did not anticipate that its actions would destabilize the Company. WilmerHale also found that the prior Board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners. Instead, it was a consequence of a breakdown in the relationship and loss of trust between the prior Board and Mr. Altman. WilmerHale found the prior Board implemented its decision on an abridged timeframe, without advance notice to key stakeholders, and without a full inquiry or an opportunity for Mr. Altman to address the prior Board’s concerns. WilmerHale found that the prior Board acted within its broad discretion to terminate Mr. Altman, but also found that his conduct did not mandate removal.

After reviewing the WilmerHale findings, the Special Committee recommended to the full Board that it endorse the November 21 decision to rehire Mr. Altman and Mr. Brockman. With knowledge of the review’s findings, the Special Committee expressed its full confidence in Mr. Altman and Mr. Brockman’s ongoing leadership of OpenAI.

The Special Committee is pleased to conclude this review and looks forward to continuing with the important work of OpenAI.

And consider this tidbit from The Washington Post:

One person familiar with the investigation who had been interviewed by the firm said WilmerHale did not offer a way to confidentially share relevant information.

If Altman is doing the types of things people say he is doing, and you do not offer a confidential way to share relevant information, that tells me you are not so interested in finding wrongdoing by Sam Altman.

Taken together with the failure to offer confidentiality, it would be difficult for a summary statement to scream the message any louder: ‘we wanted this all to go away quietly and had no interest in a real investigation if one could be avoided.’ There was still the possibility that one could not be avoided, if the gun was sufficiently openly smoking. That turned out not to be the case. So instead, they are exonerating the board in theory, saying it messed up in practice, and moving on.

One detail that I bolded is that the board did not anticipate that firing Altman would destabilize OpenAI, that they thought he would not fight back. If true, then in hindsight this looks like a truly epic error on their part.

But what if it wasn’t, from their perspective at the time? Altman had a very real choice to make.

  1. Destabilize OpenAI and risk its disintegration to fight for his job, in a way that very rarely happens when CEOs are fired by their boards. Consider the removals at Uber and WeWork.
  2. Do what he said he would do at the initial board meeting and help with the transition, then move on to his next thing. Maybe raise a ton of money for chip factories or energy production, or a rival AI company. Plenty of people would have rushed to fund him, and then he’d have founder equity.

Altman decided to fight back in a way rarely seen, and in a way he did not do at YC when he was ejected there. He won, but there was real risk that OpenAI could have fallen apart. He was counting on the board both to botch the fight, and then to cave rather than let OpenAI potentially fail. And yes, he was right, but that does not mean it wasn’t a gamble, or that he was sure to make it.

I still consider this a massive error by the board, for five reasons.

  1. Character is fate. Altman has a history and reputation of being excellent in and relishing such fights. Altman was going to fight if there was a way to fight.
  2. The stakes are so high. OpenAI potentially is fate-of-the-world level stakes.
  3. There was little lock-in. OpenAI is nothing without its people or relationship with Microsoft, in a way that is not true for most companies.
  4. Altman had no equity. He loses nothing if the company is destroyed.
  5. The board knew it was in other ways in a precarious position: it lacked gravitas, the trust of the employees, the full support of its new interim CEO, and a secured strong pick for a permanent replacement, and it was unwilling to justify its actions. This was a much weaker hand than normal.

Yes, some of that is of course hindsight. There were still many reasons to realize this was a unique situation, where Altman would be uniquely poised to fight.

Early indications are that we are unlikely to see the full report this year.

The New York Times Leak and Gwern’s Analysis of It

Prior to the release of the official investigation results and the announcement of the board expansion, the New York Times reported new information, including some that seems hard not to have come from at least one former board member. Gwern then offered perspective analyzing what it meant that this article was published while the final report was not yet announced.

Gwern’s take was that Altman had previously had a serious threat to worry about with the investigation, it was not clear he would be able to retain control. He was forced to be cautious, to avoid provocations.

We discussed this a bit, and I was convinced that Altman had more reason than I realized to be worried about this at the time. Even though Summers and Taylor were doing a mostly fake investigation, it was only mostly fake, and you never know what smoking guns might turn up. Plus, Altman could not be confident it was mostly fake until late in the game, because if Summers and Taylor were doing a real investigation they would have every reason not to tip their hand. Yes, no one could talk confidentially as far as Altman knew, but who knows what deals could be struck, or who might be willing to risk it anyway?

That, Gwern reported, is over now. Altman will be fully in charge. Mira Murati (who we now know was directly involved, not simply someone initially willing to be interim CEO) and Ilya Sutskever will have their roles reduced or leave. The initial board might not formally be under Altman’s control but will become so over time.

I do not give the investigation as much credit for being a serious threat as Gwern does, but certainly it was a good reason to exercise more caution in the interim to ensure that remained true. Also Mira Murati has denied the story, and has at least publicly retained the confidence and thanks of Altman.

Here is Gwern’s full comment:

Gwern: An OA update: it’s been quiet, but the investigation is about over. And Sam Altman won.

To recap, because I believe I haven’t been commenting much on this since December (this is my last big comment, skimming my LW profile):

WilmerHale was brought in to do the investigation. The tender offer, to everyone’s relief, went off. A number of embarrassing new details about Sam Altman have surfaced: in particular, about his enormous chip fab plan with substantial interest from giants like Sematek, and how the OA VC Fund turns out to be owned by Sam Altman (his explanation was it saved some paperwork and he just forgot to ever transfer it to OA).

Ilya Sutskever remains in hiding and lawyered up. There have been increasing reports the past week or two that the WilmerHale investigation was coming to a close – and I am told that the investigators were not offering confidentiality and the investigation was narrowly scoped to the firing. (There was also some OA drama with the Musk lawfare & the OA response, but aside from offering an object lesson in how not to redact sensitive information, it’s irrelevant and unimportant.)

The news today comes from the NYT leaking information from the final report: “Key OpenAI Executive [Mira Murati] Played a Pivotal Role in Sam Altman’s Ouster”.

The main theme of the article is clarifying Murati’s role: as I speculated, she was in fact telling the Board about Altman’s behavior patterns, and it fills in that she had gone further and written it up in a memo to him, and even threatened to leave with Sutskever.

But it reveals a number of other important claims: the investigation is basically done and wrapping up. The new board apparently has been chosen. Sutskever’s lawyer has gone on the record stating that Sutskever did not approach the board about Altman (?!). And it reveals the board confronted Altman over his ownership of the OA VC Fund (in addition to all his many other compromises of interest).

So, what does that mean?

I think that what these indirectly reveal is simple: Sam Altman has won. The investigation will exonerate him, and it is probably true that it was so narrowly scoped from the beginning that it was never going to plausibly provide grounds for his ouster. What these leaks are, are a loser’s spoiler move: the last gasps of the anti-Altman faction, reduced to leaking bits from the final report to friendly media (Metz/NYT) to annoy Altman, and strike first. They got some snippets out before the Altman faction shops around highly selective excerpts to their own friendly media outlets (the usual suspects – The Information, Kara Swisher) from the final officialized report to set the official record (at which point the rest of the confidential report is sent down the memory hole). Welp, it’s been an interesting few months, but l’affaire Altman is over. RIP.

Evidence, aside from simply asking who benefits from these particular leaks at the last minute, is that Sutskever remains in hiding & his lawyer is implausibly denying he had anything to do with it, while if you read Altman on social media, you’ll notice that he’s become ever more talkative since December, particularly in the last few weeks – positively glorying in the instant memeification of ‘$7 trillion’ – as has OA PR* and we have heard no more rhetoric about what an amazing team of execs OA has and how he’s so proud to have tutored them to replace him. Because there will be no need to replace him now. The only major reasons he will have to leave is if it’s necessary as a stepping stone to something even higher (eg. running the $7t chip fab consortium, running for US President) or something like a health issue.

So, upshot: I speculate that the report will exonerate Altman (although it can’t restore his halo, as it cannot & will not address things like his firing from YC which have been forced out into public light by this whole affair) and he will be staying as CEO and may be returning to the expanded board; the board will probably include some weak uncommitted token outsiders for their diversity and independence, but have an Altman plurality and we will see gradual selective attrition/replacement in favor of Altman loyalists until he has a secure majority robust to at least 1 flip and preferably 2. Having retaken irrevocable control of OA, further EA purges should be unnecessary, and Altman will probably refocus on the other major weakness exposed by the coup: the fact that his frenemy MS controls OA’s lifeblood. (The fact that MS was such a potent weapon for Altman in the fight is a feature while he’s outside the building, but a severe bug once he’s back inside.)

People are laughing at the ‘$7 trillion’. But Altman isn’t laughing. Those GPUs are life and death for OA now. And why should he believe he can’t do it? Things have always worked out for him before…

Predictions, if being a bit more quantitative will help clarify my speculations here: Altman will still be CEO of OA on June 1st (85%); the new OA board will include Altman (60%); Ilya Sutskever and Mira Murati will leave OA or otherwise take on some sort of clearly diminished role by year-end (90%, 75%); the full unexpurgated non-summary report will not be released (85%, may be hard to judge because it’d be easy to lie about); serious chip fab/Tigris efforts will continue (75%); Microsoft’s observer seat will be upgraded to a voting seat (25%).

* Eric Newcomer (usually a bit more acute than this) asks “One thing that I find weird: OpenAI comms is giving very pro Altman statements when the board/WilmerHale are still conducting the investigation. Isn’t communications supposed to work for the company, not just the CEO? The board is in charge here still, no?” NARRATOR: “The board is not in charge still.”

That continues to be the actual key question in practice. Who controls the final board, and to what extent will that board be in charge?

We now know who that board will be, suggesting the answer is that Altman may not be in firm control of the board, but the board is likely to give him a free hand to do what he wants. For all practical purposes, Altman is back in charge until something happens.

The other obvious question is: What actually happened? What did Sam Altman do (or not do)? Why did the board try to fire Altman in the first place?

Whoever leaked to the Times, and their writers, have either a story or a theory.

From the NYT article: In October, Ms. Murati approached some members of the board and expressed concerns about Mr. Altman’s leadership, the people said.

She described what some considered to be Mr. Altman’s playbook, which included manipulating executives to get what he wanted. First, Ms. Murati said Mr. Altman would tell people what they wanted to hear to charm them and support his decisions. If they did not go along with his plans or if it took too long for them to make a decision, he would then try to undermine the credibility of people who challenged him, the people said.

Ms. Murati told the board she had previously sent a private memo to Mr. Altman outlining some of her concerns with his behavior and shared some details of the memo with the board, the people said.

Around the same time in October, Dr. Sutskever approached members of the board and expressed similar issues about Mr. Altman, the people said.

Some members of the board were concerned that Ms. Murati and Dr. Sutskever would leave the company if Mr. Altman’s behavior was not addressed. They also grew concerned the company would see an exodus of talent if top lieutenants left.

There were other factors that went into the decision. Some members were concerned about the creation of the OpenAI Startup Fund, a venture fund started by Mr. Altman.

You know what does not come up in the article? Any mention of AI safety or existential risk, any mention of Effective Altruism, any mention of board members other than Ilya (including Helen Toner), or any mention of an attempt by Altman to alter the board. This is all highly conspicuous by its absence, even the absence of any note of its absence.

There is also no mention of either Murati or Sutskever describing any particular incident. The things they describe are a pattern of behavior, a style, a way of being. Any given instance of it, any individual action, is easy to overlook and impossible to much condemn. It is only in the pattern of many such incidents that things emerge.

Or perhaps one can say, this was an isolated article about one particular incident. It was not a claim that anything else was unimportant, or that this was the central thing going on. And technically I think that is correct? The implication is still clear – if I left with the impression that this was being claimed, I assume other readers did as well.

But was that the actual main issue? Could this have been not only not about safety, as I have previously said, but also mostly not about the board? Perhaps the board was mostly trying to prevent key employees from leaving, and thought losing Altman was less disruptive?

What Do We Now Think Happened?

I find the above story hard to believe as a central story. The idea that the board did this primarily to avoid losing Murati and Sutskever, and the exodus they would cause, does not really make sense. Losing Murati and Sutskever would no doubt suck if it happened, but it is nothing compared to the risk of getting rid of Altman. Even in the best case, where he does leave quietly, you are likely to lose a bunch of other valuable people, starting with Brockman.

It only makes sense as a reason if it is one of many different provocations, a straw that breaks the camel’s back. In which case, it could certainly have been a forcing function, a reason to act now instead of later.

Another argument against this being central is that it doesn’t match the board’s explanation, or their (and Mira Murati’s at the time) lack of a further explanation.

It certainly does not match Mira Murati’s current story. A reasonable response is ‘she would spin it this way no matter what, she has to,’ but this totally rhymes with the way The New York Times is known to operate. Her story is exactly compatible with NYT operating at the boundaries of the laws of bounded distrust, and using them to paint what they think is a negative picture of a situation at a tech company (one that they are actively suing):

Mira Murati: Governance of an institution is critical for oversight, stability, and continuity. I am happy that the independent review has concluded and we can all move forward united. It has been disheartening to witness the previous board’s efforts to scapegoat me with anonymous and misleading claims in a last-ditch effort to save face in the media. Here is the message I sent to my team last night. Onward.

Hi everyone,

Some of you may have seen a NYT article about me and the old board. I find it frustrating that some people seem to want to cause chaos as we are trying to move on, but to very briefly comment on the specific claims there:

Sam and I have a strong and productive partnership and I have not been shy about sharing feedback with him directly. I never reached out to the board to give feedback about Sam. However, when individual board members reached out directly to me for feedback about Sam, I provided it, all feedback Sam already knew. That does not in any way mean that I am responsible for or supported the old board’s actions, which I still find perplexing. I fought their actions aggressively and we all worked together to bring Sam back.

Really looking forward to get the board review done and put gossip behind us.

(back to work)

I went back and forth on that question, but yes, I do think it is compatible, and indeed we can construct the kind of events that would allow one to technically characterize things the way NYT does, and also allow Mira Murati to say what she said without lying. Or, of course, either side could indeed be lying.

What we do know is that the board tried to fire Sam Altman once, for whatever combination of reasons. That did not work, and the investigation seems not to have produced any smoking guns and won’t bring him down, although the 85% from Gwern that Altman remains in charge doesn’t seem different from what I would have said when the investigation started.

OpenAI comms in general are not under the board’s control. That is clear. That is how this works. Altman gets to do what he wants. If the board does not like it, they can fire him. Except, of course, they can’t do that, not without strong justification, and ‘the comms are not balanced’ won’t cut it. So Altman gives out the comms he wants, tries to raise trillions, and so on.

I read all this as Altman gambling that he can take full effective control again and that the new board won’t do anything about it. He is probably right, at least for now, but also that is the kind of risk Altman runs and game he plays. His strategy has been proven to alienate those around him, to cause trouble, as one would expect if someone was pursuing important goals aggressively and taking risks. Which indeed is what Altman should do if he believes in what he is doing; you don’t succeed at this level by playing it safe, and you have to play the style and hand you are dealt. But he will doubtless take it farther than is wise. As Gwern puts it, why shouldn’t he? Things have always worked out before. People who push envelopes like this learn to keep pushing them until things blow up; they are not scared off for long by close calls.

Altman’s way of being and default strategy is all but designed to not reveal, to him or to us, any signs of trouble unless and until things do blow up. It is not a coincidence that the firing seemed to come out of nowhere the first time. One prediction is that, if Altman does get taken out internally, which I do not expect any time soon, it will once again look like it came out of nowhere.

Another conclusion this reinforces is the need for good communication, and the importance of not hiding behind your legal advisors. At the time everyone thought Mira Murati was the reluctant temporary steward of the board and as surprised as anyone, which is also a position she is standing by now. If that was not true, it was rather important to say it was not true.

Then, as we all know, whatever Murati’s initial position was, both Sutskever and Murati ultimately backed Altman. What happened after that? Sutskever remains in limbo months later.

Washington Post: “I love Ilya. I think Ilya loves OpenAI,” [Altman] said, adding that he hopes to work with the AI scientist for many years to come.

As far as we know, Mira Murati is doing fine, and focused on the work, with Sam Altman’s full support.

If those who oppose Altman get shut out, one should note that this is what you would expect. We all know the fate of those who come at the king and miss. Some think that they can then use the correct emojis, turn on their allies to let the usurper take power, and then be spared. You will never be spared for selling out like that.

So watching what happens to Murati is the strongest sign of what Altman thinks happened, as well as of whether Murati was indeed ready to leave; together that is likely our best evidence of what happened with respect to her.

Altman’s Statement

Sam Altman (on Twitter): I’m very happy to welcome our new board members: Fidji Simo, Sue Desmond-Hellmann, and Nicole Seligman, and to continue to work with Bret, Larry, and Adam.

I’m thankful to everyone on our team for being resilient (a great OpenAI skill!) and staying focused during a challenging time.

In particular, I want to thank Mira for our strong partnership and her leadership during the drama, since, and in all the quiet moments where it really counts. And Greg, who plays a special leadership role without which OpenAI would simply not exist. Being in the trenches always sucks, but it’s much better being there with the two of them. I learned a lot from this experience.

One thing I’ll say now: when I believed a former board member was harming OpenAI through some of their actions, I should have handled that situation with more grace and care. I apologize for this, and I wish I had done it differently. I assume a genuine belief in the crucial importance of getting AGI right from everyone involved.

We have important work in front of us, and we can’t wait to show you what’s next.

This is a gracious statement. 

It also kind of gives the game away in that last paragraph, with its non-apology, which I want to emphasize I very much appreciate. This is not fully candid, but it is more candor than we had a right to expect, and to that extent it is the good kind of candor.

Thus, we have confirmation that Sam Altman thought Helen Toner was harming OpenAI through some of her actions, and that he ‘should have handled that situation with more grace and care.’

This seems highly compatible with what I believe happened, which was that he attempted to use an unrelated matter to get her removed from the board, including misrepresenting the views of board members to other board members to try and get this to happen, and that this then came to light. The actions Altman felt were hurting OpenAI were thus presumably distinct from the actions Altman then tried to use to remove her. 

Even if I do not have the details correct there, it seems highly implausible, given this statement, that events related to his issues with Toner were not important factors in the board’s decision to fire Altman.

There is no mention at all of Ilya Sutskever here. It would have been the right place for Altman to extend an olive branch, if he wanted reconciliation so that Sutskever could focus on superalignment and keeping us safe, an assignment everyone should want him to have if he is willing. Instead, Altman continues to be silent on this matter.

Helen Toner and Tasha McCauley’s Statement

Whether or not either of them leaked anything to NYT, they issued a public statement as well.

Helen Toner and Tasha McCauley: OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. The OpenAI structure empowers the board to prioritize this mission above all else, including business interests.

Accountability is important in any company, but it is paramount when building a technology as potentially world-changing as AGI. We hope the new board does its job in governing OpenAI and holding it accountable to the mission. As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable.

There are a great many people doing important work at OpenAI. We wish them and the new board success.

That is not the sentiment of two people convinced everything is fine now. This is a statement that things are very much not fine, and that the investigation did not do its job, and that they lack faith in the newly selected board, although they may be hopeful.

Alas, they continue to be unable or unwilling to share any details, so there is not much more that one can say.

The Case Against Altman

Taking all this at face value, here is the simple case for why all this is terrible:

Jeffrey Ladish: I don’t trust Sam Altman to lead an AGI project. I think he’s a deeply untrustworthy individual, low in integrity and high in power seeking.

It doesn’t bring me joy to say this. I rather like Sam Altman. I like his writing, I like the way he communicates clearly, I like how he strives for what he believes is good.

But I know people who have worked with him. He lies to people, he says things to your face and says another thing behind your back. He has a reputation for this, though people are often afraid to talk about this openly. He is extremely good at what he does. He is extremely good at politics. He schemes to outmaneuver people within his own companies and projects. This is not the kind of person who can be trusted to lead a project that will shape the entire world and the entire future.

Elon Musk: !

Adnan Chaumette: What do you make of 95% of OpenAI staff fully disagreeing with you on this?

We’re talking about wealthy employees with great job prospects anywhere else, so I highly doubt there’s some financial motives for them to side with him as strongly as they did.

Jeffrey Ladish: There was a lot of pressure to sign, so I don’t think the full 95% would disagree with what I said

Also, unfortunately, equity is a huge motivator here. I know many great people at OpenAI and have a huge amount of respect for their safety work. But also, the incentives really suck, many people have a lot of money at stake

I too like Sam Altman, his writing and the way he often communicates. I would add I am a big fan of his medical and fusion efforts. He has engaged for real with the ideas that I consider most important; even if he has a different opinion, I know he takes the concerns seriously. Most of all, I would emphasize: He strives for what he believes is good. Yes, he will doubtless sometimes fool himself, as you are always the easiest one for you to fool, but it is remarkable how many people in his position do not remotely pass these bars.

I also have very strong concerns that we are putting a person whose highest stats are political maneuvering and deception, who is very high in power seeking, into this position. By all reports, you cannot trust what this man tells you.

I have even stronger concerns about him not having proper strong oversight, capable of reining in or firing him if necessary, and of understanding what is going on and forcing key decisions. I do not believe this is going to be that board.

There are also a number of very clear specific concerns.

Sam Altman is kind of trying to raise $7 trillion for chips and electrical power generation. This seems to go directly against OpenAI’s mission, and completely undercut the overhang argument for why AGI must be built quickly. You cannot both claim that AGI is inevitable because we have so many chips, so we need to rush forward before others do, and that we urgently need to build more chips so we can rush forward to AGI. Or you can, but you’re being disingenuous somewhere, at best. Also the fact that Altman owns the OpenAI venture fund while lacking equity in OpenAI itself is at least rather suspicious.

Sam Altman seems to have lied to board members about the views of other board members in an attempt to take control of the board, and this essentially represents him (potentially) succeeding in doing that. If you lie to the board in an attempt to subvert the board, you must be fired, full stop. I still think this happened.

I also think lying to those around him is a clear pattern of behavior that will not stop, and that threatens to prevent proper raising and handling of key safety concerns in the future. I do not trust a ‘whistleblower process’ to get around this. If Sam Altman operates in a way that prevents communication, and the SNAFU principle applies, that would be very bad for everyone even if Altman wants to proceed safely.

The choice of new board members, to the extent it was influenced by Altman, perhaps reflects an admirable refusal to select reliable allies (we do not know enough to know; it could also be that no reliable allies fitting the requirements for those slots were available), but it also seems to reflect a lack of desire for effective checks and for understanding of the technical and safety problems on the path to AGI. It seems vital to have someone other than Altman on the board with deep technical expertise, and someone who can advocate for technical existential risk concerns. With a seven (or ultimately nine) member board there is room for those people without such factions having control, yet they are seemingly not present.

The 95% rate at which employees signed the letter – a letter that did not commit them to anything at all – is indicative of a combination of factors, primarily the board’s horribly botched communications, but also that signing was a free action whereas not signing was not, and that employees had a lot of money at stake. It indicates Altman is good at politics, and that he did win over the staff versus the alternative, but it says little about the concerns here.

Here are two recent Roon quotes that seemed relevant, regardless of his intent:

Roon (Member of OpenAI technical staff): Most people are of median moral caliber and I didn’t really recognize that as troubling before.

It’s actually really hard to be a good person and rise above self interest.

In terms of capital structure or economics nerds try to avoid thinking about this with better systems; in the free for all of interpersonal relations there’s no avoiding it.

Steve Jobs was a sociopath who abandoned his daughter for a while and then gaslit anyone who tried to make him take responsibility.

My disagreement is that I think it is exactly better systems of various sorts, not only legal systems but also cultural norms and other strategies, that mitigate this issue. There’s no avoiding it, but you can mitigate it, and woe to those who do not do so.

Roon (2nd thread): People’s best qualities are exactly the same as their tragic flaws that’ll destroy them.

This is often why greatness is a transitory phenomenon, especially in young people. Most alpha turns out to be levered beta. You inhabit a new extreme way of existing and reap great rewards; then the Gods punish you because being extreme is obviously risky.

People who invest only in what makes them great will become extremely fragile and blow up.

Sometimes. Other times it is more that the great thing covered up or ran ahead of the flaws, or prevented investment in fixing the flaws. This makes me wonder whether I took on, or perhaps am still taking on, insufficient leverage and insufficient risk, at least from a social welfare perspective.

The Case For Altman and What We Will Learn Next

While we could be doing better than Sam Altman, I still think we could be, and likely would be, doing much worse without him. He is very much ‘above replacement.’ I would take him in a heartbeat over the CEOs of Google and Microsoft, of Meta and Mistral. If I could replace him with Emmett Shear and keep all the other employees, I would do it, but that is not the world we live in.

An obvious test will be what happens to and with the board going forward. There are at least two appointments remaining to get to nine, even if all current members were to stay, which seems unlikely. Will we get our non-insider technical expert and our clear safety advocate slash skeptic? How many obvious allies like Brockman will be included? Will we see evidence the new board is actively engaged and involved? And so on.

What happens with Ilya Sutskever matters. The longer he remains in limbo, the worse a sign that is. Ideal would be him back heading superalignment and clearly free to speak about related issues. Short of that, it would be good to see him fully extracted.

Another key test will be whether OpenAI rushes to release a new model soon. GPT-4 has been out for a little over a year. That would normally mean there is still a lot of time left before the next release, but now Gemini and Claude are both roughly on par.

How will Sam Altman respond? 

If they rush a ‘GPT-5’ out the door within a few months, without the type of testing and evaluation they did for GPT-4, then that tells us a lot. 

Sam Altman will soon do an interview with Lex Fridman. Letting someone talk for hours is always insightful, even if, as per usual with Lex Fridman, the hard-hitting questions do not get asked; that is not his way. That link includes some questions a hostile interviewer would ask, some of which would also be good questions for Lex to ask in his own style.

What non-obvious things would I definitely ask after some thought, but two or more orders of magnitude less thought than I’d give if I was doing the interview?

  1. I definitely want him to be asked about the seeming contradiction between the overhang argument and the chips project, and about how much of that project is chips versus electricity and other project details.
  2. I’d ask him technical details about their preparedness framework and related issues, to see how engaged he is with that and where his head is landing on such questions. This should include what scary capabilities we might see soon.
  3. I’d ask how he sees his relationship with the new board, how he plans to keep them informed, ensure that they have access to employees and new projects and products, and how they will have input on key decisions short of firing him, and how he plans to address the current lack of technical expertise or safety advocacy. How will OpenAI ensure it is not effectively another commercial business?
  4. I’d ask him about OpenAI’s lobbying especially with regard to the EU AI Act and what they or Microsoft will commit to in the future in terms of not opposing efforts, and how government can help labs be responsible and do the right things.
  5. I’d check his views on potential AI consciousness and how to handle it because it’s good to sanity check there.
  6. I’d ask what he means when he says AGI will come and things will not change much for a while, and discuss the changes in terminology here generally. What exactly is he envisioning as an AGI when he says that? What type of AGI is OpenAI’s mission? Would they then stop there? How does that new world look? Why wouldn’t it lead to something far more capable quickly? Ideally you spend a lot of time here, in cooperative exploratory mode.
  7. To the extent Lex is capable, I would ask all sorts of technical questions; see Dwarkesh’s interviews with Dario Amodei and Demis Hassabis for how to do this well. I would ask about his take on Leike’s plans for Superalignment, and how to address obvious problems such as the ‘AI alignment researcher’ also being a capabilities researcher.
  8. Indeed, I would ask: Are you eager to go sit down with Dwarkesh Patel soon?
  9. I mostly wouldn’t focus on asking what happened with the firing, because I do not expect to be able to get much useful out of him there, but you can try. My view is something like, you would want to pre-negotiate on this issue. If Altman wants to face hostile questioning and is down for it, sure do it, otherwise don’t press.
  10. For Ilya, I would do my best to nail Altman down specifically on two questions: Is Ilya still employed by OpenAI? And does Ilya have Altman’s full faith and confidence, if he chooses, to continue to head up the Superalignment Taskforce?

There is of course so much more, and so much more thinking to do on such questions. A few hours can only scratch the surface, so you have to pick your battles and focus. This is an open invitation to Lex Fridman or anyone else who gets an interview with Altman: if you want my advice, or want to talk about how to maximize your opportunity, I’m happy to help.

There have been and will be many moments that inform us. Let’s pay attention.

Comment from bhauth:

I too like Sam Altman, his writing and the way he often communicates. He strives for what he believes is good.

I'm really not sure why you (or other people) say that. I saw, for example, his interview with Lex, and my impression was that he doesn't care about or understand AI safety but he memorized some phrases to appeal to people who don't think too hard about them. Also, that he's a good actor who can make it seem like he cares deeply; it reminded me of watching characters like Sam Carter in SG1 talk about physics.

I would add I am a big fan of his medical and fusion efforts.

I'm not, but that's because I understand the technical details and know those approaches can't work. In any case, my view is that his investments in those were driven largely by appealing to people like you.