I too like Sam Altman, his writing and the way he often communicates. He strives for what he believes is good.
I'm really not sure why you (or other people) say that. I saw, for example, his interview with Lex, and my impression was that he doesn't care about or understand AI safety but he memorized some phrases to appeal to people who don't think too hard about them. Also, that he's a good actor who can make it seem like he cares deeply; it reminded me of watching characters like Sam Carter in SG1 talk about physics.
I would add I am a big fan of his medical and fusion efforts.
I'm not, but that's because I understand the technical details and know those approaches can't work. In any case, my view is that his investments in those were driven largely by appealing to people like you.
It is largely over.
The investigation into events has concluded, finding no wrongdoing anywhere.
The board has added four new board members, including Sam Altman. There will still be further additions.
Sam Altman now appears firmly back in control of OpenAI.
None of the new board members had previously been mentioned on this blog, or were known to me at all.
They are mysteries with respect to AI. As far as I can tell, all three lack technical understanding of AI and have no known prior opinions on, or engagement with, the topics of AI, AGI and AI safety of any kind, including existential risk.
Microsoft and the investors have indeed, so far, come away without a seat. The new members also, however, lack known strong bonds to Altman, so this is not obviously a board fully under his control if there were to be another crisis. They now have the gravitas the old board lacked. One could reasonably expect the new board to be concerned with ‘AI Ethics’ broadly construed, in a way that could conflict with Altman, or with diversity, equity and inclusion.
One must also remember that the public is very concerned about AI existential risk when the topic is brought up, so ‘hire people with other expertise who have not looked at AI in detail yet’ does not mean the new board members will dismiss such concerns, although it could also be that they were picked because they don’t care. We will see.
Prior to the report summary and board expansion announcements, The New York Times put out an article leaking potentially key information, in ways that looked like an advance leak from at least one former board member, claiming that Mira Murati and Ilya Sutskever were both major sources of information driving the board to fire Sam Altman, while not mentioning other concerns. Mira Murati has strongly denied these claims and has the publicly expressed confidence and thanks of Sam Altman.
I continue to believe that my previous assessments of what happened were broadly accurate, with new events providing additional clarity. My assessments were centrally offered in OpenAI: The Battle of the Board, which outlines my view of what happened. Other information is also in OpenAI: Facts From a Weekend and OpenAI: Altman Returns.
This post covers recent events, completing the story arc for now. There remain unanswered questions, in particular what will ultimately happen with Ilya Sutskever, and the views and actions of the new board members. We will wait and see.
The New Board
The important question, as I have said from the beginning, is: Who is the new board?
We have the original three members, plus four more. Sam Altman is one very solid vote for Sam Altman. Who are the other three?
This tells us who they are in some senses, and nothing in other important senses.
I did some quick investigation, including asking multiple LLMs and asking Twitter, about the new board members and the implications.
It is not good news.
The new board clearly looks like it represents an attempt to pivot:
I have nothing against any of these new board members. But I do not have anything for them, either. I had perhaps never previously heard their names.
We do have this tiny indirect link to work with, I suppose, although it is rather generic praise indeed:
Otherwise this was the most positive thing anyone had to say overall; no one had anything more detailed than this in any direction:
The pivot away from any technological domain knowledge, towards people who know other areas instead, is the most striking. There is deep expertise in non-profits and big corporations, with legal and regulatory issues, with major technologies and so on. But these people (presumably) don’t know AI, and seem rather busy for purposes of getting up to speed on what they may or may not realize is the most important job they will ever have. I don’t know their views on existential risk because none of them have, as far as I know, expressed such views at all. That seems not to be a concern here at all.
Contrast this with a board that included Toner, Sutskever and Brockman along with Altman. The new board will not have anyone on it who can act as a sanity check on technical claims or risk assessments from Altman; perhaps D’Angelo will be the closest thing left to that. No one will be able to evaluate claims, express concerns, and ensure everyone has the necessary facts. It is not only Microsoft that got shut out.
So this seems quite bad. In a normal situation, if trouble does not find them, they will likely let Altman do whatever he wants. However, with only Altman as a true insider, if trouble does happen then the results will be harder to predict or control. Altman has in key senses won, but from his perspective he should worry he has perhaps unleashed a rather different set of monsters.
This is the negative case in a nutshell:
And there was also this in response to the query in question:
Day to day, Altman has a free hand; these are a bunch of busy business people.
However, that is not the important purpose of the board. The board is not there to give you prestige or connections. Or rather, it is partly for that, but that is the trap that prestige and connections lay for us.
The purpose of the board is to control the company. The purpose of the board is to decide whether to fire the CEO, and to choose future iterations of the board.
The failure to properly understand this before is part of how things got to this point. If the same mistake is repeating itself, then so be it.
The board previously intended to have a final size of nine members. Early indications are that the board is likely to further expand this year.
I would like to see at least one person with strong technical expertise other than Sam Altman, and at least one strong advocate for existential risk concerns.
Altman would no doubt like to see Brockman come back, and to secure those slots for his loyalists generally as soon as possible. A key question is whether this board lets him do that.
One also notes that they appointed three women on International Women’s Day.
The Investigation Probably Was Not Real
Everyone is relieved to put this formality behind them, Sam Altman most of all. Even if the investigation was never meant to go anywhere, it forced everyone involved to be careful. That danger has now passed.
Indeed, here is their own description of the investigation, in full; bold is mine:
And consider this tidbit from The Washington Post:
If Altman is doing the types of things people say he is doing, and you do not offer a confidential way to share relevant information, that tells me you are not so interested in finding wrongdoing by Sam Altman.
Taken together with an inability to offer confidentiality, it would be difficult for a summary statement to scream the message ‘we wanted this all to go away quietly and had no interest in a real investigation if we could avoid one’ any louder. There was still the possibility that one could not be avoided, if the gun was sufficiently openly smoking. That turned out not to be the case. So instead, they are exonerating the board in theory, saying it messed up in practice, and moving on.
One detail that I bolded is that the board did not anticipate that firing Altman would destabilize OpenAI, that they thought he would not fight back. If true, then in hindsight this looks like a truly epic error on their part.
But what if it wasn’t, from their perspective at the time? Altman had a very real choice to make.
Altman decided to fight back in a way rarely seen, and in a way he did not do at YC when he was ejected there. He won, but there was real risk that OpenAI could have fallen apart. He was counting on the board both to botch the fight, and then to cave rather than let OpenAI potentially fail. And yes, he was right, but that does not mean it wasn’t a gamble, or that he was sure to make it.
I still consider this a massive error by the board, for five reasons.
Yes, some of that is of course hindsight. There were still many reasons to realize this was a unique situation, where Altman would be uniquely poised to fight.
Early indications are that we are unlikely to see the full report this year.
The New York Times Leak and Gwern’s Analysis of It
Prior to the release of the official investigation results and the announcement of the board’s expansion, the New York Times reported new information, including some that is hard to imagine not having come from at least one board member. Gwern then offered perspective analyzing what it meant that this article was published while the final report was not yet announced.
Gwern’s take was that Altman had previously had a serious threat to worry about with the investigation; it was not clear he would be able to retain control. He was forced to be cautious, to avoid provocations.
We discussed this a bit, and I was convinced that Altman had more reason than I realized to be worried about this at the time. Even though Summers and Taylor were doing a mostly fake investigation, it was only mostly fake, and you never know what smoking guns might turn up. Plus, Altman could not be confident it was mostly fake until late in the game, because if Summers and Taylor were doing a real investigation they would have every reason not to tip their hand. Yes, no one could talk confidentially as far as Altman knew, but who knows what deals could be struck, or who might be willing to risk it anyway?
That, Gwern reported, is over now. Altman will be fully in charge. Mira Murati (who we now know was directly involved, not simply someone initially willing to be interim CEO) and Ilya Sutskever will have their roles reduced or leave. The initial board might not formally be under Altman’s control but will become so over time.
I do not give the investigation as much credit for being a serious threat as Gwern does, but certainly it was a good reason to exercise more caution in the interim to ensure that remained true. Also Mira Murati has denied the story, and has at least publicly retained the confidence and thanks of Altman.
Here is Gwern’s full comment:
That continues to be the actual key question in practice. Who controls the final board, and to what extent will that board be in charge?
We now know who that board will be, suggesting the answer is that Altman may not be in firm control of the board, but the board is likely to give him a free hand to do what he wants. For all practical purposes, Altman is back in charge until something happens.
The other obvious question is: What actually happened? What did Sam Altman do (or not do)? Why did the board try to fire Altman in the first place?
Whoever leaked to the Times, and their writers, have either a story or a theory.
You know what does not come up in the article? Any mention of AI safety or existential risk, any mention of Effective Altruism, any mention of board members other than Ilya, including Helen Toner, or any attempt by Altman to alter the board. This is all highly conspicuous by its absence, even the absence of any note of its absence.
There is also no mention of either Murati or Sutskever describing any particular incident. The things they describe are a pattern of behavior, a style, a way of being. Any given instance of it, any individual action, is easy to overlook and impossible to much condemn. It is only in the pattern of many such incidents that things emerge.
Or perhaps one can say, this was an isolated article about one particular incident. It was not a claim that anything else was unimportant, or that this was the central thing going on. And technically I think that is correct? The implication is still clear – if I left with the impression that this was being claimed, I assume other readers did as well.
But was that the actual main issue? Could this have been not only not about safety, as I have previously said, but also mostly not about the board? Perhaps the board was mostly trying to prevent key employees from leaving, and thought losing Altman was less disruptive?
What Do We Now Think Happened?
I find the above story hard to believe as a central story. The idea that the board did this primarily to avoid losing Murati and Sutskever, and the exodus they would cause, does not really make sense. If you are afraid to lose Murati and Sutskever, I mean that would no doubt suck if it happened, but it is nothing compared to the risk of getting rid of Altman. Even in the best case, where he does leave quietly, you are likely going to lose a bunch of other valuable people, starting with Brockman.
It only makes sense as a reason if it is one of many different provocations, a straw that breaks the camel’s back. In which case, it could certainly have been a forcing function, a reason to act now instead of later.
Another argument against this being central is that it doesn’t match the board’s explanation, or their (and Mira Murati’s at the time) lack of a further explanation.
It certainly does not match Mira Murati’s current story. A reasonable response is ‘she would spin it this way no matter what, she has to,’ but this totally rhymes with the way The New York Times is known to operate. Her story is exactly compatible with NYT operating at the boundaries of the laws of bounded distrust, and using them to paint what they think is a negative picture of a situation at a tech company (one that they are actively suing):
I went back and forth on that question, but yes I do think it is compatible, and indeed we can construct the kind of events that allow one to technically characterize things the way NYT does, and also for Mira Murati to say what she said and not be lying. Or, of course, either side could indeed be lying.
What we do know is that the board tried to fire Sam Altman once, for whatever combination of reasons. That did not work, and the investigation seems to not have produced any smoking guns and won’t bring him down, although the 85% from Gwern that Altman remains in charge doesn’t seem different from what I would have said when the investigation started.
OpenAI comms in general are not under the board’s control. That is clear. That is how this works. Altman gets to do what he wants. If the board does not like it, they can fire him. Except, of course, they can’t do that, not without strong justification, and ‘the comms are not balanced’ won’t cut it. So Altman gives out the comms he wants, tries to raise trillions, and so on.
I read all this as Altman gambling that he can take full effective control again and that the new board won’t do anything about it. He is probably right, at least for now, but that is also the kind of risk Altman runs and the kind of game he plays. His strategy has been proven to alienate those around him and to cause trouble, as one would expect of someone pursuing important goals aggressively and taking risks. That is indeed what Altman should do, if he believes in what he is doing; you don’t succeed at this level by playing it safe, and you have to play the style and hand you are dealt. But he will doubtless take it farther than is wise. As Gwern puts it, why shouldn’t he, when things have always worked out before? People who push envelopes like this learn to keep pushing them until things blow up; they are not scared off for long by close calls.
Altman’s way of being and default strategy is all but designed to not reveal, to him or to us, any signs of trouble unless and until things do blow up. It is not a coincidence that the firing seemed to come out of nowhere the first time. One prediction is that, if Altman does get taken out internally, which I do not expect any time soon, it will once again look like it came out of nowhere.
Another conclusion this reinforces is the need for good communication, the importance of not hiding behind your legal advisors. At the time everyone thought Mira Murati was the reluctant temporary steward of the board and as surprised as anyone, which is also a position she is standing by now. If that was not true, it was rather important to say it was not true.
Then, as we all know, whatever Murati’s initial position was, both Sutskever and Murati ultimately backed Altman. What happened after that? Sutskever remains in limbo months later.
As far as we know, Mira Murati is doing fine, and focused on the work, with Sam Altman’s full support.
If those who oppose Altman get shut out, one should note that this is what you would expect. We all know the fate of those who come at the king and miss. Some think that they can then use the correct emojis and turn on their allies to let the usurper take power, and that they will then be spared. You will never be spared for selling out like that.
So watching what happens to Murati is the strongest sign of what Altman thinks happened, as well as of whether she was indeed ready to leave; together those are likely our best evidence of what happened with respect to her.
Altman’s Statement
This is a gracious statement.
It also kind of gives the game away in that last paragraph, with its non-apology. Which I want to emphasize that I very much appreciate. This is not fully candid, but it is more candor than we had a right to expect, and to that extent it is the good kind of candor.
Thus, we have confirmation that Sam Altman thought Helen Toner was harming OpenAI through some of her actions, and that he ‘should have handled that situation with more grace and care.’
This seems highly compatible with what I believe happened, which was that he attempted to use an unrelated matter to get her removed from the board, including misrepresenting the views of board members to other board members to try and get this to happen, and that this then came to light. The actions Altman felt were hurting OpenAI were thus presumably distinct from the actions Altman then tried to use to remove her.
Even if I do not have the details correct there, it seems highly implausible, given this statement, that events related to his issues with Toner were not important factors in the board’s decision to fire Altman.
There is no mention at all of Ilya Sutskever here. It would have been the right place for Altman to extend an olive branch, if he wanted reconciliation so Sutskever could focus on superalignment and keeping us safe, an assignment everyone should want him to have if he is willing. Instead, Altman continues to be silent on this matter.
Helen Toner and Tasha McCauley’s Statement
Whether or not either of them leaked anything to NYT, they issued a public statement as well.
That is not the sentiment of two people convinced everything is fine now. This is a statement that things are very much not fine, and that the investigation did not do its job, and that they lack faith in the newly selected board, although they may be hopeful.
Alas, they continue to be unable or unwilling to share any details, so there is not much more that one can say.
The Case Against Altman
Taking all this at face value, here is the simple case for why all this is terrible:
I too like Sam Altman, his writing and the way he often communicates. I would add I am a big fan of his medical and fusion efforts. He has engaged for real with the ideas that I consider most important; even if he has a different opinion, I know he takes the concerns seriously. Most of all, I would emphasize: He strives for what he believes is good. Yes, he will doubtless sometimes fool himself, as you are always the easiest person for you to fool, but it is remarkable how many people in his position do not remotely pass these bars.
I also have very strong concerns that we are putting a person whose highest stats are political maneuvering and deception, who is very high in power seeking, into this position. By all reports, you cannot trust what this man tells you.
I have even stronger concerns about him not having proper strong oversight, capable of reining in or firing him if necessary, and of understanding what is going on and forcing key decisions. I do not believe this is going to be that board.
There are also a number of very clear specific concerns.
Sam Altman is kind of trying to raise $7 trillion for chips and electrical power generation. This seems to go directly against OpenAI’s mission, and completely undercut the overhang argument for why AGI must be built quickly. You cannot both claim that AGI is inevitable because we have so many chips and so need to rush forward before others do, and that we urgently need to build more chips so we can rush forward to have AGI. Or you can, but you’re being disingenuous somewhere, at best. Also the fact that Altman owns the OpenAI venture fund while lacking equity in OpenAI itself is at least rather suspicious.
Sam Altman seems to have lied to board members about the views of other board members in an attempt to take control of the board, and this essentially represents him (potentially) succeeding in doing that. If you lie to the board in an attempt to subvert the board, you must be fired, full stop. I still think this happened.
I also think lying to those around him is a clear pattern of behavior that will not stop, and that threatens to prevent proper raising and handling of key safety concerns in the future. I do not trust a ‘whistleblower process’ to get around this. If Sam Altman operates in a way that prevents communication, and SNAFU applies, that would be very bad for everyone even if Altman wants to proceed safely.
The choice of new board members, to the extent it was influenced by Altman, perhaps reflects an admirable lack of selecting reliable allies (we do not know enough to know, and it could reflect that no reliable allies fitting the requirements for those slots were available), but it also seems to reflect a lack of desire for effective checks and for understanding of the technical and safety problems on the path to AGI. It seems vital to have someone other than Altman on the board with deep technical expertise, and someone who can advocate for technical existential risk concerns. With a seven (or ultimately nine) member board there is room for those people without such factions having control, yet they are seemingly not present.
The 95% rate at which employees signed the letter – a letter that did not commit them to anything at all – is indicative of a combination of factors, primarily the board’s horribly botched communications, and including the considerations that signing was a free action whereas not signing was not, and the money they had at stake. It indicates Altman is good at politics, and that he did win over the staff versus the alternative, but it says little about the concerns here.
Here are two recent Roon quotes that seemed relevant, regardless of his intent:
My disagreement is that I think the answer is exactly better systems of various sorts, not only legal but also cultural norms and other strategies, to mitigate this issue. There’s no avoiding it, but you can mitigate, and woe to those who do not do so.
Sometimes. Other times it is more that the great thing covered up or ran ahead of the flaws, or prevented investment in fixing the flaws. This makes me wonder if I took on, or perhaps am still taking on, insufficient leverage and insufficient risk, at least from a social welfare perspective.
The Case For Altman and What We Will Learn Next
While we could be doing better than Sam Altman, I do think we could be, and likely would be, doing so much worse without him. He is very much ‘above replacement.’ I would take him in a heartbeat over the CEOs of Google and Microsoft, of Meta and Mistral. If I could replace him with Emmett Shear and keep all the other employees, I would do it, but that is not the world we live in.
An obvious test will be what happens to and with the board going forward. There are at least two appointments remaining to get to nine, even if all current members were to stay, which seems unlikely. Will we get our non-insider technical expert and our clear safety advocate slash skeptic? How many obvious allies like Brockman will be included? Will we see evidence the new board is actively engaged and involved? And so on.
What happens with Ilya Sutskever matters. The longer he remains in limbo, the worse a sign that is. Ideal would be him back heading superalignment and clearly free to speak about related issues. Short of that, it would be good to see him fully extracted.
Another key test will be whether OpenAI rushes to release a new model soon. GPT-4 has been out for a little over a year. That would normally mean there is still a lot of time left before the next release, but now Gemini and Claude are both roughly on par.
How will Sam Altman respond?
If they rush a ‘GPT-5’ out the door within a few months, without the type of testing and evaluation they did for GPT-4, then that tells us a lot.
Sam Altman will soon do an interview with Lex Fridman. Letting someone talk for hours is always insightful, even if, as per usual with Lex Fridman, they do not get the hard-hitting questions; that is not his way. That link includes some questions a hostile interviewer would ask, some of which would also be good questions for Lex to ask in his own style.
What non-obvious things would I definitely ask after some thought, but two or more orders of magnitude less thought than I’d give if I were doing the interview?
There is of course so much more, and so much more thinking to do on such questions. A few hours can only scratch the surface, so you have to pick your battles and focus. This is an open invitation to Lex Fridman or anyone else who gets an interview with Altman: if you want my advice or to talk about how to maximize your opportunity, I’m happy to help.
There have been and will be many moments that inform us. Let’s pay attention.