The board members urged him to support an interim chief executive. He assured them that he would.
Within hours, Mr. Altman changed his mind and declared war on OpenAI’s board.
I point this out because it is a common theory that Altman was a master of Exact Words and giving implications. That yes he was deceptive and misleading and played power games, but he was too smart to outright say that which was not.
So here he is, saying that which is not.
This does not seem to be a correct conclusion.
This says he promised them to support an interim chief executive. Well, his relationship with Mira quickly turned out to be so great that the board chose to fire her as interim CEO for being too pro-Altman and to get someone else.
As far as we know, his relationship with Emmett was also cordial at all times during those 55 hours and 32 minutes.
And he did not promise to support the board.
So, in terms of Exact Words, he seems to be in the clear.
You say ‘We have no intention of doing any such thing. The company is perfectly capable of carrying on without Altman. We have every intention of continuing on OpenAI’s mission, led by the existing executive team. Altman promised to help with the transition in the board meeting. If he instead chooses to attempt to destroy OpenAI and its mission, that is his decision. It also proves he was incompatible with our mission and we needed to remove him.’
OpenAI's charter seems consistent with Toner's statement that "The destruction of the company could be consistent with the board’s mission." Here are some quotes:
We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
...
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.
...
We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges.
Telling Toner to stay quiet about the charter seems like telling a fire captain to stay quiet about the fact that trainee firefighters may someday need to enter a burning building.
My feeling: It's not Toner's fault that she reminded people about the charter. It's everyone else's fault for staying quiet about it. It's like if on the last day of firefighter training, one of the senior firefighters leading the training said "btw, being a firefighter sometimes means running into a burning building to save someone" and everyone was aghast -- "you're not supposed to mention that! you're gonna upset the trainees and scare them away!"
The entire situation seems a little absurd to me. In my mind, effective firefighter training means psychologically preparing a trainee to enter a burning building from day one. (I actually made a comment about the value of pre-visualization in emergency situations about a year ago.) Maybe OpenAI execs should have been reviewing the charter with employees at every all-hands meeting, psychologically preparing them for the possibility that they might someday need to e.g. destroy the company. It feels unfair to blame Toner that things got to the point they did.
Or perhaps it would be more analogous to firefighting tactics historically, which, in the absence of extremely large mechanical pumps & water sources, frequently forced a resort to making firebreaks, which can involve bans, deliberate destruction, controlled burns, or in the case of fighting fires in cities, razing* entire streets down to the ground (using dynamite if you must).
No one wants to dynamite thousands of innocent people's homes, but if you are in the middle of the San Francisco earthquake and there is a city-wide fire raging and all the water pipes are busted (assuming you were lucky enough to be in a time & place where they ever existed)...
* if you like anime, the Short Peace anthology shows this in the "Combustible" short. It's spectacularly animated. Or, by sheer coincidence, I'm watching Promare, about sci-fi firefighting mecha, and the protagonist uses a traditional Japanese firefighting outfit with a hooked spear-like weapon to fight fire-aliens... and the original hook existed, of course, not to duel fire-aliens but to help demolish buildings in cities (whether or not they are on fire yet).
I think this is a good analogy. Though I think "one day you might have to dynamite a bunch of innocent people's homes to keep a fire from spreading, that's part of the job" is a good thing to have in the training if that's the sort of thing that's likely to come up.
Tangent:
AGI is the sweetest, most interesting, most exciting challenge in the world.
We usually concede this point and I don't even think it's true. Of course, even if I'm right, maybe we don't want to push in this direction in dialogue, because it would set the bad precedent of not defending ethics over coolness (and sooner or later something cool will be unethical). But let me discuss anyway.
Of course building AGI is very exciting, and it incentivizes some good problem-solving. But doing it through Deep Learning the way OpenAI does has an indelible undercurrent of "we don't exactly know what's going on, we're just stirring the linear algebra pile". Of course that can already be an incredibly interesting engineering problem, and it's not like you don't need a lot of knowledge and good intuitions to make these hacky things work. But I'm sure the aesthetic predispositions of many (and especially the more mathematically oriented) will line up way better with "actually understanding the thing". From this perspective Alignment, Deep Learning Theory, Decision Theory, understanding value formation, etc. feel like fundamentally more interesting intellectual challenges. I share this feeling and I think many other people do. A lot of people have been spoiled by the niceness of math, and/or can't stand the scientific shallowness of ML developments.
At the end of the day it has to work. And you need scale. Someone has to pay for larger scale research.
The best way to achieve the understanding you seek is to first expand the AI industry to hundreds of thousands of people, a trillion dollars in annual R&D, and a TPU in everything.
Note that historically things like a detailed understanding of aerodynamics were achieved many decades after exploitation. The original aviation pioneers were dead by the time humans built large enough computers to model aerodynamics. Humans had already built millions of aircraft and optimized all the way to the SR-71.
What actually happens if OpenAI gets destroyed? Presumably most of the former employees get together and work on another AI company, maybe sign up directly w/ Microsoft. And are now utterly polarized against AI Safety.
Reading this post, here's my new galaxy-brain take on Altman:
If we're going to solve AI alignment, Altman is making sure we first have to solve Altman alignment. If we can't even get hyper-optimizing human Altman aligned, we have no hope of aligning AI.
While a fun quip to make, it has little relation to the physical reality. There are technologically possible ways of aligning AIs that have no analogy for aligning humans, and there are approaches to aligning humans that are not available for aligning AIs. Solving one problem, thus, doesn't necessarily have anything to do with solving the other: you can be able to align an AGI without being able to align a flesh-and-blood human, and you could develop the ability to robustly align humans yet end up not one step closer to aligning an AGI.
I mean, I can't say this whole debacle doesn't have a funny allegorical meaning in the context of AI Alignment and OpenAI's chances of achieving it. But it's a funny allegory, not exact correspondence.
Previously: OpenAI: Altman Returns, OpenAI: The Battle of the Board, OpenAI: Facts from a Weekend, additional coverage in AI#41.
We have new stories from The New York Times, from Time, from the Washington Post and from Business Insider.
All paint a picture consistent with the central story told in OpenAI: The Battle of the Board. They confirm key facts, especially Altman’s attempted removal of Toner from the board via deception. We also confirm that Altman promised to help with the transition when he was first fired, so we have at least one very clear cut case of Altman saying that which was not.
Much uncertainty remains, especially about the future, but past events are increasingly clear.
The stories also provide additional color and key details. This post is for those who want that, and to figure out what to think in light of the new details.
The most important new details are that NYT says that the board proposed and was gung ho on Bret Taylor, and says D’Angelo suggested Summers and grilled Summers together with Altman before they both agreed to him as the third board member. And that the new board is remaining quiet while it investigates, echoing the old board, and in defiance of the Altman camp and its wish to quickly clear his name.
The New York Times Covers Events
The New York Times finally gives its take on what happened, by Tripp Mickle, Mike Isaac, Karen Weise and the infamous Cade Metz (so treat all claims accordingly).
As with other mainstream news stories, the framing is that Sam Altman won, and this shows the tech elite and big money are ultimately in charge. I do not see that as an accurate description of what happened or its implications, yet both the tech elite and its media opponents want it to be true and are trying to make it true through the magician’s trick of saying that it is true, because often power resides where people believe it resides.
I know that at least one author did read my explanations of events, and also I talked to a Times reporter not on the byline to help make everything clear, so they don’t have the excuse that no one told them. Didn’t ultimately matter.
Paul Graham is quoted as saying Altman is drawn to power more than money, as an explanation for why Altman would work on something that does not make him richer. I believe Graham on this, but also I think there are at least three damn good other reasons to do it, making the decision overdetermined.
Pretty much every version of Altman I can imagine would want to be doing this.
The key description of the safety issue is structured in a way that it is easy to come away thinking this was a concern of the outside board members, but both in reality and if you read the article carefully, this applies to the entire board (although we have some uncertainty about Brockman in particular):
Remember that this was and is the explicit goal of OpenAI, to safely create AI more intelligent than humans, also known as AGI. Altman signed the CAIS letter, although Brockman is not known to have done so. Altman has made the threat here very clear. Everyone involved understands the danger. Everyone is, to their credit, talking price.
The first piece of news is that we have at least one case in which we can be damn sure that Sam Altman lied to the board, in at least some important senses.
I point this out because it is a common theory that Altman was a master of Exact Words and giving implications. That yes he was deceptive and misleading and played power games, but he was too smart to outright say that which was not.
So here he is, saying that which is not.
Did it matter? Maybe no. But maybe quite a lot, actually. This cooperation could have been a key factor driving the decision not to detail the issues with Altman, at least initially, when it would have worked. If Altman is going to cooperate, what he gets in return is the mission continues and also whatever he did gets left unspecified.
The article waffles on whether or not Altman actually did declare war on the board that night. The statement above says so. Then they share a narrative of others driving the revolt, including Airbnb’s CEO Brian Chesky, the executives and employees, with Altman only slowly deciding to fight back.
It can’t be both. Which is it?
I assume the topline is correct. That Altman was fighting back the whole time. And that despite being willing to explicitly say that up top, Altman’s people sufficiently sculpted the media narrative to make the rest sound like events unfolded in a very different way. It is an absolute master class in narrative sculpting and media manipulation. They should teach this in universities. Chef’s kiss.
We have confirmation that Altman was not ‘consistently candid’ about the project to build chips in the UAE:
For many obvious reasons, this is an area where the board would want to be informed, and any reasonable person in Altman’s position would know this, and norms say that this means they should be informed. But not informing them would not by default strictly violate the rules, as long as Altman honestly answered questions when asked. Did he, and to what extent? We don’t know.
Now we get into some new material.
This frames Sutskever as having been in favor of firing Altman for some time. If this is true, the board’s sense of urgency, and its unwillingness to take time to plan and get its ducks in a row, makes even less sense. If they had been discussing the issue for months, if Ilya had been not only on board but enthusiastic for a month, I don’t get it.
The post then goes over the incident over Toner’s ignored academic paper, for which Toner agreed to apologize to keep the peace.
We’re definitely not. Toner and I are on the same page that this was trivial and obviously so. Altman was presenting it as a major deal.
Now we get to the core issue.
Time magazine gives this version:
Whatever other reasons did or did not exist, if Altman did say that, my model of such things is that he needed to be fired and it was the board’s job to fire him. And the board really should have said so, rather than speaking in generalities.
Multiple witnesses are saying to NYT that he said it. Altman denies it.
It seems clear Altman did use private conversations with board members to give false impressions and drum up support for getting Toner off the board, thereby giving Altman board control, using the paper as an excuse. The dispute is whether Altman did it using Exact Words, or whether he lied. Altman called his attempt ‘ham fisted’ which I believe is power player code for ‘got caught lying’ but could also apply to ‘got caught technically-not-lying while implicitly lying my ass off.’
NYT does seem to be saying the board did step up their description a bit:
Use of the word ‘lied’ is an escalation. And this is a clear confirmation of lawyers.
We also have confirmation of zero PR people, because we have Toner’s infamous line. I know the logic behind it but I still cannot believe that she actually said it out loud given the context, seriously WTF:
You say ‘We have no intention of doing any such thing. The company is perfectly capable of carrying on without Altman. We have every intention of continuing on OpenAI’s mission, led by the existing executive team. Altman promised to help with the transition in the board meeting. If he instead chooses to attempt to destroy OpenAI and its mission, that is his decision. It also proves he was incompatible with our mission and we needed to remove him.’
This sounds highly contingent.
Also the executives had now already made an explicit bluff, threatening to quit. The board called. The executives did not quit. Subsequent such threats become far less credible.
Skipping ahead a bit, they still tried this a second time.
Pro negotiation tip: Do not quickly pull this trick a second time once your first bluff gets called. It will not work. That is why you do not rush out the first bluff, and instead wait until your position is stronger.
Of course the board called the second bluff, appointing Emmett Shear.
The next piece of good information came before that deadline was set, which is that Bret Taylor was actually seen as a fair arbiter approved by both sides rather than being seen as in the Altman camp.
But also note that in this telling, it was the board that wanted concessions and in particular new board members rather than Altman. That directly contradicts other reports and does not make sense, unless you read it as ‘contingent on the old board agreeing to resign, they wanted concessions.’ As in, the board was going to hand over its control of OpenAI, and they wanted the concession of ‘we agree on who we give it to, and what those people agree will happen.’ At best, I find this framing bizarre.
Larry Summers was a suggestion of D’Angelo, in some key original reporting:
So both sides talked to Summers, and were satisfied with his answers.
Overall this all makes me bullish on the new board. We might be in a situation with, essentially, D’Angelo and two neutral arbiters, albeit ones with gravitas and business connections. They are not kowtowing to Altman. Altman’s camp continues to fume (and somehow texts from Conway to Altman about it are leaking to NYT; there are not many places those can come from).
Gwern offers their summary here.
Time Makes Altman CEO of the Year
Time profiled Altman, calling him ‘CEO of the year,’ a title he definitely earned. I think this is the best very short description so far, nailing the game theory:
Sources also note the side of Altman that seeks power, and is willing to be dishonest and manipulative in order to get it.
This is the first mainstream report that correctly identifies the outcome as unclear:
In addition to his other good picks, we can add… Georgist land taxes? Woo-hoo!
That is the kind of signal no one ever fakes. There really is a lot to love.
Including his honesty. I don’t want to punish it, but also I want to leave this here.
Time describes the board’s initial outreach to Altman this way:
I am not saying we know for sure that this is another case of Altman lying (to Time rather than the board, a much less serious matter), but his version of events does not compute. If the board was actively asking for Altman to outright return, I do not buy that this was Altman’s reaction.
I could buy either half of Altman’s story: that Altman was asked to return, or that Altman was defiant to the board’s request and only did it out of duty and obligation (because the board was initially requesting something else). I don’t buy both at once. It is entirely inconsistent with Paul Graham’s assessment of his character.
Washington Post Says Leaders Warned Altman was Abusive
The WaPo piece says that in the fall the board was approached by a small number of senior leaders at OpenAI, with concerns about Altman. In this telling, the board thought OpenAI stood to lose key leaders due to what they saw as Altman’s toxicity.
That is always true. No large group is ever fully united, no matter what the emojis say.
There are few concrete details. What details are offered sound like ordinary things that happen at a company. What is and is not abusive, in such a high-pressure and competitive environment, is in the eye of the beholder and highly context dependent. What is described here could reflect abuse, or it could reflect nothing of concern. What is concerning is that employees found it concerning enough to go to the board.
Beyond the one concrete detail of managers going to the board with such complaints, this did not teach us much. It seems like those concerns helped confirm the board’s model of Altman’s behavior, and helped justify the decision on the margin.
Business Insider Says Microsoft Letter Was a Bluff
Business Insider says that OpenAI employees really, really did not want to go to work at Microsoft. I wouldn’t either. The employees might have largely still seen it as the least bad alternative under some circumstances, if Altman didn’t want to start a new company. And remember, the letter said they ‘might’ do it, not that they all definitely would.
AI Safety Memes offers the following quotes:
Roon responds that this was not accurate:
I trust that Roon is giving his honest recollection of his experience here. I also believe the two stories are more compatible than he realizes.
The employees wanted Altman back, or barring that a board and CEO that they could trust, without which they would leave, but they mostly wanted OpenAI intact, and ideally to get paid, and were furious with the board. They didn’t want to go to Microsoft; we will never know how many would have actually done it versus stayed versus gone elsewhere or founded new companies, or how long the board had before that time bomb went off in earnest. My guess is that a lot of employees go to Microsoft if Altman stays there, but a lot also choose other paths.
My guess is that both the majority of employees enthusiastically signed the letter, and also those who didn’t want to sign felt pressured to do so anyway and this got a bunch of the later signatures onboard. I know I would have felt pressured even if no one applied any pressure intentionally.
Wei Dai sees the employee actions at OpenAI and the signing of the petition as a kind of OpenAI cultural revolution that he did not think was possible at a place like that, and sees it as a huge negative update. I was less surprised, and also read less into the letter. There was good reason, from their perspective, to be outraged and demand Altman’s return. There was also good reason to sign the letter even if an individual employee did not support Altman – to hold the company together, and for internal political reasons even if no direct pressure was applied. Again, the letter said ‘may’ leave, so it did not commit you to anything.
Where Does That All Leave Us?
I will continue to link people to The Battle of the Board, which I believe remains the definitive synthesis of events. We now have additional details supporting and fleshing out that narrative, but they do not alter the central story.
I am sure I will continue to often have a weekly section on developments, but hopefully things will slow down from here.
As I wrote previously, we now await the board’s investigation, and the composition of the new board. If the new board has a clear majority with a strong commitment to existential safety and the mission, and has the gravitas and experience necessary to do the job of the board, that would be a very good outcome, and if he did not do anything to render it impossible I would be happy to see Altman stay under such supervision.
If that proves not possible after an investigation, we will see who we get instead. I worry it will not be better, but I also expect the company to then hold together, in a way it would not have if the board had not compromised, given how things had gone.
If the board instead ends up effectively captured by business interests and those who do not care about safety or OpenAI’s stated mission, that would be a catastrophe, whether or not Altman is retained.
If Altman ends up with effective board control and has free rein, then that is a highly worrisome outcome, and we get to find out to what extent Altman is truly aligned, wise and capable of resisting certain aspects of his nature, versus the temptation to build and scale and seek power. It could end up fine, or be disastrous.