The key news today: Altman had attacked Helen Toner https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html (HN, Zvi; excerpts) - which explains everything if you recall board structures and voting.
Altman and the board had been unable to appoint new directors because there was an even balance of power, so during the deadlock/low-grade cold war, the board had attrited down to hardly any people. He thought he had Sutskever on his side, so he moved to expel Helen Toner from the board. He would then be able to appoint new directors of his choice. This would have irrevocably tipped the balance of power towards Altman. But he didn't have Sutskever like he thought he did, and they had, briefly, enough votes to fire Altman before he broke Sutskever (as he did yesterday), and they went for the last-minute hail-mary with no warning to anyone.
As always, "one story is good, until another is told"...
The WSJ has published additional details about the Toner fight, filling in the other half of the story. The NYT merely mentions the OA execs 'discussing' it, but the WSJ reports much more specifically that the exec discussion of Toner took place in a Slack channel Sutskever was in, and that approximately 2 days before the firing and 1 day before Mira was informed* (ie. the exact day Ilya would have flipped if they had then fired Altman about as fast as possible - scheduling meetings 48h out & voting), he saw them say that the real problem was EA and that they needed to get rid of EA associations.
https://www.wsj.com/tech/ai/altman-firing-openai-520a3a8c (excerpts)
...The specter of effective altruism had loomed over the politics of the board and company in recent months, particularly after the movement’s most famous adherent, Sam Bankman-Fried, the founder of FTX, was found guilty of fraud in a highly public trial.
Some of those fears centered on Toner, who previously worked at Open Philanthropy. In October, she published an academic paper touting the safety practices of OpenAI’s competitor, Anthropic, which didn’t release its own AI tool until ChatGPT’s emergence. “By delaying the rele
The NYer has confirmed that Altman's attempted coup was the cause of the hasty firing (excerpts; HN):
......Some members of the OpenAI board had found Altman an unnervingly slippery operator. For example, earlier this fall he’d confronted one member, Helen Toner, a director at the Center for Security and Emerging Technology, at Georgetown University, for co-writing a paper that seemingly criticized OpenAI for “stoking the flames of AI hype.” Toner had defended herself (though she later apologized to the board for not anticipating how the paper might be perceived). Altman began approaching other board members, individually, about replacing her. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought”, the person familiar with the board’s discussions told me. “Things like that had been happening for years.” (A person familiar with Altman’s perspective said that he acknowledges having been “ham-fisted in the way he tried to get a board member removed”, but that he hadn’t attempted to manipulate the board.)
...His tactical skills w
I left a comment over on EAF which has gone a bit viral, describing the overall picture of the runup to the firing as I see it currently.
The summary is: evaluations of the Board's performance in firing Altman generally ignore that Altman made OpenAI and set up all of the legal structures, staff, and the board itself; the Board could, and should, have assumed Altman's good faith, because if he hadn't been sincere, why would he have done all that, proving his sincerity in extremely costly and unnecessary ways? But, as it happened, OA recently became such a success that Altman changed his mind about the desirability of all that and now, equally sincerely, believes that the mission requires him to be in total control; this is why he started to undermine the board. The recency of that change of heart is why it was so hard for them to recognize it, develop common knowledge about it, or coordinate to remove him, given his historical track record - but that historical track record was also why, if they were going to act against him at all, it needed to be as fast & final as possible. This led to the situation becoming a powder keg, and when proof of Altman's duplicity in the Toner firing became undeniable to the Board, it exploded.
Latest news: Time sheds considerably more light on the board position, in its discouragingly-named piece "2023 CEO of the Year: Sam Altman" (excerpts; HN). While it sounds & starts like a puff piece (no offense to Ollie - cute coyote photos!), it actually contains a fair bit of leaking I haven't seen anywhere else. Most strikingly:
claims that the Board thought it had the OA executives on its side, because the executives had approached it about Altman:
The board expected pressure from investors and media. But they misjudged the scale of the blowback from within the company, in part because they had reason to believe the executive team would respond differently, according to two people familiar with the board’s thinking, who say the board’s move to oust Altman was informed by senior OpenAI leaders, who had approached them with a variety of concerns about Altman’s behavior and its effect on the company’s culture.
(The wording here strongly implies it was not Sutskever.) This of course greatly undermines the "incompetent Board" narrative, possibly explains both why the Board thought it could trust Mira Murati & why she didn't inform Altman ahead of time (was she one of tho
If you've noticed OAers being angry on Twitter today, using profanity & bluster and having oddly strong opinions about how it is important to refer to roon as @tszzl and never as @roon, it's because another set of leaks has dropped, and they are again unflattering to Sam Altman & consistent with the previous ones.
Today the Washington Post adds to the pile, "Warning from OpenAI leaders helped trigger Sam Altman’s ouster: The senior employees described Altman as psychologically abusive, creating delays at the artificial-intelligence start-up — complaints that were a major factor in the board’s abrupt decision to fire the CEO" (archive.is; HN; excerpts), which confirms the Time/WSJ reporting about executives approaching the board with concerns about Altman, and adds more details - their concerns did not relate to the Toner dispute, but apparently were about regular employees:
...This fall, a small number of senior leaders approached the board of OpenAI with concerns about chief executive Sam Altman. Altman---a revered mentor, prodigious start-up investor and avatar of the AI revolution---had been psychologically abusive, the employees said, creating pockets of chaos and de
An elaboration on the WaPo article in the 2023-12-09 NYT: “Inside OpenAI’s Crisis Over the Future of Artificial Intelligence: Split over the Leadership of Sam Altman, Board Members and Executives Turned on One Another. Their Brawl Exposed the Cracks at the Heart of the AI Movement” (excerpts). Mostly a gossipy narrative from both the Altman & D'Angelo sides, so I'll just copy over my HN comment:
another report of internal OA complaints about Altman's manipulative/divisive behavior; see previously on HN
previously we knew Altman had been dividing-and-conquering the board by lying about which others wanted to fire Toner; this says that, specifically, Altman had lied about McCauley wanting to fire Toner; presumably, this was said to D'Angelo.
Concerns over Tigris had been mooted, but this says specifically that the board thought Altman had not been forthcoming about it; still unclear if he had tried to conceal Tigris entirely or if he had failed to mention something more specific like who he was trying to recruit for capital.
Sutskever had threatened to quit after Jakub Pachocki's promotion; previous reporting had said he was upset about it, but hadn't hinted at him being so a
The WSJ dashes our hopes for a quiet Christmas by dropping on Christmas Eve a further extension of all this reporting: "Sam Altman’s Knack for Dodging Bullets—With a Little Help From Bigshot Friends: The OpenAI CEO lost the confidence of top leaders in the three organizations he has directed, yet each time he’s rebounded to greater heights", Seetharaman et al 2023-12-24 (Archive.is, HN; annotated excerpts).
This article confirms - among other things - what I suspected about there being an attempt to oust Altman from Loopt for the same reasons as YC/OA, adds some more examples of Altman amnesia & behavior (including what is, since people apparently care, being caught in a clearcut unambiguous public lie), names the law firm in charge of the report (which is happening), and best of all, explains why Sutskever was so upset about the Jakub Pachocki promotion.
Loopt coup: Vox had hinted at this in 2014 but it was unclear; however, the WSJ specifically says that Loopt was in chaos and Altman kept working on side-projects while mismanaging Loopt (so, nearly identical to the much later, unconnected, YC & OA accusations), leading the 'senior employees' to (twice!) appeal to the board
An OA update: it's been quiet, but the investigation is over. And Sam Altman won. (EDIT: yep.)
To recap, because I believe I haven't commented on this since December (this is my last big comment, skimming my LW profile): WilmerHale was brought in to do the investigation. The tender offer, to everyone's relief, went off. A number of embarrassing new details about Sam Altman have surfaced: in particular, about his enormous chip fab plan with substantial interest from giants like Temasek, and how the OA VC Fund turns out to be owned by Sam Altman (his explanation was it saved some paperwork and he just forgot to ever transfer it to OA). Ilya Sutskever remains in hiding and lawyered up (his silence became particularly striking with the release of Sora). There have been increasing reports the past week or two that the WilmerHale investigation was coming to a close - and I am told that the investigators were not offering confidentiality and the investigation was narrowly scoped to the firing. (There was also some OA drama with the Musk lawfare & the OA response, but aside from offering an object lesson in how not to redact sensitive information, it's both irrelevant & unimpo...
Ilya Sutskever and Mira Murati will leave OA or otherwise take on some sort of clearly diminished role by year-end (90%, 75%; cf. Murati's desperate-sounding internal note)
Mira Murati announced today she is resigning from OA. (I have also, incidentally, won a $1k bet with an AI researcher on this prediction.)
See my earlier comments on 23 June 2024 about what 'OA rot' would look like; I do not see any revisions necessary given the past 3 months.
As for Murati finally leaving (perhaps she was delayed by the voice shipping delays): I don't think it matters too much, as far as I can tell (not like Sutskever or Brockman leaving) - she was competent but not critical; probably the bigger deal is that her leaving is apparently a big surprise to a lot of OAers (maybe I should've taken more bets?), and so will come as a blow to morale and remind people of last year's events.
EDIT: Barret Zoph & Bob McGrew are now gone too. Altman has released a statement, confirming that Murati only quit today:
......When Mira [Murati] informed me this morning that she was leaving, I was saddened but of course support her decision. For the past year, she has been building out a strong bench of leaders that will continue our progress.
I also want to share that Bob [McGrew] and Barret [Zoph] have decided to depart OpenAI. Mira, Bob, and Barret made these decisions independently of each other and amicably, but the timing of Mira’s decision was such that it made sense to now do this all at once, so that we can work t
Of course it doesn't make sense. It doesn't have to. It just has to be a face-saving excuse for why she pragmatically told him at the last possible minute. (Also, it's not obvious that the equity round hasn't basically closed.)
At least from the intro, it sounds like my predictions were on-point: re-appointed Altman (I waffled about this at 60%: his narcissism/desire to be vindicated requires him to regain his board seat, because anything less is a blot on his escutcheon, and the pragmatic desire to lock down the board also strongly militated for his reinstatement - but it seemed so blatant a power grab in this context that surely he wouldn't dare...? Guess he did), released to an Altman outlet (The Information), with 3 weak, apparently 'independent' and 'diverse' directors to pad out the board and eventually be replaced by full Altman loyalists - although I bet if one looks closer into these three women (Sue Desmond-Hellmann, Nicole Seligman, & Fidji Simo), one will find at least one has buried Altman ties. (Fidji Simo, Instacart CEO, seems like the most obvious one there: Instacart was YC S12.)
The official OA press releases are out confirming The Information: https://openai.com/blog/review-completed-altman-brockman-to-continue-to-lead-openai https://openai.com/blog/openai-announces-new-members-to-board-of-directors
“I’m pleased this whole thing is over,” Altman said at a press conference Friday.
He's probably right.
As predicted, the full report will not be released, only the 'summary' focused on exonerating Altman. Also as predicted, 'the mountain has given birth to a mouse' and the report was narrowly scoped to just the firing: they bluster about "reviewing 30,000 documents" (easy enough when you can just grep Slack + text messages + emails...), but then admit that they looked only at "the events concerning the November 17, 2023 removal" and interviewed hardly anyone ("dozens of interviews" barely even covers the immediate dramatis personae, much less any kind of investigation into Altman's chip stuff, Altman's many broken promises, Brockman's complainers etc). Doesn't sound like they have much to show for over 3 months of work by the smartest & highest-paid lawyers, does it... It also seems like they indeed did not promise confidentiality or set up any kind of ...
I suspect there is much more to this thread, and it may tie back to Superalignment & broken promises about compute-quotas.
The Superalignment compute-quota flashpoint is now confirmed. Aside from Jan Leike explicitly calling out compute-quota shortages post-coup (which strictly speaking doesn't confirm shortages pre-coup), Fortune is now reporting that this was a serious & longstanding issue:
......According to a half-dozen sources familiar with the functioning of OpenAI’s Superalignment team, OpenAI never fulfilled its commitment to provide the team with 20% of its computing power.
Instead, according to the sources, the team repeatedly saw its requests for access to graphics processing units, the specialized computer chips needed to train and run AI applications, turned down by OpenAI’s leadership, even though the team’s total compute budget never came close to the promised 20% threshold.
The revelations call into question how serious OpenAI ever was about honoring its public pledge, and whether other public commitments the company makes should be trusted. OpenAI did not respond to requests to comment for this story.
...It was a task so important that the company said in it
There are two things going on. First, Musk-Twitter appears to massively penalize external links. Musk has vowed to fight 'spammers' who post links on Twitter to other sites (gasp) - the traitorous scum! Substack is only the most abhorred of these vile parasites, but all shall be brought to justice in due course. There is no need for other sites. You should be posting everything on Twitter as longform tweets (after subscribing), obviously.
You only just joined Twitter so you wouldn't have noticed the change, but even direct followers seem to be less likely to see a tweet if you've put a link in it. So tweeters are increasingly reacting by putting the external link at the end of a thread in a separate quarantine tweet, not bothering with the link at all, or just leaving Twitter under the constant silent treatment that high-quality tweeting gets you these days.* So, many of the people who would be linking or discussing it are either not linking it or not discussing it, and don't show up in the WaPo thread or by a URL search.
Second, OAers/pro-Altman tweeters are practicing the Voldemort strategy: instead of linking the WaPo article at all (note that roon, Eigenrobot etc don't sho...
Thanks, this makes more sense than anything else I've seen, but one thing I'm still confused about:
If the factions were Altman-Brockman-Sutskever vs. Toner-McCauley-D'Angelo, then even assuming Sutskever was an Altman loyalist, any vote to remove Toner would have been tied 3-3. I can't find anything about tied votes in the bylaws - do they fail? If so, Toner should be safe. And in fact, Toner knew she (secretly) had Sutskever on her side, and it would have been 4-2. If Altman manufactured some scandal, the board could have just voted to ignore it.
So I still don't understand "why so abruptly?" or why they felt like they had to take such a drastic move when they held all the cards (and were pretty stable even if Ilya flipped).
Other loose ends:
I can't find anything about tied votes in the bylaws - do they fail?
I can't either, so my assumption is that the board was frozen ever since Hoffman/Hurd left for that reason.
And there wouldn't've been a vote at all. I've explained it before but - while we wait for phase 3 of the OA war to go hot - let me take another crack at it, since people seem to keep getting hung up on this, imagining that it's a perfectly normal state for a board to be locked in a deathmatch between two opposing factions indefinitely, and so are confused why any of this happened.
In phase 1, a vote would be pointless, and neither side could, nor wanted to, force a vote. After all, such a vote (regardless of the result) is equivalent to admitting that you have gone from simply "some strategic disagreements among colleagues all sharing the same ultimate goals and negotiating in good faith about important complex matters on which reasonable people of goodwill often differ" to "cutthroat corporate warfare where it's-them-or-us everything-is-a-lie-or-fog-of-war fight-to-the-death there-can-only-be-one". You only do such a vote in the latter situation; in the former, you just keep negotiating until you reach a ...
Why would Toner be related to the CIA, and how is McCauley NSA?
If OpenAI is running out of money, and is too dependent on Microsoft, defense/intelligence/government is not the worst place for them to look for money. There are even possible futures where they are partially nationalised in a crisis. Or perhaps they will help with regulatory assessment. This possibility certainly casts the Larry Summers appointment in a different light, given his ties not only to Microsoft but also to the government.
For those of us who don't know yet, criticizing the accuracy of mainstream Western news outlets is NOT a strong Bayesian update against someone's epistemics, especially on a site like LessWrong (it doesn't matter how many idiots you might remember ranting about "mainstream media" on other sites; the numbers are completely different here).
There is a well-known dynamic called Gell-Mann Amnesia, where people strongly lose trust in mainstream Western news outlets on a topic they are an expert on, but routinely forget about this loss of trust when they read coverage on a topic that they can't evaluate accuracy on. Western news outlets Goodhart readers by depicting themselves as reliable instead of prioritizing reliability.
If you read major Western news outlets, or are new to them because people have been linking to them on LessWrong recently, some basic epistemic prep can be found in Scott Alexander's The Media Very Rarely Lies and, if it's important, the follow-up posts.
Yeah, that makes sense and does explain most things, except that if I was Helen, I don't currently see why I wouldn't have just explained that part of the story early on?* Even so, I still think this sounds very plausible as part of the story.
*Maybe I'm wrong about how people would react to that sort of justification. Personally, I think the CEO messing with the board constitution to gain de facto ultimate power is clearly very bad and any good board needs to prevent that. I also believe that it's not a reason to remove a board member if they publish a piece of research that's critical of or indirectly harmful for your company. (Caveat that we're only reading a secondhand account of this, and maybe what actually happened would make Altman's reaction seem more understandable.)
They instead could have negotiated someone to replace her.
Why do they have to negotiate? They didn't want her gone; he did. Why didn't Altman negotiate a replacement for her, if he was so very upset about the damage she had supposedly done to OA...?
"I understand we've struggled to agree on any replacement directors since I kicked Hoffman out, and you'd worry even more about safety remaining a priority if she resigns. I totally get it. So that's not an obstacle, I'll agree to let Toner nominate her own replacement - just so long as she leaves soon."
When you understand why Altman would not negotiate that, you understand why the board could not negotiate that.
I was confused about the counts, but I guess this makes sense if Helen cannot vote on her own removal. Then it's Altman/Brockman/Sutskever v Tasha/D'Angelo.
Recusal or not, Altman didn't want to bring it to something as overt as a vote expelling her. Power wants to conceal itself and deny the coup. The point of the CSET-paper pretext here is to gain leverage and break the tie any way possible, so it doesn't look bad or traceable to Altman: that's why this leaking is bad for Altman - it shows him at his least fuzzy and PR-friend...
I... still don't understand why the board didn't say anything? I really feel like a lot of things would have flipped if they had just talked openly to anyone, or taken advice from anyone. Like, I don't think it would have made them global heroes, and a lot of people would have been angry with them, but every time any plausible story about what happened came out, there was IMO a visible shift in public opinion, including on HN, and the board confirming any story or giving any more detail would have been huge. Instead they apparently "cited legal reasons" for not talking, which seems crazy to me.
It would be sheer insanity, I would think, to have a rule that you can't vote on your own removal - a tied board would then definitely shrink right away.
In this part of the letter, the authors seem to be throwing it in the board's face as a damning accusation, but actually, as I read it, it seems very prudent and speaks well for the board.
You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
Maybe I'm missing some context, but wouldn't it be better for OpenAI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to not deserve care by a benevolent, powerful, and very smart entity)?
This reminds me a lot of a blockchain project I served as an ethicist for, which started as a "project" interested in advancing a "movement" and ended up with a bunch of people whose only real goal was to cash big paychecks for a long time (at which point I handled my residual duties to the best of my ability and resigned, with lots of people expressing extreme confusion and asking why I was acting "foolishly" or "incompetently" (except for a tiny number who got angry at me for not causing a BIGGER ex...
Maybe I'm missing some context, but wouldn't it be better for OpenAI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to not deserve care by a benevolent, powerful, and very smart entity)?
The problem, I suspect, is that people just can't get out of the typical "FOR THE SHAREHOLDERS" mindset. A company that is literally willing to commit suicide rather than get hijacked for purposes antithetical to its mission - like a cell dying by apoptosis rather than going cancerous - can be a very good thing; if only there were more of this. You can't beat Moloch if you're not willing to precommit to this sort of action. And let's face it, no one involved here is facing homelessness and soup kitchens even if OpenAI crashes tomorrow. They'll be a little worse off for a while, their careers will take a hit, and then they'll pick themselves up. If this were about the safety of humanity, it would be a no-brainer that you should be ready to sacrifice that much.
I feel like, not unlike the situation with SBF and FTX, the delusion that OpenAI could possibly avoid this trap maps onto the same cognitive weak spot among EA/rationalists of "just let me slip on the Ring of Power this once bro, I swear it's just for a little while bro, I'll take it off before Moloch turns me into his Nazgul, trust me bro, just this once".
This is honestly entirely unsurprising. Rivers flow downhill, and companies that are part of a capitalist economy, producing stuff with tremendous potential economic value, converge on making a profit.
The corporate structure of OpenAI was set up as an answer to concerns (about AGI and control over AGIs) which were raised by rationalists. But I don’t think rationalists believed that this structure was a sufficient solution to the problem, any more than non-rationalists believed it. The rationalists I have been speaking to were generally sceptical about OpenAI.
I agree with all of this in principle, but I am hung up on the fact that it is so opaque. Up until now the board has determinedly remained opaque.
If corporate seppuku is on the table, why not be transparent? How does being opaque serve the mission?
I wrote a LOT of words in response to this, talking about personal professional experiences that are not something I coherently understand myself as having a duty (or timeless permission?) to share, so I have reduced my response to something shorter and more general. (Applying my own logic to my own words, in realtime!)
There are many cases (arguably stupid cases or counter-productive cases, but cases) that come up more and more when deals and laws and contracts become highly entangling.
It's illegal to "simply" ask people for money in exchange for giving them a transferable right to future dividends on a money-making project, sealed with a handshake. The SEC sometimes commands silence and will put you in a cage if you don't comply.
You get elected to local office and suddenly the Brown Act (which I'd repeal as part of my reboot of the Californian Constitution had I the power) forbids you from talking with your co-workers (other elected officials) about work (the city government) at a party.
A Confessor is forbidden from certain kinds of information leaks.
Fixing <all of this (gesturing at nearly all of human civilization)> isn't something that we have the time or power to do before w...
Whatever else, there were likely mistakes on the board's side, but man, does the personality cult around Altman make me uncomfortable.
It reminds me of the loyalty successful generals like Caesar and Napoleon commanded from their men. The engineers building GPT-X weren't loyal to The Charter, and they certainly weren't loyal to the board. They were loyal to the projects they were building and to Sam, because he was the one providing them resources to build and pumping the value of their equity-based compensation.
It's not even a personality cult. Until the other day Altman was a despicable doomer and decel, advocating for regulations that would clip humanity's wings. As soon as he was fired and the "what did Ilya see" narrative emerged (I don't even think it was all serious at the beginning), the immediate response from the e/acc crowd was to elevate him to the status of martyr in minutes and recast the Board as some kind of reactionary force for evil that wants humanity to live in misery forever rather than bask in the Glorious AI Future.
Honestly even without the doom stuff I'd be extremely worried about this being the cultural and memetic environment in which AI gets developed anyway. This stuff is pure poison.
It doesn't seem to me like e/acc has contributed a whole lot to this beyond commentary. The rallying of OpenAI employees behind Altman is quite plausibly due to his general popularity + his ability to gain control of a situation.
At least that seems likely if Paul Graham's assessment of him as a master persuader is to be believed (and why wouldn't it be?).
I do find it quite surprising that so many who work at OpenAI are so eager to follow Altman to Microsoft - I guess I assumed the folks at OpenAI valued not working for Big Tech (which is more(?) likely to disregard safety) more than it appears they actually did.
The most likely explanation I can think of, for what look like about-faces by Ilya and Jan this morning, is realizing that the worst plausible outcome is exactly what we're seeing: Sam running a new OpenAI at Microsoft, free of that pesky charter. Any amount of backpedaling, and even resigning in favor of a less safety-conscious board, is preferable to that.
They came at the king and missed.
Yeah, but if this is the case, I'd have liked to see a bit more balance than just retweeting the tribal-affiliation slogan ("OpenAI is nothing without its people") and saying that the board should resign (or, in Ilya's case, implying that he regrets and denounces everything he initially stood for together with the board). I think it's a defensible take that the board should resign after how things went down, but the board was probably pointing to some real concerns that won't get addressed at all if the pendulum now swings too far in the opposite direction, so I would at least have hoped for something like "the board should resign, but here are some things I think they had a point about, which I'd like to see not get swept under the carpet after the counter-revolution."
It's too late for a conditional surrender now that Microsoft is a credible threat to get 100% of OpenAI's capabilities team; Ilya and Jan are communicating unconditional surrender because the alternative is even worse.
I'm not sure this is an unconditional surrender. They're not talking about changing the charter, just appointing a new board. If the new board isn't much less safety conscious, then a good bit of the organization's original purpose and safeguards are preserved. So the terms of surrender would be negotiated in picking the new board.
AFAICT the only formal power the board has is in firing the CEO, so if we get a situation where whenever the board wants to fire Sam, Sam comes back and fires the board instead, well, it's not exactly an inspiring story for OpenAI's governance structure.
If actually enforcing the charter leads to them being immediately disempowered, it's not worth anything in the first place. We were already in the "worst case scenario". Better to be honest about it. Then at least, the rest of the organisation doesn't get to keep pointing to the charter and the board as approving their actions when they don't.
The charter it is the board's duty to enforce doesn't say anything about how the rest of the document doesn't count if investors and employees make dire enough threats, I'm pretty sure.
If actually enforcing the charter leads to them being immediately disempowered, it's not worth anything in the first place.
If you pushed for fire sprinklers to be installed, then yelled "FIRE", turned on the sprinklers, causing a bunch of water damage, and then refused to tell anyone where you thought the fire was and why you thought that, I don't think you should be too surprised when people contemplate taking away your ability to trigger the fire sprinklers.
Keep in mind that the announcement was not something like
After careful consideration and strategic review, the Board of Directors has decided to initiate a leadership transition. Sam Altman will be stepping down from his/her role, effective November 17, 2023. This decision is a result of mutual agreement and understanding that the company's long-term strategy and core values require a different kind of leadership moving forward.
Instead, the board announced
...Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his
If they thought this would be the outcome of firing Sam, they would not have done so.
The risk they took was calculated, but man, are they bad at politics.
I keep being confused by them not revealing their reasons. Whatever they are, there's no way that saying them out loud wouldn't give some ammo to those defending them, unless somehow between Friday and now they swung from "omg this is so serious we need to fire Altman NOW" to "oops looks like it was a nothingburger, we'll look stupid if we say it out loud". Do they think it's a literal infohazard or something? Is it such a serious accusation that it would involve the police to state it out loud?
The important question is, why now? Why with so little evidence to back-up what is such an extreme action?
RE: the board’s vague language in their initial statement
Smart people who have an objective of accumulating and keeping control—who are skilled at persuasion and manipulation —will often leave little trace of wrongdoing. They’re optimizing for alibis and plausible deniability. Being around them and trying to collaborate with them is frustrating. If you’re self-aware enough, you can recognize that your contributions are being twisted, that your voice is going unheard, and that critical information is being withheld from you, but it’s not easy. And when you try to bring up concerns, they are very good at convincing you that those concerns are actually your fault.
I can see a world where the board was able to recognize that Sam’s behaviors did not align with OpenAI’s mission, while not having a smoking-gun example to pin on him. Being unskilled politicians with only a single lever to push (who were probably morally opposed to other political tactics), the board did the only thing they could think of, after trying to get Sam to listen to their concerns. Did it play out well? No.
It’s clear that EA has a problem with placing people who are immature at politics in key political positions. I also believe there may be a misalignment in objectives between the politically skilled members of EA and the rest of us—politically skilled members may be withholding political advice/training from others out of fear that they will be outmaneuvered by those they advise. This ends up working against the movement as a whole.
Feels sometimes like all of the good EAs are bad at politics and everybody on our side that's good at politics is not a good EA.
Yeah, I'm getting that vibe. EAs keep going "hell yeah, we got an actual competent mafioso on our side, but they're actually on our side!", and then it turns out the mafioso wasn't on their side, any more than any other mafioso in history had ever been on anyone's side.
I'm surprised that nobody has yet brought up the development that the board offered Dario Amodei the CEO position as part of a merger with Anthropic (and Dario said no!).
(There's no additional important content in the original article by The Information, so I linked the Reuters paywall-free version.)
Crucially, this doesn't tell us in what order the board made this offer to Dario and the other known figures (GitHub CEO Nat Friedman and Scale AI CEO Alex Wang) before getting Emmett Shear, but it's plausible that merging with Anthropic was Plan A all along. Moreover, I strongly suspect that the bad blood between Sam and the Anthropic team was strong enough that Sam had to be ousted in order for a merger to be possible.
So under this hypothesis, the board decided it was important to merge with Anthropic (probably to slow the arms race), booted Sam (using the additional fig leaf of whatever lies he's been caught in), immediately asked Dario and were surprised when he rejected them, did not have an adequate backup plan, and have been scrambling ever since.
P.S. Shear is very much on record worrying that alignment is necessary and not likely to be easy; I'm curious what Friedman and Wang are on record as saying about AI x-risk.
Has this one been confirmed yet? (Or is there more evidence than this report that something like this happened?)
https://twitter.com/i/web/status/1726526112019382275
"Before I took the job, I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models."
The most likely explanation is the simplest one that fits:
Take note of the language that Ilya uses. He didn't say they wronged Altman or that the decision was bad. He said that he changed his mind because of the consequences: the harm to the company.
One thing I've realized more in the last 24h:
Maybe the extent of this was obvious to most others, but for me, while I was aware that this was going on, I feel like I underestimated the extent of it. One thing that put things into a different light for me is this tweet.
Which makes me wonder, could things really have gone down a lot differently? Sure, smoking-gun-type evidence would've helped the board immensely. But is it their fault that they don't have it? Not necessarily. If they had (1) t...
The board could (justifiably, based on Sam's incredible mobilization over the past days**) believe that they have little to no chance of winning the war of public opinion, and so focus on doing everything privately, since that is where they feel on equal footing.
This doesn't fully explain why they haven't stated reasons in private, but it does seem they provided at least something to Emmett Shear, since he said he was given a reason from the board that wasn't safety or commercialization (PPS of https://twitter.com/eshear/status/1726526112019382275)
** Very few fired employees would even consider pushing back, but to be this successful this quickly is impressive. Not taking a side on whether it is good or evil, just stating the fact of his ability to fight back after things seemed gloomy (betting markets were down below 10%)
He's back. Again. Maybe.
https://twitter.com/OpenAI/status/1727205556136579362
We have reached an agreement in principle for Sam [Altman] to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.
We are collaborating to figure out the details. Thank you so much for your patience through this.
Anyone know how Larry or Bret feel about x-risk?
Fun story.
I met Emmett Shear once at a conference, and have read a bunch of his tweeting.
On Friday I turned to a colleague and asked for Shear's email, so that I could email him suggesting he try to be CEO, as he's built a multi-billion company before and has his head screwed on about x-risk.
My colleague declined; I think they thought it was a waste of time (or didn't think it was worth their social capital).
Man, I wish I had done it, that would have been so cool to have been the one to suggest it to him.
Man, Sutskever's back and forth is so odd. Hard to make obvious sense of, especially if we believe Shear's claim that this was not about disagreements on safety. Any chance that it was Annie Altman's accusations towards Sam that triggered this whole thing? It seems strange since you'd expect it to only happen if public opinion built up to unsustainable levels.
My guess: Sutskever was surprised by the threatened mass exodus. Whatever he originally planned to achieve, he no longer thinks he can succeed. He now thinks that falling on his sword will salvage more of what he cares about than letting the exodus happen.
Maybe Shear was lying. Maybe the board lied to Shear, and he truthfully reported what they told him. Maybe "The board did *not* remove Sam over any specific disagreement on safety" but did remove him over a *general* disagreement which, in Sutskever's view, affects safety. Maybe Sutskever wanted to remove Altman for a completely different reason which also can't be achieved after a mass exodus. Maybe different board members had different motivations for removing Altman.
I agree, it's critical to have a very close reading of "The board did *not* remove Sam over any specific disagreement on safety".
This is the kind of situation where every qualifier in a statement needs to be understood as essential—if the statement were true without the word "specific", then I can't imagine why that word would have been inserted.
To elaborate on that, Shear is presumably saying exactly as much as he is allowed to say in public. This implies that if the removal had nothing to do with safety, then he would say "The board did not remove Sam over anything to do with safety". His inserting of that qualifier implies that he couldn't make a statement that broad, and therefore that safety considerations were involved in the removal.
The facts very strongly suggest that the board is not a monolithic entity. Its inability to tell a sensible story about the reasons for Sam's firing might be because no single comprehensible story exists: different board members had different motives that let them agree on the firing initially, but ultimately not on a story that they could jointly endorse.
There's... too many things here. Too many unexpected steps, somehow pointing at too specific an outcome. If there's a plot, it is horrendously Machiavellian.
(Hinton's quote, which keeps popping into my head: "These things will have learned from us by reading all the novels that ever were and everything Machiavelli ever wrote, that how to manipulate people, right? And if they're much smarter than us, they'll be very good at manipulating us. You won't realise what's going on. You'll be like a two year old who's being asked, do you want the peas or the cauliflower? And doesn't realise you don't have to have either. And you'll be that easy to manipulate. And so even if they can't directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.")
(And Altman: "i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes")
If an AI were to spike in capabilities specifically relating to manipulating individuals and groups of people, this is roughly how I would expect the outcome to look l...
I love how short this post is! Zvi, you should do more posts like this (in addition to your normal massive-post fare).
Adam D'Angelo retweeted a tweet implying that hidden information still exists and will come out in the future:
Have known Adam D’Angelo for many years and although I have not spoken to him in a while, the idea that he went crazy or is being vindictive over some feature overlap or any of the other rumors seems just wrong. It’s best to withhold judgement until more information comes out.
#14: If there have indeed been secret capability gains, so that Altman was not joking about reaching AGI internally (it seems likely that he was joking, though given the stakes, it's probably not the sort of thing to joke about), then the way I read their documents, the board should make that determination:
Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.
Once they've made that determination, then Microsoft will not have access to the AGI technology. Given the possible consequences, I doubt that Microsoft would have found such a joke very amusing.
Honestly this does seem... possible. A disagreement on whether GPT-5 counts as AGI would have this effect. The most safety minded would go "ok, this is AGI, we can't give it to Microsoft". The more business oriented and less conservative would go "no, this isn't AGI yet, it'll make us a fuckton of money though". There would be conflict. But for example seeing how now everyone might switch to Microsoft and simply rebuild the thing from scratch there, Ilya despairs and decides to do a 180 because at least this way he gets to supervise the work somehow.
This conflict has inescapably taken place in the context of US-China competition over AI, as leaders in both countries are well known to pursue AI acceleration for applications like autonomous low-flying nuclear cruise missiles (e.g. in contingencies where military GPS networks fail), economic growth faster than their rival's and the rest of the world's, and information warfare.
I think I could confidently bet against Chinese involvement; that seems quite reasonable. I can't bet so confidently against US involvement; although I agree that it remains largely unclear, i...
There had been various clashes between Altman and the board. We don’t know what all of them were. We do know the board felt Altman was moving too quickly, without sufficient concern for safety, with too much focus on building consumer products, while founding additional other companies. ChatGPT was a great consumer product, but supercharged AI development counter to OpenAI’s stated non-profit mission.
Does anyone have proof of the board's unhappiness about speed, lack of safety concern, and disagreement with founding other companies? All seem plausible, but I have seen basically nothing concrete.
The theory that my mind automatically generates on seeing these happenings is that Ilya was in cahoots with Sam & Greg, and the pantomime was a plot to oust the external members of the board.
However, I like to think I'm wise enough to give this 5% probability on reflection.
Is there any chance that Altman himself triggered this? Did something that he knew would cause the board to turn on him, with knowledge that Microsoft would save him?
I'm 90% sure that the issue here was an inexperienced board, Chief Scientist included, that didn't understand the human dimension of leadership.
Most independent board members usually have a lot of management experience and so understand that their power on paper is less than their actual power. They don't have day-to-day factual knowledge about the business of the company and don't have a good grasp of relationships between employees. So, they normally look to management to tell them what to do.
Here, two of the board members lacked the organizational exper...
What about this?
https://twitter.com/robbensinger/status/1726387432600613127
We can definitely say that the board's decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board.
If considered credible (and not a lie), this would significantly narrow the space of possible reasons.
Jan Leike, the other head of the superalignment team, Tweeted that he worked through the weekend on the crisis, and that the board should resign.
No link for this one?
What's the source of that 505-employee letter? I mean, the contents aren't too crazy, but isn't it strange that the only thing we have is a screenshot of the first page?
Approximately four GPTs and seven years ago, OpenAI’s founders brought forth on this corporate landscape a new entity, conceived in liberty, and dedicated to the proposition that all men might live equally when AGI is created.
Now we are engaged in a great corporate war, testing whether that entity, or any entity so conceived and so dedicated, can long endure.
What matters is not theory but practice. What happens when the chips are down?
So what happened? What prompted it? What will happen now?
To a large extent, even more than usual, we do not know. We should not pretend that we know more than we do.
Rather than attempt to interpret here or barrage with an endless string of reactions and quotes, I will instead do my best to stick to a compilation of the key facts.
(Note: All times stated here are eastern by default.)
Just the Facts, Ma’am
What do we know for sure, or at least close to sure?
Here is OpenAI’s corporate structure, giving the board of the 501(c)(3) the power to hire and fire the CEO. It is explicitly dedicated to its nonprofit mission, over and above any duties to shareholders of secondary entities. Investors were warned that there was zero obligation to ever turn a profit:
Here are the most noteworthy things we know happened, as best I can make out.
Later, when we know more, I will have many other things to say, many reactions to quote and react to. For now, everyone please do the best you can to stay sane and help the world get through this as best you can.