I have no data on the OpenAI situation, but #8 has crossed my mind. (It reminded me of the communist elections where the Party got 99% approval.) If Sam Altman returns -- and if he is the kind of person some people describe him as -- you do not want to be one of the few who didn't sign the public letter calling for his return. That would be like putting your name on a public short list of people who don't like the boss.
Of course, #5 is also likely. But notice that the entire point of having the board was to prevent the #5 reasoning from ruling the company. Which means that ~all OpenAI employees oppose the OpenAI Charter. Which means that Sam Altman won the revolution (by strategically employing/keeping the kind of people who oppose the company Charter) long before the board even noticed that it had started.
(I find it amusing that the document that people in communist Czechoslovakia were afraid not to sign publicly, lest they lose their jobs, was called... Anticharter.)
> Which means that ~all OpenAI employees oppose the OpenAI Charter.
It was striking to see how many commenters and OA employees were quoting Toner quoting the OA Charter (which Sam Altman helped write & signed off on) as proof that she was an unhinged mindless zealot and that every negative accusation against the board was true.
It would be like the supermajority of Americans having never heard of the First Amendment and, on hearing a presidential candidate say "the government should not abridge freedom of speech or the press", all start railing about how 'this is some libertarian moonbat trying to entryist the US government to impose their unprecedentedly extreme ideology about personal freedom, and obviously, totally unacceptable and unelectable. Not abridge speech?! When people abuse their freedom to say so many terrible things, sometimes even criticizing the government? You gotta be kidding - freedom of speech doesn't mean freedom from consequences, like being punished by laws!'
Hard not to see the OA LLC as too fundamentally unaligned with the mission at that point. It seems like at some point, possibly years ago, OA LLC became basically a place that didn't believe in the mission...
Citing a relevant part of the Lex Fridman interview (transcript) which people will probably find helpful to watch, so you can at least eyeball Altman's facial expressions:
...LEX FRIDMAN: How do you hire? How do you hire great teams? The folks I’ve interacted with, some of the most amazing folks I’ve ever met.
SAM ALTMAN: It takes a lot of time. I mean, I think a lot of people claim to spend a third of their time hiring. I for real truly do. I still approve every single hire at OpenAI. And I think we’re working on a problem that is like very cool and that great...
Disclaimer: I do not work at OpenAI and have no inside knowledge of the situation.
I work in the finance industry. (Personal views are not those of my employer, etc, etc).
Some years ago, a few people from my team (2 on a team of ~7) were laid off as part of firm staff reductions.
My boss and my boss's boss held a meeting with the rest of the team on the day those people left, explaining what had happened, reassuring us that no further layoffs were planned, describing who would be taking over what parts of the responsibilities of the laid-off people, etc.
On my understanding of employment, this was just...sort of...the basic standard of professionalism and courtesy?
If I had found out about layoffs at my firm through media coverage, or when I tried to email a coworker and their email no longer worked, I would be unhappy. If the only communication I got from above about reasons for the layoffs was that destroying the company 'would be consistent with the mission', I would be very unhappy. In any of those cases, I would strongly consider looking for jobs elsewhere.
It has sometimes seemed to me that the EA/nonprofit space does not follow the rules I am familiar with for the employer/employee relationship. Perhaps my experience in the famously kindly and generous finance industry has not prepared me for the cutthroat reality of nonprofit altruist organizations.
Nevertheless, any OpenAI employee with views similar to my own would be concerned and plausibly looking for a new job after the board fired the CEO with no justification or communication. If you want a one-sentence summary of the thought process, it could be:
'If this is how they treat the CEO, how will they treat me?'
> 'If this is how they treat the CEO, how will they treat me?'
You just explained why it's totally disanalogous. An ordinary employee is not a CEO {{citation needed}}.
I laughed out loud at this line...
> Perhaps my experience in the famously kindly and generous finance industry has not prepared me for the cutthroat reality of nonprofit altruist organizations.
...and then I wondered if you've seen Margin Call? It is truly a work of art.
My experiences are mostly in startups, but rarely on the actual founding team, so I have seen more stuff that was unbuffered by kind, diligent, "clueless" bosses.
My general impression is that "systems and processes" go a long way toward creating smooth rides for the people at the bottom, but tho...
More or less all of it, I think.
So there you have it: a relatively good boss is ousted by the board you know nothing about for unclear reasons, people close to the epicenter are running around telling you how it's all going to implode now and how we have this costless way to maybe avert it, they're being really pushy about it, it's all very confusing and scary, more and more of the people around you are signing the letter, there's an increasing atmosphere that signing it is just what an OpenAI employee does – would you really not sign?
Which isn't to say it wasn't an impressive accomplishment. The level of coordination required to pull this off was doubtlessly high, it would've required handling all of the aforementioned covert messaging about carrots-and-sticks with a minimal degree of competence, it required the foundation of Sam Altman establishing himself as a good leader, etc.
But I'm wholly unsurprised it worked.
It seemed like a classic case of prisoner's dilemma, so (5) and (7). The more of your company that signs the petition, the lower the value of your PPUs, making it more attractive to sign. It reached a point where they felt OpenAI's value and their PPUs would go to nothing if a critical mass joined Microsoft. In fact, if MS was willing to match compensation, everyone "cooperating" by not signing the petition would be a worse outcome for everyone than just joining MS, because they had already seen other players move first (Altman, Brockman, other resignations) - that is, if we look purely at compensation (not even taking into account the possibility that the PPU-equivalent at MS would not be profit-capped). In a textbook prisoner's dilemma, cooperation leads to the best overall outcome for everyone, yet the best move is to defect if you are unable to coordinate - which is not really the case here.
Further, even if an OAI employee did not care about PPUs at all, and all they cared about was the non-profit mission of AI for the betterment of all humanity, they might have felt there was a greater likelihood of achieving that mission at Microsoft than at the empty shell of OAI (the safety teams, for example - might as well do your best to help safety at the new "leading" organisation, and get paid too).
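The cascade described above can be sketched as a toy payoff model (all numbers are made up for illustration; `msft_match` is a hypothetical parameter standing in for a matched Microsoft offer):

```python
# Illustrative toy model (invented numbers) of the signing cascade:
# each employee's payoff as a function of how many colleagues sign.

def ppu_value(frac_signed: float) -> float:
    """Assumed: OpenAI equity value collapses as more staff commit to leave."""
    return 1.0 - frac_signed  # normalized PPU value, 1.0 = full $86B valuation

def payoff(signs: bool, frac_signed: float, msft_match: float = 0.9) -> float:
    """Payoff of one employee given the fraction of colleagues signing."""
    if signs:
        # A signer holds the Microsoft offer as a floor, and still keeps
        # the PPU upside if OpenAI somehow recovers.
        return max(msft_match, ppu_value(frac_signed))
    # A non-signer only holds PPUs, which sink with each extra signature.
    return ppu_value(frac_signed)

# Under these assumptions, signing weakly dominates at every participation level,
# and strictly dominates once enough others have signed:
for f in (0.1, 0.5, 0.9):
    assert payoff(True, f) >= payoff(False, f)
```

Under these (made-up) parameters the game stops being a true prisoner's dilemma once first movers defect: signing is individually best regardless of what everyone else does, which matches the comment's point that "cooperating" by not signing was worse for everyone.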
Not sure if this page is broken or I'm technically inept, but I can't figure out how to reply to qualiia's comment directly:
My gut reaction was primarily #5 and #7, but qualiia's post articulates the rationale better than I could.
One useful piece of information that would influence my weights: what were OAI's general hiring criteria? If they sought solely the "best and brightest" on technical skills and enticed talent primarily with premiere pay packages, I'd lean #5 harder. If they sought cultural/mission fits in some meaningful way I might update lower on #...
Suppose you're an engineer at SpaceX. You've always loved rockets, and Elon Musk seems like the guy who's getting them built. You go to work on Saturdays, you sometimes spend ten hours at the office, you watch the rockets take off and you watch the rockets land intact and that makes everything worth it.
Now imagine that Musk gets in trouble with the government. Let's say the Securities and Exchange Commission charges him with fraud again, and this time they're *really* going after him, not just letting him go with a slap on the wrist like the first time. SpaceX's board of directors negotiates with SEC prosecutors. When they emerge they fire Musk from SpaceX, and remove Elon and Kimbal Musk from the board. They appoint Gwynne Shotwell as the new CEO.
You're pretty worried! You like Shotwell, sure, but Musk's charisma and his intangible magic have been very important to the company's success so far. You're not sure what will happen to the company without him. Will you still be making revolutionary new rockets in five years, or will the company regress to the mean like Boeing? You talk to some colleagues, and they're afraid and angry. No one knows what's happening. Alice says that the company would be nothing without Musk and rails at the board for betraying him. Bob says the government has been going after Musk on trumped-up charges for a while, and now they finally got him. Rumor has it that Musk is planning to start a new rocket company.
Then Shotwell resigns in protest. She signs an open letter calling for Musk's reinstatement and the resignation of the board. Board member Luke Nosek signs it too, and says his earlier vote to fire Musk was a huge mistake.
You get a Slack message from Alice saying that she's signed the letter because she has faith in Musk and wants to work at his company, whichever company that is, in order to make humanity a multiplanetary species. She asks if you want to sign.
How do you feel?
Replying to David Hornbein.
Thank you for this comment, this was basically my view as well. I think the employees of OpenAI are simply excited about AGI, have committed their lives to making it a reality through long hours of work, and believe AGI would be good for humanity and also good for them personally. My view is that they are very emotionally invested in building AGI, and stopping all that progress for reasons that feel speculative, theoretical, and not very tangible feels painful.
Not that I would agree with that, assuming this is correct.
>Now imagine that Musk gets in trouble with the government
Now imagine the same scenario, but Elon has not gotten in trouble with the government, and multiple people (including those who fired him) have affirmed he did nothing wrong.
I have no inside information. My guess is #5 with a side of 1, 6, and "the letter wasn't legally binding anyway so who cares."
I think that the lesson here is that if your company says "Work here for the principles in this charter. We also pay a shitload of money" then you are going to get a lot of employees who like getting paid a shitload of money regardless of the charter, because those are much more common in the population than people who believe the principles in the charter and don't care about money.
These 3 items seem like they would be sufficient to cause something like the Open Letter to happen.
In most cases number 3 is not present which I think is why we don't see things like this happen more often in more organisations.
None of this requires Sam to be hugely likeable or a particularly savvy political operator, just that people generally like him. People seem to suggest he was one or both so this just makes the letter more likely.
I'm sure this doesn't explain it all in OpenAI's case - some/many employees would also have been worried about AI safety which complicates the decision - but I suspect it is the underlying story.
I think #5+#6. The people with the most stock tend to be the bosses of the others — the "social pressure" of your boss telling you to sign right now is quite persuasive.
#5 was quite concrete and short term: there was a deal with Thrive where employees were about to be able to sell their stock at an 86B valuation, and that wasn't going to go through with a new company direction.
I’m confused why the board didn’t just wait a few weeks and announce it after the sale. Seems like a huge blunder unless they were that pressed for time.
> unless they were that pressed for time.
They were because they had an extremely fragile coalition and only a brief window of opportunity.
They certainly did not have the power to tell Altman they were going to fire him in several weeks and expect that to stick. None of them, Sutskever included, have ever struck me as that suicidally naive. And it looks like they had good reason to expect that they had little time given the Slack comments Sutskever saw.
Also, remember that Altman has many, many options available to him. Since people seem to think that the board could've just dicked around and had the luxury of waiting a long time, I will highlight one specific tactic that the board should have been very worried about, which possibility did not permit any warning or hint to Altman, and which required moving as fast as possible once reality sank in & they decided to not cede control over OA to Altman: (WSJ)
> Some OpenAI executives told her [Helen Toner] that everything relating to their company makes its way into the press.
That is, Altman (or those execs) had the ability to deniably manufacture a Toner scandal at any second by calling up a friendly reporter at, say, The Information, to highlight the (public) paper, which about an hour later (depending on local Pacific Time), would then 'prove' him right about it and provide grounds for an emergency board meeting that day to vote on expelling Toner if she was too stubborn to 'resign'. After which, of course, they would need to immediately vote on new board members to fill out a far-too-small board with Toner gone, whether or not that had been on the official agenda, and this new board would, of course, have to approve of any prior major decisions like 'firing the CEO'. Now, Altman hadn't done this because Altman didn't want the cost of a public scandal, however much of a tempest-in-a-teapot-nothingburger it would be, he was very busy with other things which seemed higher priority and had been neglecting the board, and he didn't think he needed to pay that cost to get Toner off the board. But if he suddenly needed Toner off the board fast as his #1 priority...
The board did not have 'a few weeks'. (After all, once that complex and overwhelmingly important sale was wrapped up... Altman would be less busy and turning his attention to wrapping up other unfinished business he'd neglected.) They did not have days. For all they knew, they could even have had negative hours if Altman had gotten impatient & leaked an hour ago & the scandal had started while they were still discussing what to do. Regardless of whether Toner realized the implied threat at the time (she may have but been unable to do anything about it), once they had Sutskever, they needed to move as fast as possible.
Even if they had decided to take the risk of delay, the only point would have been to do something that would not alert Altman at all, which would be... what, exactly? What sort of meaningful preparation demanded by the board's critics could have been done under those constraints? (Giving Satya Nadella a heads-up? Altman would know within 10 minutes. Trying to recruit Brockman to stay on? 1 minute.)
So, they decided quickly to remove Altman and gave him roughly the minimum notice required by the bylaws of 48h*, without being able to do much besides talk to their lawyers and write the press release - and here we are.
* you may be tempted to reply 'then Altman couldn't've kicked Toner out that fast because he'd need that 48h notice too'; you are very clever, but note that the next section says they can all waive that required notice at the tap of a button, and if he called an 'emergency meeting' & they still believed in him, then they of course would do so - refusing to do so & insisting on 48h amounts to telling him that the jig is up. Whereas them sending him notice for an 'ordinary' meeting in 48h is completely normal and not suspicious, and he had no clue.
For one thing, this wouldn't be very kind to the investors.
For another, maybe there were some machinations involving the round like forcing the board to install another member or two, which would allow Sam to push out Helen + others?
I also wonder if the board signed some kind of NDA in connection with this fundraising that is responsible in part for their silence. If so this was very well schemed...
This is all to say that I think the timing of the fundraising is probably very relevant to why they fired Sam "abruptly".
It's a mixture of reasons...
But, first of all, a lot of people (not just people in OpenAI) love Sam on the personal level, that's very clear, and they love both what he is doing (with OpenAI, with Helion, with Retro Biosciences), and how he is presenting himself, what he is saying, his demeanor, and so on.
The next key factor was that any outcome besides Sam's return would have damaged the company and the situation a lot at the worst possible moment, when the company had a clear lead, was riding a huge wave of success, had the absolute best models, and so on. They all understood how crucial Sam's role was in all that, and how crucial it would be in the future too. So they were making the strongest possible play to prevent any outcome besides Sam's return. They expected to win, they were playing to maximize the chances of winning, and they did not expect to lose and then have to decide if they really wanted to join MSFT (both having to join MSFT and having to stay in the semi-destroyed OpenAI would be bad compared to what they had).
But out of the factors listed, 1+2+3+(4 for many of them, not for all)+5+(6 for some of them)+(7, not so much being afraid of "imploding", but more afraid of becoming a usual miserable corporate place, where one drags oneself to work instead of enjoying one's work)
> Following those demands would've put the entire organization under the control of 1 person with no accountability to anyone. That doesn't seem like what OpenAI employees wanted to be the case
The alternative looked like the outright destruction of the company: "We are unable to work for or with people that lack competence, judgement and care for our mission and employees."
Recently, OpenAI employees signed an open letter demanding that the board reinstate Sam Altman, add other board members (giving some names of people allied with Altman), and resign, or else they would quit and follow Altman to Microsoft.
Following those demands would've put the entire organization under the control of 1 person with no accountability to anyone. That doesn't seem like what OpenAI employees wanted to be the case, unless they're dumber than I thought. So, why did they sign? Here are some possible reasons that come to mind:
Which of those reasons do you think drove people signing that letter, and why do you think so?