Recently, OpenAI employees signed an open letter demanding that the board reinstate Sam Altman, add new board members (naming several people allied with Altman), and then resign, or else they would quit and follow Altman to Microsoft.
Following those demands would have put the entire organization under the control of one person, with no accountability to anyone. That doesn't seem like an outcome OpenAI employees would actually want, unless they're dumber than I thought. So, why did they sign? Here are some possible reasons that come to mind:
1. Altman is just really likeable for people like them - they just like him.
2. They felt a sense of injustice and outrage over the CEO being fired that they'd never felt over lower-level employees being fired.
3. They were hired or otherwise rewarded by Altman and were thus loyal to him personally.
4. They believed Altman was more ideologically aligned with them than any likely replacement CEO (including Emmett Shear) would be.
5. They felt their profit shares would be worth more with Altman leading the company.
6. They were socially pressured by people with strong views from (3), (4), or (5).
7. They were afraid the company would implode and they'd lose their jobs, wanted the option of getting hired into a new group at Microsoft, and judged the risk of signing to be low once enough other people had already signed.
8. They were afraid Altman would return as CEO and fire or otherwise punish them if they hadn't signed.
9. Something else?
Which of those reasons do you think drove people to sign that letter, and why do you think so?
Here is a relevant part of the Lex Fridman interview (transcript), which people will probably find helpful to watch, so you can at least eyeball Altman's facial expressions:
I think it's also important to apply three-body-problem thinking to this situation: it's possible that Microsoft or some other third party gradually but successfully orchestrated distrust and conflict between two good-guy factions, or acquired access to the minds/culture of OpenAI employees. In that case, it's critical for the surviving good guys to mitigate the damage and maximize robustness against third parties in the future.
For example, perhaps Altman was misled into believing that the board was probably compromised and he had to throw everything at them, while the board was misled into believing that Altman was hopelessly compromised and they had to throw everything at him (or maybe one of them actually was compromised). I actually wrote about that scenario 5 days before the OpenAI conflict started. I'd call that a fun fact rather than a suspicious coincidence - things are moving faster now, and 5 days in 2023 is like 30 days in 2019 time.