There's apparently been a lot of EA-hate on twitter as a result. I personally expect this to matter very little, if at all, in the long run, but I'd expect it to be extremely disproportionately salient to rationalists/EAs/alignment folk.
I think this matters insofar as thousands of tech people just got radicalized into the "accelerationist" tribe, and have a narrative of "EAs destroy value in their arrogance and delusion." Whole swaths of Silicon Valley are now primed to roll their eyes at and reject any technical, governance, or policy proposals justified by "AI safety."
See this Balaji thread for instance, which notably is close to literally correct (like, yeah, I would endorse "anything that reduces P(doom) is good"), but slips in the presumption that "doomers" are mistaken about which things reduce P(doom) / delusional about there being doom at all. Plus a good dose of attack-via-memetic-association (comparing "doomers" to the Unabomber). This is basically just tribalism.
I don't know if this was inevitable in the long run. It seems like if the firing of Sam Altman had been handled better, if they had navigated the politics so that Sam "left to spend more time with his family" or whatever is typical when you oust a CEO, we could have avoided this impact.
There are what get called "stand by the Levers of Power" strategies, things like getting into positions within companies and governments that let you push for better AI outcomes. I don't know if they're good, but I do think SBF might have made them a lot harder.
I think this is an important point: one idea that is very easy to take away from the FTX and OpenAI situations is something like
People associated with EA are likely to decide at some point that the normal rules for the organization do not apply to them, if they expect that they can generate a large enough positive impact in the world by disregarding those rules. Any agreement you make with an EA-associated person should be assumed to have an "unless I think the world would be better if I broke this agreement" rider (in addition to the usual "unless I stand to personally gain a lot by breaking this agreement" rider that people already expect and have developed mitigations for).
Basically, I expect that the strategy of "attempt to get near the levers of power in order to be able to execute weird plans where, if the people in charge of the decision about whether to let you near the levers of power knew about your pl...
People associated with EA are likely to decide at some point that the normal rules for the organization do not apply to them, if they expect that they can generate a large enough positive impact in the world by disregarding those rules.
I am myself consequentialist at my core, but invoking consequentialism to justify breaking commitments, non-cooperation, theft, or whatever else is just a stupid, bad policy (the notion of people doing this generates some strong emotions for me); as a policy/algorithm, it won't result in accomplishing one's consequentialist goals.
I fear what you say is not wholly inaccurate and is true of at least some in EA, though I hope it's not true of many.
Where it does get tricky is potential unilateral pivotal acts, which I think go in this direction but also feel different from what you describe.
A few others have commented about how MSFT doesn't necessarily stifle innovation, and a relevant point here is that MSFT is generally pretty good at letting its subsidiaries do their own thing and have their own culture. In particular GitHub (where I work), still uses Google Workspace for docs/email, slack+zoom for communication, etc. GH is very much remote-first whereas that's more of an exception at MSFT, and GH has a lot less suffocating bureaucracy, and so on. Over the years since the acquisition this has shifted to some extent, and my team (Copilot) is more exposed to MSFT than most, but we still get to do our own thing and at worst have to jump through some hoops for compute resources. I suspect if OAI folks come under the MSFT umbrella it'll be as this sort of subsidiary with almost complete ability to retain whatever aspects of its previous culture that it wants.
Standard disclaimer: my opinions are my own, not my employer's, etc.
For me the crux is the influence of these events on Sutskever ending up sufficiently in charge of a leading AGI project. It appeared borderline true before; it would've become even more true than that if Altman's firing had stuck without disrupting OpenAI overall; and right now with the strike/ultimatum letter it seems less likely than ever (whether he stays in an Altman org or goes elsewhere).
(It's ambiguous if Anthropic is at all behind, and then there's DeepMind that's already in the belly of Big Tech, so I don't see how timelines noticeably change.)
The main reason I think a split OpenAI means shortened timelines is that the main bottleneck to capabilities right now is insight/technical knowledge. Quibbles aside, basically any company with enough cash can get sufficient compute. Even with other big players and thousands/millions of open source devs trying to do better, to my knowledge GPT-4 is still the best, implying some moderate to significant insight lead. I worry that by fracturing OpenAI, more people will have access to those insights, which 1) significantly increases the surface area of people working on the frontiers of insight/capabilities, 2) burns the lead time OpenAI had, which might otherwise have been used to pay off some alignment tax, and 3) risks the insights ending up at a less scrupulous (wrt alignment) company.
A potential counter to (1): OpenAI's success could be dependent on having all (or some key subset) of their people centralized and collaborating.
Counter-counter: OpenAI staff, especially the core engineering talent but it seems the entire company at this point, clearly wants to mostly stick together, whether at the official OpenAI, Microsoft, or with any other independent solution. So them moving...
GPT-4 is the model that has been trained with the most training compute, which suggests that compute is the most important factor for capabilities. If that weren't true, we would see some other company training models with more compute but worse performance, which doesn't seem to be happening.
Falcon-180b illustrates how throwing compute at an LLM can result in unusually poor capabilities. Epoch's estimate puts it close to Claude 2 in compute, yet it's nowhere near as good. Then there's the even more expensive PaLM 2, though since weights are not published, it's possible that unlike with Falcon the issue is that only smaller, overly quantized, or incompetently tuned models are being served.
FTR I am not spending much time calculating the positive or negative direct effect of this firing. I am currently pretty concerned about whether it was done honorably and ethically or not. It looks not to me, and so I oppose it regardless of the sign of the effect.
It entirely depends on the reasoning.
Quick possible examples:
Not telling the staff why was extremely disrespectful, and not highlighting it to him ahead of time is also uncooperative.
Maybe, yeah. Definitely strongly agree that not telling the staff a more complete story seems bad for both intrinsic and instrumental reasons.
I'm a bit unsure how wise it would be to tip Altman off in advance given what we've seen he can mobilize in support of himself.
And I think it's a thing that only EAs would think up that it's valuable to be cooperative towards people who you're convinced are deceptive/lack integrity. [Edit: You totally misunderstood what I meant here; I was criticizing them for doing this too naively. I was not praising the norms of my in-group. Your reply actually confused me so much that I thought you were being snarky in some really strange way.] Of course, they have to consider all the instrumental reasons for it, such as how it'll reflect on them if others don't share their assessment of the CEO lacking integrity.
Minor point: the Naskapi hunters didn't actually do that. That was speculation which was never verified, runs counter to a lot of facts, and in fact, may not have been about aboriginal hunters at all but actually inspired by the author's then-highly-classified experiences in submarine warfare in WWII in the Battle of the Atlantic. (If you ever thought to yourself, 'wow, that Eskimo story sounds like an amazingly clear example of mixed-strategies from game theory'...) See some anthropologist criticism & my commentary on the WWII part at https://gwern.net/doc/sociology/index#vollweiler-sanchez-1983-section
I think this misses one of the main outcomes I'm worried about, which is if Sam comes back as CEO and the board is replaced by less safety-motivated people. This currently seems likely (Manifold at 75% Sam returning, at time of posting).
You could see this as evidence that the board never had much power, and so them leaving doesn't actually change anything. But it seems like they (probably) made a bunch of errors, and if they hadn't then they would have retained influence to use to steer the org in a good direction.
(It is also still super unclear wtf is going on, maybe the board acted in a reasonable way, and can't say for legal (??) reasons.)
You could see this as evidence that the board never had much power, and so them leaving doesn't actually change anything.
In the world where Sam Altman comes back as CEO and the board is replaced by less safety-motivated people (which I do not currently expect on an inside view), that would indeed be my interpretation of events.
In the poll most people (31) disagreed with the claim John is defending here, but I'm tagging the additional few (3) who agreed with it @Charlie Steiner @Oliver Sourbut @Thane Ruthenis
Interested to hear your guys' reasons, in addition to John's above!
Quick dump.
Impressions
Assumptions
Other
Hard to say what the effect is on Sam's personal brand; lots still to cash out, I expect. It could enhance his charisma, or he might have spent something which is hard to get back.
I think my model of cycles of power expansion and consolidation is applicable here:
When you try to get something unusual done, you "stake" some amount of your political capital on this. If you win, you "expand" the horizon of the socially acceptable actions available to you. You start being viewed as someone who can get away with doing things like that, you get an in with more powerful people, people are more tolerant of you engaging in more disruptive action.
But if you try to immediately go for the next, even bigger move, you'll probably fail. You need buy-in from other powerful actors, some of whom have probably only now become willing to listen to you and entertain your more extreme ideas. You engage in politicking with them, arguing with them, feeding them ideas, establishing your increased influence and stacking the deck in your favor. You consolidate your power.
I'd model what happened as Sam successfully expanding his power. He's staked some amount of his political capital on the counter-revolution,...
And on top of that, my not-very-informed-impression-from-a-distance is that [Sam]'s more a smile-and-rub-elbows guy than an actual technical manager
I agree, but I'm not sure that's insufficient to carve out a productive niche at Microsoft. He appears to be a good negotiator, so if he goes all-in spending his political capital to ensure his subsidiary isn't crippled by bureaucracy, he has a good chance of achieving it.
The questions are (1) whether he'd realize he needs to do that, and (2) whether he'd care to do that, versus just negotiating for more personal power and trying to climb to Microsoft CEO or whatever.
I mean, I don't really care how much e.g. Facebook AI thinks they're racing right now. They're not in the game at this point.
The race dynamics are not just about who's leading. FB is 1-2 years behind (looking at LLM metrics), and it doesn't seem like they're getting further behind OpenAI/Anthropic with each generation, so I expect that the lag at the end will be at most a few years.
That means that if Facebook is unconstrained, the leading labs have only that much time to slow down for safety (or prepare a pivotal act) as they approach AGI before Facebook gets there with total recklessness.
If Microsoft!OpenAI lags the new leaders by less than FB (and I think that's likely to be the case), that shortens the safety window further.
I suspect my actual crux with you is your belief (correct me if I'm misinterpreting you) that your research program will solve alignment and that it will not take much of a safety window for the leading lab to incorporate the solution, and therefore the only thing that matters is finishing the solution and getting the leading lab on board. It would be very nice if you were right, but I put a low probability on it.
I was going to write stuff about integrity, and there's stuff to that, but the thing that is striking me most right now is that the whole effort seemed very incompetent and naive. And that's upsetting.
I am now feeling uncertain about the incompetence and naivety of it. Whether this was the best move possible that failed to work out, or the best move possible that actually did get a good outcome, or a total blunder, is determined by info I don't have.
I have some feeling of they were playing against a higher-level political player which both makes it hard but als...
The initial naive blunder was putting Sam Altman in the CEO position to begin with. It seems like it was predictable-in-advance (from e.g. Paul Graham's comments from years and years ago) that he's not the sort of person to accept being fired, rather than mounting a realpolitik-based counteroffensive, and that he would be really good at the counteroffensive. Deciding to hire him essentially predestined everything that just happened; it was inviting the fox into the henhouse. OpenAI governance controls might have worked if the person subjected to them was not specifically the sort of person Sam is.
How was the decision to hire him made, and under what circumstances?
What needs to happen for this sort of mistake not to be repeated?
Missing from this discussion is the possibility that Sam might be reinstated as CEO, which seems like a live option at this point. If that happens I think it's likely that the decision to fire him was a mistake.
I think this discussion is too narrow and focused on just Sama and Microsoft.
The global market "wants" AGI, ASI, human obsolescence*.
The consequences of this event accelerate that:
Case 1: Microsoft bureaucracy drags Sama's team's productivity down to zero. In this case, OpenAI doesn't develop a GPT-5, and Microsoft doesn't release a better model either. This opens up the market niche for the next competitor at a productive startup to develop the model, obviously assisted by former OpenAI employees who bring all the IP with them, and all the money and b
Microsoft is the sort of corporate bureaucracy where dynamic orgs/founders/researchers go to die. My median expectation is that whatever former OpenAI group ends up there will be far less productive than they were at OpenAI.
I'm a bit sceptical of that. You gave some reasonable arguments, but all of this should be known to Sam Altman, and he still chose to accept Microsoft's offer instead of founding his own org (I'm assuming he would easily be able to raise a lot of money). So, given that "how productive are the former OpenAI folks at Microsoft?" is the crux of the argument, it seems that recent events are good news iff Sam Altman made a big mistake with that decision.
Microsoft has put out some pretty impressive papers lately. Not sure how that bodes for their overall productivity, of course.
Thanks for the good discussion.
I could equally see these events leading to AI capability development speeding or slowing. Too little is known about the operational status quo that has been interrupted for me to imagine counterfactuals at the company level.
But that very lack of information gives me hope that the overall PR impact of this may (counterintuitively) incline the Overton window toward more caution.
"The board should have given the press more dirt to justify this action!" makes sense as an initial response. When this all sinks in, what will people ...
One of my takeaways from how the negotiations went is that it seems sama is extremely concerned with securing access to lots of compute, and that the person who ultimately got their way was the person who sat on the compute.
The "sama running Microsoft" idea seems a bit magical to me. Surely the realpolitik update here should be: power lies in the hands of those with legal voting power, and those controlling the compute. Sama has neither of those things at Microsoft. If he can be fired by a board most people have never heard of, then for sure he can get fired...
People seem to think he is somehow a linchpin of building AGI. Remind me... how many of OpenAI's key papers did he coauthor?
Altman's relevant superpowers are expertise at scaling of orgs and AI-related personal fame and connections making him an AI talent Schelling point. So wherever he ends up, he can get a world class team and then competently scale its operations. The personality cult is not specious, it's self-fulfilling in practical application.
I tend to view the events of OpenAI's firing of Sam Altman much more ambiguously than others, and IMO, it probably balances out to nothing in the end, so I don't care as much as some other people here.
To respond more substantially:
From johnswentworth:
Here's the high-gloss version of my take. The main outcomes are:
...The leadership who were relatively most focused on racing to AGI and least focused on safety are moving from OpenAI to Microsoft. Lots of employees who are relatively more interested in racing to AGI than in safety will probably follow. Micro
I wouldn't count on Microsoft being ineffective, but there's good reason to think they'll push for applications for the current state of the art over further blue sky capabilities stuff. The commitment to push copilot into every Microsoft product is already happening, the copilot tab is live in dozens of places in their software and in most it works as expected. It's already good enough to replace 80%+ of the armies of temps and offshore warm bodies that push spreadsheets and forms around today without any further big capabilities gains, and th...
It seems intuitively bad:
One fear I have is that the open source community will come out ahead, and push for greater weight sharing of very powerful models.
Edit: To make this more specific, I mean that the open source community will become more attractive, because they will say: you cannot rely on individual companies whose models may or may not be available; you must build on top of open source. Related tweet:
https://twitter.com/ylecun/status/1726578588449669218
Whether their plan works or not, dunno.
I read the whole thing, glad I did. It really makes me think that many of AI safety's best minds are doing technical work like technical alignment 8 hours a day, when it would be better for them to do 2 hours a day to keep their skills honed, and spend 6 hours a day acting as generalists to think through the most important problems of the moment.
...They should have shared their reasons/excuses for the firing. (For some reason, in politics/corporate politics, people try to be secretive all the time and this seems-to-me to be very stupid in like 80+% of cases,