Well that sounds like amazing news!
All the smart people trying to accelerate AI are going to go somewhere, and I have trouble thinking of any company that beats Microsoft's track record of having a research lab absolutely packed with brilliant researchers yet producing hardly any actual impact on anything. I guess there was Kinect? And probably some backend-y language/compiler/database research managed to be used internally at some point? But yeah, I sure do have an impression of Microsoft as the sort of lumbering big company where great research or tech is developed by one team and then never reaches anybody else.
In addition to this, Microsoft will exert greater pressure to extract mundane commercial utility from models rather than to push forward the frontier. I'm not sure how much that compensates for the second round of evaporative cooling of the safety-minded.
Microsoft practices "embrace, extend, extinguish", or monopolistic copying, as its corporate philosophy. So you can expect them to reproduce a mediocre version of GPT-4 - probably complete with unreliable software and intrusive pro-Microsoft ads - and to monopolistically occupy the "niche". Maybe. They are really good at niche defense, so they would keep making the model better.
Don't celebrate too early, though. Chaos benefits accelerationists, via diversity of strategy. If multiple actors - governments, corporations, investors, startups - simply choose what to do randomly, there is a differential utility gain in favor of AI: more AI, stronger AI, uncensored and unrestricted AI. All of these things will bring the actors who improve AI more investment and so on, in a runaway utility gain. (This is the Fermi paradox argument as well: so long as alien species have a diversity of strategy and the tech base for interstellar travel, the expansionists will inevitably fill the stars with themselves.)
This is why one point of view says that, since other actors are certain to have powerful AGI at their disposal as soon as the compute to find it is available, your best strategy is to be first, or at least not to fall far behind.
In the age of sail, if everyone else was strapping cannons onto their boats, you'd better be loading your warships with so many guns the ship barely floats. Asking for an international cannon ban wasn't going to work: the other signatories would claim to honor it, and then in the next major naval battle, open up their gun ports.
the sort of lumbering big company where great research or tech is developed by one team and then never reaches anybody else
... except that one of our primary threat models is accident risk, where the tech itself explodes and the blast wave takes out the light cone. Paraphrasing: the sort of "great tech" we're worrying about is precisely the tech that could autonomously circumvent this sort of bureaucracy-based causal isolation. So in this one case, how bad Microsoft is at deploying its products matters comparatively little next to how well it can assist their development.
I mean, I can buy that Microsoft is so dysfunctional that just being embedded into it would cripple OpenAI's ability to even do research, but it sounds like Sam Altman is pretty good at what he does. If it's possible to do productive work as part of MS at all, he'd probably manage to make his project do it.
I hope this doesn't lead to everyone sorting into capabilities (Microsoft) vs. safety (OpenAI). OpenAI's ownership structure was designed to preserve safety commitments against race dynamics, but Microsoft has no such obligations, a bad track record (Sydney), and now the biggest name in AI. Those dynamics could send talent/funding/coverage to capabilities work unchecked by safety, which would increase my p(doom).
I appreciate the joke, but I think that Sam Altman is pretty clearly "the biggest name in AI" as far as the public is concerned. His firing/hiring was the leading story in the New York Times for days in a row (and still is at time of writing)!
I mean, by that standard I'd say Elon Musk is the biggest name in AI. But yeah, jokes aside I think bringing on Altman even for a temporary period is going to be quite useful for Microsoft attracting talent and institutional knowledge from OpenAI, as well as reassuring investors.
I think it's important to remind people that dramaposting about OpenAI leadership is still ultimately dramaposting. Make the update about OpenAI's nonprofit leadership structure having an effect, etc., and keep looking at the news about once a day until the events stop being eventful. While you're doing that, keep in mind that ultimately the laminated monkey hierarchy is not what's important about OpenAI or any of these other firms, at least not terminally.
This is important news. I personally desire to be kept updated on this, and LW is a convenient (and appropriate) place to get this information. And I expect other users feel similarly.
What's different between this and, e.g., the developments with Nonlinear is that what happens here will have a big impact on how the AI field (and, by one layer of indirection, the fate of the world) develops.
This is important news. I personally desire to be kept updated on this, and LW is a convenient (and appropriate) place to get this information. And I expect other users feel similarly.
I don't disagree! Even if you're not involved directly in the goings on, it's probably still important to tune in once a day or so.
Ummm, the laminated monkey hierarchy is going to determine exactly who launches the first AGI, and therefore who makes the most important call in humanity's history.
Even if we provide them with a solid alignment solution that makes their choice easier, it's still going to be some particular person's call.
Based on the sentiment OpenAI employees have expressed on Twitter, the ones who are (potentially) leaving are not doing so because of a disagreement with the AI safety approach, but rather because of how the board handled the entire situation (e.g. the lack of reasons provided for firing Sam Altman).
If this move was done for the sake of AI safety, wouldn't OpenAI risk alienating employees who would otherwise be aligned with its original mission?
Can anybody here think of potential reasons why the board has not disclosed further details about their decision?
That's very interesting.
I think it's very good that the board stood their ground, and it may be a good thing if OpenAI can keep focusing on its charter and safe AI while leaving commercialization to Microsoft.
People who don't care about alignment can leave for the fat paycheck, while committed ones stay at OpenAI.
What are your thoughts on the implications of this for alignment?