Eliezer Yudkowsky writes on Twitter:
Nothing else Elon Musk has done can possibly make up for how hard the "OpenAI" launch trashed humanity's chances of survival; previously there was a nascent spirit of cooperation, which Elon completely blew up to try to make it all be about who, which monkey, got the poison banana, and by spreading and advocating the frame that everybody needed their own "demon" (Musk's old term) in their house, and anybody who talked about reducing proliferation of demons must be a bad anti-openness person who wanted to keep all the demons for themselves.
Nobody involved with OpenAI's launch can reasonably have been said to have done anything else of relative importance in their lives. The net impact of their lives is their contribution to the huge negative impact of OpenAI's launch, plus a rounding error.
Previously all the AGI people were at the same conference talking about how humanity was going to handle this together. Elon Musk didn't like Demis Hassabis, so he blew that up. That's the impact of his life. The end.
I've found myself repeatedly uncertain about what to make of OpenAI and their impact. The most recent LessWrong discussion that I'm aware of took place on the post Will OpenAI's work unintentionally increase existential risk, but the arguments for negative impact made there are different from the thing Eliezer named.
I'm also not entirely sure whether publicly debating sensitive questions like whether a person or organization accidentally increased existential risk is a good idea in the first place. Even so, spontaneously bringing the issue up in a Twitter thread is unlikely to be the optimal way to do it. At the very least, it should be beneficial to discuss the meta question, i.e., how we should or shouldn't talk about this. With that in mind, here are three things I would like to understand better:
- Concretely speaking, should we be hesitant to talk about this? If so, what kind of discussions are okay?
And, conditional on such discussions being a good idea:
- What is the more detailed story of how the "nascent spirit of cooperation" has degraded or changed since the inception of OpenAI?
- What interventions are possible here, if any? (Is it really that difficult to organize some kind of outreach to Elon to try to reverse some of the effects? Naively speaking, my impression has been that our community is sufficiently well connected to do this, and that Elon is amenable to arguments.)
I'm less interested in estimating the total impact of any specific person.
I don't know what you mean by "skipped." Here's some more concreteness, though:
- Thanks to OpenAI, there is more of an "AI research should be made available to everyone" ethos, more of a "Boo anyone who does AI research and doesn't tell the world what they did or how they did it, or even decides not to share the weights!" attitude. Insofar as this ethos persists during the crucial period, whichever labs are building AGI will be under more internal and external pressure to publish and share. This makes it harder for them to go slow and be cautious when the stakes are high.
- Thanks to OpenAI, there were two world-leading AGI labs, not one. Obviously it's a lot harder to coordinate two than one. This is not as bad as it sounds, because plausibly more AGI labs would have appeared before the crucial period anyway. But still.
- Thanks to OpenAI, scaling laws and GPT tech are public knowledge now. This is a pretty big deal, because it's motivating lots of other players to start building AGI or AGI-like things, and because it seems to be opening up lots of new profit opportunities for the AI industry, which will encourage further investment, shortening timelines and increasing the number of actors that need to coordinate. Again, presumably this would have happened eventually anyway. But OpenAI made it happen faster.