Eliezer Yudkowsky writes on Twitter:
> Nothing else Elon Musk has done can possibly make up for how hard the "OpenAI" launch trashed humanity's chances of survival; previously there was a nascent spirit of cooperation, which Elon completely blew up to try to make it all be about who, which monkey, got the poison banana, and by spreading and advocating the frame that everybody needed their own "demon" (Musk's old term) in their house, and anybody who talked about reducing proliferation of demons must be a bad anti-openness person who wanted to keep all the demons for themselves.
>
> Nobody involved with OpenAI's launch can reasonably have been said to have done anything else of relative importance in their lives. The net impact of their lives is their contribution to the huge negative impact of OpenAI's launch, plus a rounding error.
>
> Previously all the AGI people were at the same conference talking about how humanity was going to handle this together. Elon Musk didn't like Demis Hassabis, so he blew that up. That's the impact of his life. The end.
I've found myself repeatedly uncertain about what to make of OpenAI and their impact. The most recent LessWrong discussion I'm aware of happened on the post "Will OpenAI's work unintentionally increase existential risk?", but the arguments for negative impact made there are different from the thing Eliezer named.
I'm also not entirely sure whether publicly debating sensitive questions, like whether a person or organization accidentally increased existential risk, is a good idea in the first place. However, spontaneously bringing the issue up in a Twitter thread is unlikely to be the optimal way to do it. At the very least, it should be beneficial to discuss the meta question, i.e., how we should or shouldn't talk about this. With that in mind, here are three things I would like to understand better:
- Concretely speaking, should we be hesitant to talk about this? If so, what kind of discussions are okay?
And, conditional on such discussions being a good idea:
- What is the more detailed story of how the "nascent spirit of cooperation" has degraded or changed since the inception of OpenAI?
- What interventions are possible here, if any? (Is it really that difficult to organize some kind of outreach to Elon to try and reverse some of the effects? Naively speaking, my impression has been that our community is sufficiently well connected to do this, and that Elon is amenable to arguments.)
I'm less interested in estimating the total impact of any specific person.
I am not exactly here to say that DeepMind is that much better! :) One thing I dislike about the OP is that it makes it seem like the problem is specifically with OpenAI, compared to other companies. If OpenAI had come first and then Elon had gone and founded DeepMind, that would be approximately just as bad, or even slightly worse.
I agree that maybe an arms race was inevitable, in which case founding OpenAI maybe wasn't a bad thing after all. Maybe. But maybe not.
It's true that OpenAI had some great safety researchers. Most of them have since quit (though some remain). But they probably could have got jobs at DeepMind instead, so this isn't relevant to evaluating Elon's decision.
Also, there's the whole openness ideal/norm. Terrible idea, for reasons various people (e.g. Scott Alexander) laid out at the time. (I can try to remember what the post was called if you like... it made the same point Yudkowsky makes here: if we haven't solved alignment yet and we give AI to everyone, then we are killing ourselves. If we have solved alignment, great, but that's the difficult part and we haven't done it yet. That point and a few others.)