Eliezer Yudkowsky writes on Twitter:
> Nothing else Elon Musk has done can possibly make up for how hard the "OpenAI" launch trashed humanity's chances of survival; previously there was a nascent spirit of cooperation, which Elon completely blew up to try to make it all be about who, which monkey, got the poison banana, and by spreading and advocating the frame that everybody needed their own "demon" (Musk's old term) in their house, and anybody who talked about reducing proliferation of demons must be a bad anti-openness person who wanted to keep all the demons for themselves.
>
> Nobody involved with OpenAI's launch can reasonably have been said to have done anything else of relative importance in their lives. The net impact of their lives is their contribution to the huge negative impact of OpenAI's launch, plus a rounding error.
>
> Previously all the AGI people were at the same conference talking about how humanity was going to handle this together. Elon Musk didn't like Demis Hassabis, so he blew that up. That's the impact of his life. The end.
I've found myself repeatedly uncertain about what to make of OpenAI and its impact. The most recent LessWrong discussion I'm aware of took place on the post "Will OpenAI's work unintentionally increase existential risk?", but the arguments for negative impact made there are different from the one Eliezer names here.
I'm also not entirely sure whether publicly debating sensitive questions, like whether a person or organization accidentally increased existential risk, is a good idea in the first place. That said, spontaneously bringing the issue up in a Twitter thread is unlikely to be the optimal way to do it. At the very least, it should be beneficial to discuss the meta question, i.e., how we should or shouldn't talk about this. With that in mind, here are three things I would like to understand better:
- Concretely, should we be hesitant to talk about this at all? If so, what kinds of discussions are okay?
And, conditional on discussing such questions being a good idea:
- What is the more detailed story of how the "nascent spirit of cooperation" has degraded or changed since the inception of OpenAI?
- What interventions are possible here, if any? (Is it really that difficult to organize some kind of outreach to Elon to try to reverse some of the effects? Naively speaking, my impression has been that our community is sufficiently well connected to do this, and that Elon is amenable to arguments.)
I'm less interested in estimating the total impact of any specific person.
Yes, when we are getting really close to AGI, it will be good for the leading contenders to share information with each other. Even then it won't be a good idea for them to publish publicly, because then there'll be far more contenders! And right now, when we are not close to AGI, public publication accelerates research in general and thus shortens timelines, while also bringing more actors into the race.