I've written a blog post for a lay audience, explaining some of the reasons that AI researchers who are concerned about extinction risk have for continuing to work on AI research despite their worries.
The apparent contradiction is causing a lot of confusion among people who haven't followed the relevant discourse closely. In many instances, this lack of clarity seems to be leading people to resort to borderline conspiratorial thinking (e.g., about the motives of the signatories of the recent statement), or to otherwise dismiss the worries as not entirely serious.
I hope that this piece can help make some things common knowledge that aren’t currently widely known outside of tech and science circles.
As an overview, the reasons I focus on are:
Belief that their specific research isn’t actually risky
Belief that AGI is inevitable, and more likely to go well if they personally are involved
Belief that AGI is far enough away that it makes sense to keep working on AI for now
Commitment to science for science’s sake
Belief that the benefits of AGI would outweigh even the risk of extinction
Belief that advancing AI reduces global catastrophic risk on net, by reducing other risks
Belief that AGI is worth it, even if it causes human extinction
I'll also note that the piece isn't meant to defend or criticize the decision of researchers who continue to work on AI despite believing it poses extinction risks, but simply to add clarity.
If you're interested in reading more, you can follow the link here. And of course feel free to send the link to anyone who's confused by the current situation.