*The Elephant in the Brain* convinced me that many of the things humans say are not meant to convey information or achieve conscious goals; rather, we say things to signal status and establish social positioning. Here are three hypotheses for why the community focuses on AI that have nothing to do with the probability or impact of AI:
(Disclaimer: I personally don't worry about AI, am skeptical that AGI will happen in the next 100 years, and am skeptical that AGI will take over Earth in under 100 years, but I nonetheless recognize that these outcomes are more than 0% probable. I don't have a great mental model of why others disagree, but I believe it can be partly explained by software people being more optimistic than hardware people, since software people have experienced more amazing successes over the past couple of decades.)
If you think there's good information about bioengineered pandemics out there, what sources would you recommend?
Multiple LW surveys have rated those as more likely x-risks, and if there were a good way to spend x-risk EA dollars, I think the topic would likely get funding; currently, though, there don't seem to be good targets.
Basically, because many of those other things already have a large number of people working on them, while a significant portion of all the AI risk researchers in the world are part of this community (this was even more the case when Yudkowsky started LessWrong). Also, a lot of people in this community have interest and skills in computer science, so applying that to AI is much less of a reach than, say, learning biology to help stop pandemics.
A similar question (though just asking about climate change) was answered here.
What about pandemics, runaway climate change, etc.?
None of those other problems fights back. That makes AI scarier to me.
The other problems are worth thinking about, but AI seems most significant.
Let's say you're hiking in the mountains, and you find yourself crossing a sloped meadow. You look uphill and see a large form. And it's moving towards you!
Are you more scared if the form turns out to be a boulder or a bear? Why?
The boulder could roll over you and crush you. But if you get out of its path, it won't change course. Can't say the same for the bear.
Why does this community focus on AI over all other possible apocalypses? What about pandemics, runaway climate change, etc.?