Why does SI/LW focus so much on AI-FOOM disaster, with apparently much less concern for things like
- bio/nano-tech disaster
- Malthusian upload scenario
- highly destructive war
- bad memes/philosophies spreading among humans or posthumans and overriding our values
- upload singleton ossifying into a suboptimal form compared to the kind of superintelligence that our universe could support
Why, for example, is lukeprog's strategy sequence titled "AI Risk and Opportunity", instead of "The Singularity, Risks and Opportunities"? Doesn't it seem strange to assume that both the risks and the opportunities must be AI-related, before the analysis even begins? Given our current state of knowledge, I don't see how we can reach such conclusions with any confidence even after a thorough analysis.
SI/LW sometimes gives the impression of being a doomsday cult, and it would help if we didn't concentrate so much on one particular doomsday scenario. (Are there any doomsday cults that say "doom is probably coming, we're not sure how, but here are some likely possibilities"?)
I expect losses of technological capability to be recovered with high probability.
On what timescale?
I find the focus on x-risks as defined by Bostrom (those from which Earth-originating intelligent life will never, ever recover) way too narrow. A situation in which 99% of humanity dies and the rest reverts to hunting and gathering for a few millennia before recovering wouldn't look much brighter than that -- let alone one in which humanity goes extinct but in (say) a hundred million years the descendants of (say) elephants create a new civilization. In particular, I can't see why we would prefer the latter to (say) a civilization emerging...