Why does SI/LW focus so much on AI-FOOM disaster, with apparently much less concern for things like
- bio/nano-tech disaster
- Malthusian upload scenario
- highly destructive war
- bad memes/philosophies spreading among humans or posthumans and overriding our values
- upload singleton ossifying into a suboptimal form compared to the kind of superintelligence that our universe could support
Why, for example, is lukeprog's strategy sequence titled "AI Risk and Opportunity", instead of "The Singularity, Risks and Opportunities"? Doesn't it seem strange to assume that both the risks and the opportunities must be AI-related, before the analysis even begins? Given our current state of knowledge, I don't see how we could reach such a conclusion with any confidence even after a thorough analysis.
SI/LW sometimes gives the impression of being a doomsday cult, and it would help if we didn't concentrate so much on a particular doomsday scenario. (Are there any doomsday cults that say "doom is probably coming, we're not sure how but here are some likely possibilities"?)
Well, there's an implied assumption here that a superintelligence that 'does not share our values' at least shares our domain of definition for those values. I can make a fairly intelligent proof generator, far beyond human capability given enough CPU time; it won't share any values with me, not even the domain of applicability. The lack of shared values with it is so profound that it won't do anything whatsoever in the 'real world' that I'm concerned with. Even if it were meta-strategic to the point of, e.g., searching for ways to hack into a mainframe to gain extra resources and finish the task 'sooner' by wall-clock time, it seems very dubious that by mere accident it would have proper symbol grounding, wouldn't wirehead (i.e., would privilege solutions that don't involve just stopping said clock), etc. The same goes for other practical AIs, even the evil ones that would, e.g., try to take over the internet.
You're still falling into the same trap: thinking that your work is OK as long as it doesn't immediately destroy the Earth. What if someone takes your proof generator design and uses the ideas to build something that does affect the real world?