How concerned is SI with existential risks vs. how concerned is SI with catastrophic risks?
Different people have different views. For myself, I care more about existential risks than catastrophic risks, but not overwhelmingly so. A global catastrophe would kill me and my loved ones just as dead. So from the standpoint of coordinating around mutually beneficial policies, or "morality as cooperation," I care a lot about catastrophic risks affecting current and immediately succeeding generations. However, when I take a "disinterested altruism" point of view, x-risk looms large: I would rather bring 100 trillion fantastic lives into being than improve the quality of life of a single malaria patient.
If SI is solely concerned with x-risks, do I assume correctly that you also think about how cat. risks can relate to x-risks?
Yes.
Or is this sort of thing more what the FHI does?
They spend more time on it, relatively speaking.
Achieving FAI as quickly as possible (to head off other x-risks and cat. risks) vs. achieving FAI as safely as possible (to head off UFAI)
Given that powerful AI technologies are achievable in the medium to long term, UFAI would seem to me to be a rather large share of the x-risk, and still a big share of the catastrophic risk, so that speedups are easily outweighed by safety gains.
However, when I take a "disinterested altruism" point of view x-risk looms large: I would rather bring 100 trillion fantastic lives into being than improve the quality of life of a single malaria patient.
What's your break-even point for "bring 100 trillion fantastic lives into being with probability p" vs. "improve the quality of life of a single malaria patient," and why?
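(A minimal way to formalize the break-even being asked about, under a simple expected-utility framing; the symbols $U_{\text{lives}}$ and $U_{\text{malaria}}$ are labels introduced here for illustration, not anything stated in the thread. The indifference probability is the one at which the two options have equal expected value.)

$$
p^{*}\,U_{\text{lives}} = U_{\text{malaria}}
\quad\Longrightarrow\quad
p^{*} = \frac{U_{\text{malaria}}}{U_{\text{lives}}}
$$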
Why does SI/LW focus so much on AI-FOOM disaster, with apparently much less concern for things like
Why, for example, is lukeprog's strategy sequence titled "AI Risk and Opportunity" instead of "The Singularity, Risks and Opportunities"? Doesn't it seem strange to assume that both the risks and opportunities must be AI-related, before the analysis even begins? Given our current state of knowledge, I don't see how we can reach such conclusions with any confidence even after a thorough analysis.
SI/LW sometimes gives the impression of being a doomsday cult, and it would help if we didn't concentrate so much on a particular doomsday scenario. (Are there any doomsday cults that say "doom is probably coming, we're not sure how but here are some likely possibilities"?)