SIAI seems to have focused on the existential risk of "unfriendly intelligence explosion" and it's not clear to me that this existential risk is greater than the risks coming from world war and natural resource shortage.
It's not clear to me either that unfriendly AI is the greatest risk, in the sense of having the highest probability of terminating the future (though "resource shortage" as an existential risk sounds highly implausible - we are talking about extinction risks, not merely potentially serious problems; and "world war" doesn't seem especially relevant to the coming risks, since dangerous technology doesn't need a war to be deployed).
But Unfriendly AI seems to be the only unavoidable risk, something we'd need to tackle in any case if we get through the rest. On other problems we can luck out, not on this one. Without solving this problem, the efforts to solve the rest are for naught (relatively speaking).
A stable, benevolent world government/singleton could take its time solving AI, or inch up to it with biological and cultural intelligence enhancement. From our perspective we should count that as almost a maximal win in terms of existential risks.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.