"resource shortage" as existential risk sounds highly implausible - we are talking about extinction risks, not merely potential serious issues;
I mean "existential risk" in a broad sense.
Suppose we run out of, say, a resource needed to generate electricity too fast to find a substitute. Then we would be forced to revert to a preindustrial society. This would be a permanent obstruction to technological progress: we would have no chance of creating a transhuman paradise or populating the galaxy with happy sentient machines, and that would be an astronomical waste.
Similarly, if we ran out of any number of other things (say, one of the materials currently needed to build computers) before finding an adequate substitute.
"world war" doesn't seem like something particularly relevant for the coming risks, dangerous technology doesn't need war to be deployed.
My understanding is that a large-scale nuclear war could seriously damage infrastructure. I could imagine this preventing technological development as well.
But Unfriendly AI seems to be the only unavoidable risk, the one we'd need to tackle in any case if we get through the rest. On other problems we can luck out; not on this one. Without solving this problem, the efforts to solve the rest are for naught (relatively speaking).
On the other hand, it's equally true that if another existential risk hits us before we build Friendly AI, all of our Friendly-AI-directed efforts will be for naught.
Yes.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.