In the discussion about AI-based vs. upload-based singularities, and the expected utility of pushing for WBE (whole-brain emulation) first, has it been taken into account that an unfriendly AI is unlikely to do anything worse than wiping out humanity, while the same isn't necessarily true of an upload-based singularity? I haven't yet been able to find any discussion of this point (unless you think that Robin's Hardscrapple Frontier scenario would be significantly worse than nonexistence, which it doesn't feel like to me).
[ETA: To be clear, I'm not trying to argue anything at this point; I'm honestly asking for more information to help me figure out how to think about this.]
"Yes" in the sense that people are aware of the argument, which goes back at least as far as Vernor Vinge, 1993, but "no" in the sense that there are also arguments that it may not be highly unlikely that a failed att...
Previously: round 1, round 2, round 3
From the original thread:
Ask away!