An unfriendly AI would probably just kill us. An unfriendly em? A human wrote The 120 Days of Sodom.
Well. That's not an existential risk, but it would certainly be bad to have a sadistic upload in charge. Still, I think that if we had enough knowledge of neuroscience to create WBE (whole brain emulation), we should also be able to eliminate the pathologies of mind that produce deranged lunatics, sadists, and psychopaths. Who would want to stay that way anyway, when the alternative is to live in a digitally created state of bliss? You could even remain connected to whatever you consider "reality," so you wouldn't feel troubled by living in a "fake" virtual one.
If you were a utilitarian, why would you risk creating an AGI with the potential to be an existential risk, when you could eliminate all suffering through WBE, and hence virtual reality (or direct alteration of your own source code), and hence utopia? Wouldn't you want to prevent AI research and promote WBE research instead? Or is AGI more likely to arrive before WBE, so that we should focus our efforts on making sure the AGI is friendly? Or is uploading simply impossible for technological or philosophical reasons (substrate dependence)?
Is there a discussion of this somewhere that I'm missing?