If you were a utilitarian, why would you want to risk creating an AGI that could pose an existential risk, when you could eliminate all suffering through WBE (whole brain emulation), and with it virtual reality (or direct editing of your own source code), and hence utopia? Wouldn't you want to try to prevent AI research and promote WBE research instead? Or is it that AGI is more likely to come before WBE, so we should focus our efforts on making sure the AGI is friendly? Or maybe uploading isn't possible for technological or philosophical reasons (substrate dependence)?
Is there a link to a discussion on this that I'm missing out on?
Your post may not go far enough.
I think that if you were a utilitarian of this sort, you'd want to take uploaded minds and reprogram them so they were super fulfilled all the time by their own standards, even if they were just sitting in a box for eternity. According to that view, making a FAI would be a HUGE missed opportunity, since it wouldn't do that.
How did it not go far enough? What would you like me to add?
They could be super fulfilled doing other things as well. Some people (I think EY is included in this group) wouldn't want to just sit in a box for eternity. However, they could still be super fulfilled by digitally altering their hedonic set-point.
There were too many pronouns for me to understand what you were talking about. Which view? A...