If you were a utilitarian, why would you risk creating an AGI that could pose an existential risk, when whole brain emulation (WBE) would enable virtual reality (or direct editing of your own source code), and hence utopia, eliminating all suffering? Wouldn't you want to try to prevent AI research and promote WBE research instead? Or is it that AGI is likely to arrive before WBE, so we should focus our efforts on making sure the AGI is Friendly? Or is uploading perhaps impossible for technological or philosophical reasons (substrate dependence)?
Is there a link to a discussion of this that I'm missing?
Good point. Do you know whether SIAI plans to try to build the first AGI itself? Isn't the only other option to try to persuade other AGI developers?
Also, I don't know much about the specifics of AGI designs. Where could I learn more? And can you back up the claim that "most AGI designs can't be fixed without essentially discarding everything"?