If you were a utilitarian, why would you risk creating an AGI that could pose an existential risk, when you could instead eliminate all suffering through WBE (whole brain emulation), and hence virtual reality (or direct modification of your own source code), and hence utopia? Wouldn't you want to try to prevent AGI research and promote WBE research instead? Or is it that AGI is likely to arrive before WBE, so we should focus our efforts on making sure the first AGI is Friendly? Or maybe uploading isn't possible for technological or philosophical reasons (substrate dependence)?
Is there a link to a discussion on this that I'm missing out on?
This doesn't work reliably enough. A single failure is all it takes, and actually convincing someone (as opposed to merely eliciting an ostensible admission of having been convinced) is really difficult. A serious complication is that it's not possible to "make an AGI Friendly": most AGI designs can't be fixed without essentially discarding everything. So people won't be moved deeply enough to kill their mind baby; instead, they will raise defenses against the offending arguments, fail to understand the point, and come up with rationalizations claiming that whatever they are already doing happens to be Friendly (perhaps with minor modifications). Just look at Goertzel (see my comment).
Good point. Do you know whether SIAI is planning to try to build the first AGI itself? If not, isn't the only other option to try to persuade others?
Also, I don't know much about the specifics of AGI designs. Where could I learn more? And can you back up the claim that "most AGI designs can't be fixed without essentially discarding everything"?