(Unless you mind being simulated, in which case at least you'll never know.)
If I paid you to extend the lives of cute puppies, and instead you bought video games with that money but still sent me very convincing pictures of the cute puppies I had "saved", then you would still have screwed me over. I wasn't paying for the experience of feeling that I had saved cute puppies -- I was paying for an increase in the probability of a world-state in which the cute puppies actually lived longer.
Tricking me into thinking that the utility of a world state that I inhabit is higher than it actually is isn't Friendly at all.
I, on the other hand, suspect I don't mind being simulated and living in a virtual environment. So can I get my MFAI before attempts to build true FAI kill the rest of you?
I offer this particular scenario because it seems conceivable that, with no possible competition between people, interpersonal utility comparison could be avoided entirely, which might make Mostly Friendly AI (MFAI) easier to build. I don't think this is likely or even worthy of serious consideration, but it might make some of the discussion questions easier to swallow.
1. Value is fragile. But is Eliezer right in thinking that if we get just one piece wrong the whole endeavor is worthless? (Edit: Thanks to Lukeprog for pointing out that this question completely misrepresents EY's position. Error deliberately preserved for educational purposes.)
2. Is the above scenario better or worse than the destruction of all earth-originating intelligence? (This is the same as question 1.)
3. Are there other values (besides affecting-the-real-world) that you would be willing to trade off?
4. Are there other values that, if we traded them off, might make MFAI much easier?
5. If the answers to 3 and 4 overlap, how do we decide which direction to pursue?