It seems like most other commenters so far don't share my opinion, but I view the above scenario as basically equivalent to wireheading, and consequently see it as only very slightly better than the destruction of all earth-originating intelligence (assuming the AI doesn't do anything else interesting). "Affecting-the-real-world" is actually the one value I would not want to trade off (well, obviously, I'd still trade it off, but only at a prohibitively steep rate).
I'm much more open to trading off other things, however. For example, if we could get Failed Utopia #4-2 much more easily than the successful utopia, I'd say we should go for it. Which specific values are best to throw away in pursuit of something workable isn't really clear, though. While I don't agree that if we lose one, we lose them all, I'm also not sure that any one value can be meaningfully isolated.
Perhaps the best (meta-)value we could trade off is "optimality": if we see a way to design something stable that's clearly not the best we can do, we should nonetheless go with it if it's considerably easier than the better options. For example, if you see a way to specify a particular pretty-good future and have the AI build that without falling into some failure mode, it might be better to just use that future instead of trying to have the AI design the best possible future.
"Affecting-the-real-world" is actually the one value I would not want to trade off
How are you defining "real world"? Which traits separate something real and meaningful from something you don't value? Is it the simulation? The separation from other beings? The possibility that the AI is deceiving you? Something I'm missing entirely?
(Personally I'm not at all bothered by the simulation, moderately bothered by the separation, and unsure how I feel about the deception.)
I offer this particular scenario because it seems conceivable that, with no possible competition between people, the AI could avoid doing interpersonal utility comparisons, which could make Mostly Friendly AI (MFAI) easier. I don't think this is likely, or even worthy of serious consideration, but it might make some of the discussion questions easier to swallow.
1. Value is fragile. But is Eliezer right in thinking that if we get just one piece wrong, the whole endeavor is worthless? (Edit: Thanks to Lukeprog for pointing out that this question completely misrepresents EY's position. Error deliberately preserved for educational purposes.)
2. Is the above scenario better or worse than the destruction of all earth-originating intelligence? (This is the same as question 1.)
3. Are there other values (besides affecting-the-real-world) that you would be willing to trade off?
4. Are there other values that, if we traded them off, might make MFAI much easier?
5. If the answers to 3 and 4 overlap, how do we decide which direction to pursue?