Simulating the people you interact with in each simulation to a strong enough approximation of reality means creating tons of suffering people for every one who gets an awesome life, even if a copy of each of those people is living happily in a sim of their own. I don't think I would want a bunch of copies of me to be unhappy, even knowing that one copy of me is in heaven.
I don't think you need an actual human mind to simulate a mind convincingly enough to fool stupid humans (i.e., to pass the Turing test).
I offer this particular scenario because it seems conceivable that, with no possibility of competition between people, one could avoid interpersonal utility comparisons entirely, which could make Mostly Friendly AI (MFAI) easier to build. I don't think this scenario is likely, or even worthy of serious consideration, but it might make some of the discussion questions easier to swallow.
1. Value is fragile. But is Eliezer right in thinking that if we get just one piece wrong, the whole endeavor is worthless? (Edit: Thanks to Lukeprog for pointing out that this question completely misrepresents EY's position. Error deliberately preserved for educational purposes.)
2. Is the above scenario better or worse than the destruction of all Earth-originating intelligence? (This is essentially a restatement of question 1.)
3. Are there other values (besides affecting-the-real-world) that you would be willing to trade off?
4. Are there other values that, if we traded them off, might make MFAI much easier?
5. If the answers to 3 and 4 overlap, how do we decide which direction to pursue?