Is there really a way of simulating people with whom you interact extensively such that they wouldn't exist in much the same way that you do? In other words, are p-zombies possible, or, more to the point, are they a practical means of simulating a human in sufficient detail to fool a human-level intellect?
You don't need to simulate them perfectly, just to the level that you don't notice a difference. When the simulator has access to your mind, that might be a lot easier than you'd think.
There's also no need to create p-zombies if you can instead have a (non-zombie) AI roleplaying as the people. The AI may be perfectly conscious without the people it's roleplaying as ever existing.
I offer this particular scenario because it seems conceivable that, with no possible competition between people, it would be possible to avoid doing interpersonal utility comparisons, which could make Mostly Friendly AI (MFAI) easier. I don't think this scenario is likely or even worthy of serious consideration, but it might make some of the discussion questions easier to swallow.
1. Value is fragile. But is Eliezer right in thinking that if we get just one piece wrong, the whole endeavor is worthless? (Edit: Thanks to Lukeprog for pointing out that this question completely misrepresents EY's position. Error deliberately preserved for educational purposes.)
2. Is the above scenario better or worse than the destruction of all earth-originating intelligence? (This is the same as question 1.)
3. Are there other values (besides affecting the real world) that you would be willing to trade off?
4. Are there other values that, if we traded them off, might make MFAI much easier?
5. If the answers to 3 and 4 overlap, how do we decide which direction to pursue?