I fail to see why the shards have to be perfectly isolated in this scenario. It seems plausible that the AI could automatically import all the changes made by my best friend in his simulation into mine, and vice versa, and more generally splice bits and pieces of other people's "real actions" into my ongoing narrative. Ultimately, everyone in my universe could be "intermittently real" in proportion to how many of their own actions contributed to my utopia, with the rest of their screen time handled by an AI stand-in that acts the way I'd like them to act. (For example, everyone on Twitter could be a real person in another simulation; my following them would start to leak their reality into mine.)
This is sounding oddly familiar, but I can't put my finger on why.
This is somewhat similar to an idea I call 'culture goggles', under which all interpersonal interactions go through a translation suite.
I offer this particular scenario because it seems conceivable that, with no possible competition between people, the AI could avoid making interpersonal utility comparisons, which could make Mostly Friendly AI (MFAI) easier. I don't think this is likely, or even worthy of serious consideration, but it might make some of the discussion questions easier to swallow.
1. Value is fragile. But is Eliezer right in thinking that if we get just one piece wrong, the whole endeavor is worthless? (Edit: Thanks to Lukeprog for pointing out that this question completely misrepresents EY's position. Error deliberately preserved for educational purposes.)
2. Is the above scenario better or worse than the destruction of all Earth-originating intelligence? (This is the same as question 1.)
3. Are there other values (besides affecting-the-real-world) that you would be willing to trade off?
4. Are there other values that, if we traded them off, might make MFAI much easier?
5. If the answers to 3 and 4 overlap, how do we decide which direction to pursue?