This is a very interesting idea!
Let me try an example to see if I've got it right. Humans think it is wrong to destroy living things but okay to destroy non-living things, yet at the level of physics the line between living and non-living is blurry. For example, a developing embryo goes from non-living to living in a gradual way; hence the abortion debate. The AI is acting on our behalf, so it also wants to preserve life, but this is difficult because "life" doesn't have a clear boundary. So it fixes the problem by ensuring that every object in the simulation is unambiguously either alive or not alive. When people in the simulation become pregnant, it looks and feels to them as though they have a growing baby inside them, but in fact there is no child's brain outside the simulation. At the moment they give birth, the AI very quickly fabricates a child-brain and assigns it control of the simulated baby. This means that someone who decides to terminate a pregnancy can be assured they are not harming a living thing (this is hypothetical, since presumably the simulation is utopic enough that abortions are never necessary). Once the child is born it definitely is living, and the people in the simulation know they have to act to protect it.
Is that the right idea?
Yes! I was hoping that the post would provoke ideas like that. It's a playground for thinking about what people want, without distractions like nanotech etc.
I think I've come up with a fun thought experiment about friendly AI. It's pretty obvious in retrospect, but I haven't seen it posted before.
When thinking about what friendly AI should do, one big source of difficulty is that the inputs are supposed to be human intuitions, based on our coarse-grained and confused world models, while the AI's actions are supposed to be fine-grained actions based on the true nature of the universe, which can turn out very weird. That leads to a messy problem of translating preferences from one domain to another, which crops up everywhere in FAI thinking; Wei's comment and Eliezer's writeup are good places to start.
What I just realized is that you can handwave the problem away by imagining a universe whose true nature agrees with human intuitions by fiat. Think of it as a coarse-grained virtual reality where everything is built from polygons and textures instead of atoms, and all interactions between objects are explicitly coded. It would contain player avatars controlled by ordinary human brains sitting outside the simulation (so the simulation doesn't even need to support thought).
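To make the contrast with physics-based simulation concrete, here is a minimal toy sketch of what "explicitly coded interactions" could mean. All names here (`Object`, `World`, the rule table, the `alive` flag) are my own illustrative inventions, not anything from the post: the point is just that objects are opaque coarse-grained units, every object is alive or not by fiat, and what happens when things interact is a finite hand-written table rather than an emergent consequence of low-level dynamics.

```python
from dataclasses import dataclass, field

@dataclass
class Object:
    name: str
    alive: bool  # by fiat, every object is unambiguously alive or not

@dataclass
class World:
    objects: dict = field(default_factory=dict)

    def add(self, obj: Object) -> None:
        self.objects[obj.name] = obj

    def interact(self, actor: str, target: str, action: str) -> str:
        # Interactions are an explicit, finite rule table keyed on the
        # coarse-grained properties of the target -- no hidden physics.
        rules = {
            ("push", False): "target moves",
            ("push", True): "target protests",
        }
        target_obj = self.objects[target]
        return rules.get((action, target_obj.alive), "nothing happens")

world = World()
world.add(Object("rock", alive=False))
world.add(Object("villager", alive=True))
print(world.interact("avatar", "rock", "push"))      # -> "target moves"
print(world.interact("avatar", "villager", "push"))  # -> "target protests"
```

In a real version the avatar's decisions would come from a human brain outside the simulation; the simulation itself only needs to look up rules like these, which is why it never has to answer blurry questions like "how alive is this embryo?"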
The FAI-relevant question is: How hard is it to describe a coarse-grained VR utopia that you would agree to live in?
If describing such a utopia is feasible at all, it involves thinking about only human-scale experiences, not physics or tech. So in theory we could hand it off to human philosophers or some other human-based procedure, thus dealing with "complexity of value" without much risk. Then we could launch a powerful AI aimed at rebuilding reality to match it (more concretely, making the world's conscious experiences match a specific coarse-grained VR utopia, without any extra hidden suffering). That's still a very hard task, because it requires solving decision theory and the problem of consciousness, but it seems more manageable than solving friendliness completely. The resulting world would be suboptimal in many ways, e.g. it wouldn't have much room for science or self-modification, but it might be enough to avert AI disaster (!)
I'm not proposing this as a plan for FAI, because we can probably come up with something better. But what do you think of it as a thought experiment? Is it a useful way to split up the problem, separating the complexity of human values from the complexity of non-human nature?