So, that "arbitrary general AI" is not an agent? It's going to be tool AI? I'm not quite sure how you envisage it being smart enough to do all that you want it to do (e.g. deal with an angsty teenager: "I want the world to BURN!") while having no agency of its own and no system of values.
lower bound for outcome of intelligence explosion
Lower bound in which sense? A point where the intelligence explosion will stop on its own? One which humans will be able to enforce? Or something else?
The idea is that if the problem of consciousness is solved (which is admittedly a tall order), "make all consciousness in the universe reflect this particular VR utopia with these particular human brains, and evolve it faithfully from there" becomes a formalizable goal, akin to paperclips, which you can hand to an unfriendly agent AI. You don't need to solve all the other philosophical problems usually required for FAI. Note that solving the problem of consciousness is a key requirement; you can't just say "simulate these uploaded brains in t...
I think I've come up with a fun thought experiment about friendly AI. It's pretty obvious in retrospect, but I haven't seen it posted before.
When thinking about what friendly AI should do, one big source of difficulty is that the inputs are supposed to be human intuitions, based on our coarse-grained and confused world models, while the AI's actions are supposed to be fine-grained actions based on the true nature of the universe, which can turn out to be very weird. That leads to a messy problem of translating preferences from one domain to another, which crops up everywhere in FAI thinking; Wei's comment and Eliezer's writeup are good places to start.
What I just realized is that you can handwave the problem away by imagining a universe whose true nature agrees with human intuitions by fiat. Think of it as a coarse-grained virtual reality where everything is built from polygons and textures instead of atoms, and all interactions between objects are explicitly coded. It would contain player avatars controlled by ordinary human brains sitting outside the simulation (so the simulation doesn't even need to support thought).
The FAI-relevant question is: How hard is it to describe a coarse-grained VR utopia that you would agree to live in?
If describing such a utopia is feasible at all, it involves thinking only about human-scale experiences, not physics or tech. So in theory we could hand it off to human philosophers or some other human-based procedure, thus dealing with "complexity of value" without much risk. Then we could launch a powerful AI aimed at rebuilding reality to match it (more concretely, making the world's conscious experiences match a specific coarse-grained VR utopia, without any extra hidden suffering). That's still a very hard task, because it requires solving decision theory and the problem of consciousness, but it seems more manageable than solving friendliness completely. The resulting world would be suboptimal in many ways, e.g. it wouldn't have much room for science or self-modification, but it might be enough to avert AI disaster (!)
I'm not proposing this as a plan for FAI, because we can probably come up with something better. But what do you think of it as a thought experiment? Is it a useful way to split up the problem, separating the complexity of human values from the complexity of non-human nature?