This post was completely rewritten on July 17th, 2015, 6:10 AM. Comments before that are not necessarily relevant.
Assume that our minds really do work the way Unification tells us: what we are experiencing is actually the sum total of every possible universe that produces our minds. Some universes have more 'measure' than others, and those are typically the stable ones; we do not experience chaos. I think this makes a great deal of sense: if our minds really are patterns of information, I do not see why the physical world should have a monopoly on them.
Now to prove that we live in a Big World. The logic is simple: why would something finite exist? If we're going to reason that some fundamental law causes everything to exist, I don't see why that law would restrict itself to this universe and nothing else. Why would it stop? It is, arguably, simply the nature of things for an infinite multiverse to exist.
I'm pretty terrible at math, so please try to forgive me if this sounds wrong. Take the 'density' of physical universes where you exist - the measure, if you will - and call it j. Then take the measure of universes where you are simulated and call it p. So, the question becomes: is j greater than p? You might be thinking yes, but remember that it doesn't have to be only one simulation per universe. According to our Big World model there is a universe out there in which all processing power (or a significant portion of it) has been turned into simulations of you.
So we take the number of minds being simulated per universe and call that x. Then the real question becomes whether j > px. What sort of universe is common enough, and contains enough minds, to overcome j? If you say that approximately 10^60 simulated human minds could fit in such a universe (a reasonable guess for one like ours), but that such universes are five trillion times rarer than the universe we live in, then it's clear that our own 'physical' measure is hopelessly lower than our simulated measure.
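To make the arithmetic concrete, here is a toy calculation using the post's own illustrative numbers (10^60 minds per simulating universe, such universes five trillion times rarer). The values are assumptions for illustration, not real cosmological estimates:

```python
# Toy measure comparison, using the post's illustrative numbers.
j = 1.0                    # measure of physical universes containing you (normalized to 1)
p = j / 5e12               # simulating universes assumed five trillion times rarer
x = 1e60                   # simulated minds per such universe (the post's guess)

simulated_measure = p * x  # total simulated measure across those universes
ratio = simulated_measure / j
print(ratio)               # ratio of simulated measure to physical measure
```

Even with simulating universes five trillion times rarer, the ratio comes out to 10^60 / (5 * 10^12) = 2 * 10^47, so the sheer number of simulations per universe swamps the rarity of those universes.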
Should we worry about this? It seems highly probable that most universes in which I am being simulated are ones where I, or at least humans, once existed, since the odds of randomly stumbling upon me in Mind Space seem small enough to ignore. Presumably the simulators are either AIs gone wrong or someone trying to grab some of my measure, for whatever reason.
As a way of protecting measure, pretty much all of our post-singularity universes would divide up the matter of the universe among each living person, create as many simulations as possible of them from birth, and allow them to go through the Singularity. I expect that my ultimate form is a single me, not knowing whether he is simulated or not, with billions of perfect simulations of himself across our universe, all reasoning the same way (he would be told this by the AI, since there isn't any more reason for secrecy). This, I think, would guard my measure against nefarious or bizarre universes in which I am simulated. The AI cannot just simulate the last few moments of my life, because those other universes might try to grab younger versions of me. So if we take j to be safe measure rather than physical measure, and p to be unsafe or alien measure, the comparison becomes jx > px, which I think is quite reasonable.
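The step above can be sketched the same way: if the safe (friendly, protective) universes of measure j each run x simulations of you, just as the alien universes of measure p do, then x appears on both sides and cancels, so safety dominance reduces to j > p. A minimal sketch, reusing the earlier illustrative numbers:

```python
# Illustrative only: if safe universes also run x simulations each,
# the comparison jx > px reduces to j > p, since x cancels.
j = 1.0        # measure of safe (protective) universes
p = j / 5e12   # measure of unsafe/alien simulating universes, as before
x = 1e60       # simulations of you run per universe, on both sides

safe_measure = j * x
unsafe_measure = p * x
print(safe_measure > unsafe_measure)  # jx > px holds exactly when j > p
```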
I do not think of this as some kind of solipsist nightmare; the whole point is to simulate the 'real' you, the one who really existed, and part of your measure is, after all, always interacting with a real universe. I would suggest that by any philosophical standard the simulations can be ignored, with the value of your life being the same as ever.
Ok, I agree with you that an FAI would invest in preventing the BBs problem by increasing the measure of existence of all humans (if it finds this useful and does not find a simpler method) - but in any case such an AI must dominate the measure landscape, as it exists somewhere.
In short, we are inside (one of) the AIs that try to dominate the total number of observers. And most likely we are inside the most effective of them (or a subset of them, as there are many). The most reasonable explanation for such a desire to dominate the total number of observers is Friendliness (as we understand it now).
So, do we have any problems here? Yes - we don't know what the measure of existence is. We also can't predict the landscape of all possible goals for AIs, so we can only hope that the AI is Friendly, or that its Friendliness is a really good one.