A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare (and hence, anthropically, improbable). To solve the problem, they direct the AI to create billions of simulated humanities, in the hope that this will serve as a Schelling point for them and make their own universe almost certainly a simulation.
Plausible?
I deny that this is meaningful. If there are two copies of me, both "have my consciousness". I fail to see any sense in which my consciousness must move to only one copy.
I do not claim that. I claim that I exist in both branches up until one of them no longer contains my consciousness (because I'm dead in it), and from then on I exist in only one branch. (In fact, I can consider my sleeping self unconscious, in which case neither branch contained my consciousness until I woke up.)
Then many copies of my consciousness will exist, some slowly dying each day.
My model doesn't require any look-ahead at all.
Can you dissolve consciousness? What test can be performed to see which branch my consciousness has moved to, one that requires neither that I be awake nor any knowledge of the random data?
OK, now imagine that the computer shows you the number n on its screen. What will you see? You say that both copies have your consciousness; will you see a superposition of numbers? I don't see how simultaneously being in different branches makes sense from the qualia viewpoint.
Also, let's remove sleeping from the thought experiment; it's an unnecessary complication. (Incidentally, I don't think the flow of consciousness is interrupted during sleep.)
And no, I'm currently unable to dissolve the hard problem of consciousness.