If the shortest program outputting your predictions looks like a specification of a physical world, and then an identification of your sensory inputs within that world, and the physical world in your model has both a meatspace copy of you and a simulated copy of you, the only difference in this Solomonoff-analogous prior between being a meat-person and a chip-person is the complexity of identifying your sensory inputs.
Firstly, this isn't a Solomonoff-analogous prior. It is the Solomonoff prior. Solomonoff Induction is Bayesian.
Secondly, my objection is that in all circumstances, if right-now-me possesses no actual information about uploaded or simulated copies of myself, then the simplest explanation for physically explicable sensory inputs (i.e., sensory inputs that don't vary between physical and simulated copies), which is to say the explanation with the lowest Kolmogorov complexity, is that I am physical and am currently the only copy of myself in existence.
This means that the 1000 simulated copies must arrive at an incorrect conclusion for rational reasons: the scenario you invented deliberately and maliciously strips them of any means of distinguishing themselves from the original, physical me. A rational agent cannot be expected to win in every adversarially constructed situation.
I
When preferences are selfless, anthropic problems are easily solved by a change of perspective. For example, if we do a Sleeping Beauty experiment for charity, all Sleeping Beauty has to do is follow the strategy that, from the charity's perspective, gets them the most money. This turns out to be an easy problem to solve, because the answer doesn't depend on Sleeping Beauty's subjective perception.
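To make the "play for charity" point concrete, here is a minimal sketch of the expected-value calculation. The betting setup is an assumption for illustration, not part of the original experiment: suppose that on heads Sleeping Beauty is woken once and on tails twice, and at each waking she may accept a bet that pays the charity $2 if the coin was tails and costs it $1 if it was heads.

```python
# Sketch of Sleeping Beauty played for charity, under assumed stakes:
# heads -> one waking, tails -> two wakings; each accepted bet pays
# the charity $2 on tails and loses $1 on heads. Fair coin.

def charity_expected_value(accept_bet: bool) -> float:
    """Expected donation, averaged over the coin flip from the
    charity's (outside) perspective."""
    if not accept_bet:
        return 0.0
    heads_branch = 0.5 * (1 * -1.0)  # one waking, lose $1
    tails_branch = 0.5 * (2 * 2.0)   # two wakings, win $2 at each
    return heads_branch + tails_branch

print(charity_expected_value(True))   # accepting the bet
print(charity_expected_value(False))  # declining the bet
```

The point is that nothing here depends on what Sleeping Beauty believes about which waking she is experiencing; the strategy is fixed by comparing the two expected values from outside.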
But selfish preferences - like being at a comfortable temperature, eating a candy bar, or going skydiving - are trickier, because they do rely on the agent's subjective experience. This trickiness really shines through when there are actions that can change the number of copies. For recent posts about these sorts of situations, see Pallas' sim game and Jan_Ryzmkowski's tropical paradise. I'm going to propose a model that makes answering these sorts of questions almost as easy as playing for charity.
To quote Jan's problem: