Imagine that I write a computer program that starts by choosing a random integer W between 0 and 2 (inclusive). It then generates 10^(3W) random simple math problems, numbering each one and placing it in a list P. It then chooses a random math problem from P and presents it to me, without telling me that problem's number.
In this case, being presented with a single math problem tells me nothing about the state of W - the program presents me with one problem no matter what W is. On the other hand, if I subsequently find out that I was shown P(50), that rules out W=0 and makes W=1 1,000 times more likely than W=2.
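Here's a minimal sketch of that program in Python (the language and names are just my own choices for concreteness), plus the update I have in mind after learning I was shown P(50):

```python
import random

def run_program():
    """The program as described: pick W, build the list P, show one random problem from it."""
    w = random.randint(0, 2)           # W uniform on {0, 1, 2}
    n = 10 ** (3 * w)                  # |P| = 10^(3W): 1, 1,000, or 1,000,000 problems
    shown = random.randint(1, n)       # the number of the problem I'm shown (hidden from me)
    return w, shown

# Posterior over W after learning the shown problem was P(50):
# P(shown = 50 | W) = 1/10^(3W) if 50 <= 10^(3W), else 0.
prior = {w: 1 / 3 for w in range(3)}
likelihood = {w: (1 / 10 ** (3 * w) if 50 <= 10 ** (3 * w) else 0) for w in range(3)}
unnormalized = {w: prior[w] * likelihood[w] for w in range(3)}
total = sum(unnormalized.values())
posterior = {w: unnormalized[w] / total for w in range(3)}
print(posterior)   # W=0 ruled out; W=1 about 1,000 times as likely as W=2
```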
Given that W represents which world we're in, each math problem in P represents a unique person, and being presented with a math problem represents experiencing being that person (or knowing that that person exists), the self-indication assumption says that my model is flawed.
According to the self-indication assumption, my program needs an extra step to be a proper representation. After it generates the list of math problems, it then needs to choose a second random number, X, and present me with a math problem only if there is a math problem numbered X. In this case, whether or not I'm presented with a math problem does tell me something about W - I have a much higher chance of getting a math problem if W=2 and a much lower chance if W=0 - and finding out that the one math problem I was presented with was P(50) tells me much more about X than it does about W.
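Concretely, the modified program would look something like this (I'm assuming X is drawn uniformly from 1 up to 10^6, the size of the largest possible list; the point seems to hold for any uniform range that covers every possible list):

```python
import random

N_MAX = 10 ** 6   # my assumption: X ranges over 1..10^6, enough to cover the largest possible list

def run_sia_program():
    """The modified program: present a problem only if a second random index X lands inside P."""
    w = random.randint(0, 2)
    n = 10 ** (3 * w)                  # |P| = 10^(3W), as before
    x = random.randint(1, N_MAX)       # the extra step: pick X before deciding whether to show anything
    if x <= n:                         # a problem numbered X exists only if X <= |P|
        return w, x                    # I'm shown P(X)
    return w, None                     # I'm shown nothing at all

# P(shown anything | W) = 10^(3W) / N_MAX, so merely being shown a problem favors
# bigger worlds: W=2 over W=1 by 1,000:1, and W=1 over W=0 by 1,000:1.
for w in range(3):
    print(w, 10 ** (3 * w) / N_MAX)
```

And P(I'm shown P(50) | W) is the same 1/10^6 for W=1 and W=2 (and zero for W=0), so the fact that the problem was number 50 no longer discriminates between the surviving values of W - it mostly just tells me what X was.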
I don't see why this is the proper representation, or why my first model is flawed. I suspect it has to do with thinking about the issue in terms of specific people rather than any person in the relevant set, but I tend to get lost in the math of the usual discussions. Help?
After the discussion in my previous post I became quite certain that the world can't work as SSA (your first model) says it does, and that SIA is far more likely. If you're the only person in the world right now, and Omega is about to flip a fair coin and create 100 more people if it comes up heads, then SSA tells you to be 99% sure of tails, while SIA says 50/50. There's just no way SSA is right on this one.
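If it helps, here's that arithmetic spelled out (I'm taking the reference class to be everyone who ever exists, so heads means 101 observers counting you):

```python
# Omega example: you exist now; heads creates 100 more people, tails creates none.
# Assumed reference class: everyone who ever exists -> 101 observers on heads, 1 on tails.
prior = {"heads": 0.5, "tails": 0.5}
observers = {"heads": 101, "tails": 1}

# SSA: treat yourself as a random sample from the observers in your world,
# so P(being this particular observer | world) = 1 / observers(world).
ssa = {w: prior[w] / observers[w] for w in prior}
ssa_posterior = {w: ssa[w] / sum(ssa.values()) for w in ssa}
print(ssa_posterior)   # tails ~0.99, heads ~0.01

# SIA: additionally weight each world by its number of observers, which cancels the 1/N above.
sia = {w: prior[w] / observers[w] * observers[w] for w in prior}
sia_posterior = {w: sia[w] / sum(sia.values()) for w in sia}
print(sia_posterior)   # 0.5 / 0.5
```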
Bostrom discusses such paradoxes in chapter 9 of his book (Anthropic Bias), tries really hard to defend SSA, and fails. (You have to read it and settle this for yourself. It's hard to believe that Bostrom could fail; I was surprised.)
Also maybe it'll help if you translate the problem into UDT-speak, "probability as caring". Believing in SSA means you care about copies of yourself in little worlds much more than about your copies in big worlds. SIA means you care about them equally.
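To make that concrete, here's one way to write down the per-copy "caring" weights (a sketch with an arbitrary pair of equally likely worlds, one containing 1 copy of you and one containing 1,000):

```python
# Two equally likely worlds: a "little" one with 1 copy of you, a "big" one with 1,000 copies.
prior = {"little": 0.5, "big": 0.5}
copies = {"little": 1, "big": 1_000}

# SSA-style caring: each world's prior is split evenly among its copies, so a copy in the
# little world carries 1,000x the weight of a copy in the big world.
ssa_per_copy = {w: prior[w] / copies[w] for w in prior}
print(ssa_per_copy)      # {'little': 0.5, 'big': 0.0005}

# SIA-style caring: every copy gets the same weight no matter which world it lives in.
total = sum(prior[w] * copies[w] for w in prior)
sia_per_copy = {w: prior[w] / total for w in prior}
print(sia_per_copy)      # ~0.000999 for each copy in either world (equal across worlds)
```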
Now might be a good time to mention "full non-indexical conditioning" (FNC), which I think is incontestably an advance on SSA and SIA.
To be sure, FNC still faces the severe problem that observer-moments cannot be individuated, leading (for instance) to variations on Sleeping Beauty where tails causes only a 'partial split' (like an Ebborian midway through dividing) and the answer is indeterminate. But this is no less of a problem for SSA and SIA than for FNC. The UDT approach of bypassing the 'Bayesian update' stage and going straight to the question 'what should I do?' is superior.