Imagine that I write a computer program that starts by choosing a random integer W from 0 to 2. It then generates 10^(3W) random simple math problems, numbering each one and placing it in a list P. It then chooses a random math problem from P and presents it to me, without telling me that problem's number.
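For concreteness, here's a minimal Python sketch of that program. The "simple math problems" are placeholder addition questions, an assumption on my part, since the description doesn't specify what they look like:

```python
import random

def run_trial():
    # Choose a random integer W in {0, 1, 2}.
    W = random.randint(0, 2)
    # Generate 10^(3W) problems: 1, 1,000, or 1,000,000 of them, numbered from 1.
    n = 10 ** (3 * W)
    P = {i: f"{random.randint(1, 9)} + {random.randint(1, 9)} = ?"
         for i in range(1, n + 1)}
    # Present one problem chosen uniformly at random, without revealing its number.
    k = random.randint(1, n)
    return W, k, P[k]
```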
In this case, being presented with a single math problem tells me nothing about the state of W - the program shows me one no matter what W turns out to be. On the other hand, if I subsequently find out that I was shown P(50), that rules out W=0 and makes W=1 1,000 times more likely than W=2.
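To check that update, assuming a uniform prior over W: the likelihood of being shown problem number 50 is 1/10^(3W) when the list is long enough to contain a problem numbered 50, and 0 otherwise.

```python
from fractions import Fraction

prior = {W: Fraction(1, 3) for W in (0, 1, 2)}
k = 50  # the revealed problem number

def likelihood(k, W):
    # Uniform over the 10^(3W) problems; zero if the list is too short.
    n = 10 ** (3 * W)
    return Fraction(1, n) if k <= n else Fraction(0)

unnorm = {W: prior[W] * likelihood(k, W) for W in prior}
total = sum(unnorm.values())
posterior = {W: p / total for W, p in unnorm.items()}
print(posterior)  # W=0: 0, W=1: 1000/1001, W=2: 1/1001 - a 1,000:1 ratio
```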
Given that W represents which world we're in, each math problem in P represents a unique person, and being presented with a math problem represents experiencing being that person or knowing that that person exists, the self-indication assumption says that my model is flawed.
According to the self-indication assumption, my program needs an extra step to be a proper representation. After it generates the list of math problems, it needs to choose a second random number, X - presumably from a fixed range that doesn't depend on W, say 1 to 1,000,000 - and present me with a math problem only if there's a math problem numbered X. In this case, whether or not I'm presented with a math problem does tell me something about W - I have a much higher chance of getting a math problem if W=2 and a much lower chance if W=0 - and finding out that the one math problem I was presented with was P(50) tells me much more about X than it does about W.
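The description leaves the range of X open; one reading that produces the behavior described is to draw X uniformly from 1 to 1,000,000 (the largest possible list size) and present P(X) whenever it exists. A sketch under that assumption:

```python
import random

N_MAX = 10 ** 6  # assumed range for X: the largest possible list size

def run_sia_trial():
    W = random.randint(0, 2)
    n = 10 ** (3 * W)             # size of the list P
    X = random.randint(1, N_MAX)  # second random number, independent of W
    if X <= n:
        return W, X   # a problem numbered X exists, so P(X) is presented
    return W, None    # otherwise no problem is presented at all

# Here, merely being shown any problem is evidence about W:
#   P(shown | W) = 10^(3W) / 10^6, i.e. 0.000001, 0.001, or 1.
# Learning that the shown problem was P(50) pins down X = 50 exactly,
# but only rules out W = 0, leaving W = 1 and W = 2 equally likely.
```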
I don't see why this is a proper representation, or why my first model is flawed. I suspect it has to do with thinking about the issue in terms of specific people rather than any person in the relevant set, but I tend to get lost in the math of the usual discussions. Help?
Then 'the observer' in your scenario doesn't correspond to anything that exists in the real world. After all, there is no epiphenomenal 'passenger' who chooses a person at random and watches events play out in the theatre of their mind.
Anthropic probabilities are meaningless without an epiphenomenal passenger. If p is "the probability of being person X", then what does "being person X" mean? Assuming X exists, the probability of X being X is 1. What about the probability of "me" being X? Well, who am I? If I am X, then the probability of me being X is 1. It's only if I consider myself an epiphenomenal passenger who might have ridden along with any one of many different people that it makes sense to assign a value other than 0 or 1 to the probability of 'finding myself as X'.
Calculating anthropic probabilities requires some rules about how the passenger chooses whom to 'ride on'. Yet it's impossible to state these rules without arbitrariness in cases where there's no right way to count up observers and draw their boundaries. I think the whole idea of anthropic reasoning is untenable.
I basically agree. This particular case (and perhaps others, though I haven't checked) seems like it can be formulated in non-anthropic terms, though. The observer not corresponding to anything in the real world shouldn't be a problem, I expect; a fair 6-sided die should have a 1/6 chance of showing a 1 when rolled even if nobody's around to watch it happen.