Imagine that I write a computer program that starts by choosing a random integer W between 0 and 2. It then generates 10^(3W) random simple math problems, numbering each one and placing it in list P. It then chooses a random math problem from P and presents it to me, without telling me that problem's number.
In this case, being presented with a single math problem tells me nothing about the state of W - I expect to be presented with a problem no matter what W is. However, if I subsequently find out that I was shown P(50), that rules out W=0 and makes W=1 1,000 times more likely than W=2.
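That update can be checked with a short Bayes computation. This is just a sketch of the model above: a uniform prior over W, and the shown problem drawn uniformly from the 10^(3W) problems in P.

```python
from fractions import Fraction

# Prior: W is uniform over {0, 1, 2}; world W's list P holds 10^(3W) problems.
prior = {w: Fraction(1, 3) for w in (0, 1, 2)}

def posterior_given_shown(k):
    """Posterior over W after learning the shown problem was P(k).

    The shown problem is drawn uniformly from the 10^(3W) problems,
    so P(shown = P(k) | W) is 1/10^(3W) if k is in range, else 0.
    """
    likelihood = {w: Fraction(1, 10**(3*w)) if k <= 10**(3*w) else Fraction(0)
                  for w in prior}
    evidence = sum(prior[w] * likelihood[w] for w in prior)
    return {w: prior[w] * likelihood[w] / evidence for w in prior}

post = posterior_given_shown(50)
# W=0 is ruled out, and W=1 comes out 1,000 times more likely than W=2.
```

The 1,000:1 ratio falls straight out of the likelihoods: 1/10^3 versus 1/10^6.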
Given that W represents which world we're in, that each math problem in P represents a unique person, and that being presented with a math problem represents experiencing being that person or knowing that that person exists, the self-indication assumption says that my model is flawed.
According to the self-indication assumption, my program needs to do an extra step to be a proper representation. After it generates a list of math problems, it needs to then choose a second random number, X, and present me with a math problem only if there's a math problem numbered X. In this case, being presented with a math problem or not does tell me something about W - I have a much higher chance of getting a math problem if W=2 and a much lower chance if W=0 - and finding out that the one math problem I was presented with was P(50) tells me much more about X than it does about W.
I don't see why this is a proper representation, or why my first model is flawed, though I suspect it relates to thinking about the issue in terms of specific people rather than any person in the relevant set, and I tend to get lost in the math of the usual discussions. Help?
To me it looks abandoned, not in progress. And it doesn't give any definite answer. And it's not clear to me whether it can be patched to give the correct answer and still be called "SSA" (i.e. still support some version of the Doomsday argument). For example, your proposed patch (using indistinguishable observers as the reference class) gives the same results as SIA and doesn't support the DA.
Anyway. We have a better way to think about anthropic problems now: UDT! It gives the right answer in my problem, and makes the DA go away, and solves a whole host of other issues. So I don't understand why anyone should think about SSA or Bostrom's approach anymore. If you think they're still useful, please explain.
When it comes to deciding how to act, I agree that the UDT approach to anthropic puzzles is the best I know. Thinking about anthropics in the traditional way, whether via SSA, SIA, or any of the other approaches, only makes sense if you want to isolate a canonical epistemic probability factor in the expected-utility calculation.