The doomsday argument says I should assign only a 10% probability to being among the first 10% of humans ever born, which gives nonzero information about when humanity will end. The argument has some problems with the choice of reference class; my favorite formulation (invented by me, though I'm not sure if it's already well-known) uses the recursive reference class of "all people who are considering the doomsday argument with regard to humanity". But that's not the issue I want to discuss right now.
Imagine your prior says the universe can contain 10, 1000 or 1000000 humans, with probabilities assigned arbitrarily to these three options. Then you learn that you're the 50th human ever born. As far as I can tell, after receiving this information you're certain to be among the first 10% of humans ever born, because that's true in every possible universe where you receive such information. Also, learning your index doesn't seem to tell you much about the date of the doomsday: it doesn't change the relative probabilities of doomsday dates that are consistent with your existence. (This last sentence is true for any prior, not just the one I gave.) Is there something I'm missing?
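To make that concrete, here's a minimal sketch (my own, not part of any standard presentation) comparing plain conditioning on "a 50th human exists" with the SSA-style doomsday update that treats you as a uniform random sample; it assumes the toy prior above with the three options weighted equally.

```python
# Sketch: two ways to update the toy prior after learning "I am the 50th human".
# Assumptions: prior over total population N is uniform on {10, 1000, 1000000}.

prior = {10: 1/3, 1000: 1/3, 1000000: 1/3}  # arbitrary weights, as in the text
index = 50

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

# Rule 1: plain conditioning on "a 50th human exists" (likelihood 1 iff N >= index).
plain = normalize({N: p * (1 if N >= index else 0) for N, p in prior.items()})

# Rule 2: SSA / doomsday-style update, treating yourself as a uniform random
# sample of all humans: likelihood of being exactly the 50th is 1/N when N >= index.
ssa = normalize({N: p * (1 / N if N >= index else 0) for N, p in prior.items()})

print(plain)  # {10: 0.0, 1000: 0.5, 1000000: 0.5} -- relative odds unchanged
print(ssa)    # {10: 0.0, 1000: ~0.999, 1000000: ~0.001} -- small worlds favored
```

Under rule 1 the relative probabilities of the surviving options don't move, which is the point of the paragraph above; the doomsday-style shift only appears if you adopt rule 2.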
I thought about SSA some more and came up with a funny scenario. Imagine the world contains only one person, and his name is Bob. At a specified time Omega will flip a fair coin and, depending on the outcome, will or won't create 100 additional people, none of whom will be named Bob.
Case 1: Bob knows that he's Bob before the coinflip. In this case we can all agree that Bob can get no information about the coinflip's outcome.
Case 2: Bob takes an amnesia drug and goes to sleep; the coinflip happens and the extra people are possibly created; Bob wakes up thinking he might be one of them, then takes a memory-restoration drug. In this case SSA leads him to conclude that the additional people probably weren't created, even though he ends up with the same information as in case 1 (a worked calculation is sketched after the bonus question below).
Case 3: the coinflip happens, Bob takes the amnesia drug, then immediately takes the memory-restoration drug. SSA says this operation isn't neutral: it moves Bob from case 1 to case 2. Moreover, Bob can anticipate changing his beliefs this way, yet that anticipation doesn't affect his current beliefs. Haha.
Bonus: what if Omega is spacelike separated from Bob?
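Here's the worked calculation promised in case 2, a minimal sketch under my own assumptions (fair coin, reference class = everyone who exists after the flip): SSA makes Bob roughly 99% confident that the extra people weren't created.

```python
# Sketch: SSA update for case 2 of the Bob scenario.
# Assumptions: fair coin; heads = Omega creates 100 extra people, tails = nobody created;
# reference class = all people who exist after the flip.

prior_heads = 0.5
prior_tails = 0.5

# After the memory-restoration drug, Bob knows he is the one person named Bob.
# SSA treats him as a uniform random sample from his reference class, so:
p_bob_given_heads = 1 / 101  # 101 people exist on heads, only one of them is Bob
p_bob_given_tails = 1 / 1    # on tails, Bob is the only person

posterior_tails = (prior_tails * p_bob_given_tails) / (
    prior_tails * p_bob_given_tails + prior_heads * p_bob_given_heads
)
print(posterior_tails)  # ~0.990: SSA says the extra people probably weren't created
```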
The only way to rescue SSA is to bite the bullet in case 1 and say that Bob's prior beliefs about the coinflip's outcome are not 50/50; they are "shifted" by the fact that the coinflip can create additional people. So SSA allows Bob to predict with high confidence the outcome of a fair coinflip, which sounds very weird (though it can still be right). Note that using UDT or "big-world SSA" as in my other comment will lead to more obvious and "normal" answers.
ETA: my scenario suggests a hilarious way to test SSA experimentally. If many people use coinflips to decide whether to have kids, and SSA is true, then the results will be biased toward "don't have kids", because doomsday "wants" to happen sooner and pushes the coin probabilities accordingly :-)
ETA2: or you could kill or spare babies depending on coinflips, thus biasing the coins toward "kill". The more babies you kill, the stronger the bias.
ETA3: or you could win the lottery by precommitting to create many observers if you lose. All these scenarios make SSA and the DA look pretty bad.