Having done a lot of research on the Sleeping Beauty Problem, which was the topic of my bachelor's thesis (philosophy), I have come to the conclusion that anthropic reasoning fails in the Sleeping Beauty Problem. I will briefly explain my argument below:

The principle that Elga uses in his first paper to validate his argument for 1/3 is an anthropic principle he calls the Principle of Indifference:

"Equal probabilities should be assigned to any collection of indistinguishable, mutually exclusive and exhaustive events."

The Principle of Indifference is in fact a more restricted version of the Self-Indication Assumption:

"All other things equal, an observer should reason as if they are randomly selected from the set of all possible observers."

Both principles must be accepted a priori, as they cannot be justified by empirical considerations. They are therefore vulnerable to counterarguments...

The counterargument:

Suppose that the original experiment is modified a little:

If the outcome of the coin flip is Heads, the researchers wake Beauty at exactly 8:00 on Monday. If the outcome of the first coin flip is Tails, the researchers flip a second coin: if it lands Heads they wake Beauty at 7:00 on both days, if Tails at 9:00. That means that when Beauty wakes up, she can be in one of five situations:

Heads and Monday 8:00

Tails and Monday 7:00

Tails and Monday 9:00

Tails and Tuesday 7:00

Tails and Tuesday 9:00

Again, these situations are mutually exclusive, indistinguishable and exhaustive. Hence thirders are forced to conclude that P(Heads) = 1/5.
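For readers who want to check the frequencies themselves, here is a quick Monte Carlo sketch of the modified experiment (the function and variable names are mine, not from any standard treatment). It shows that there are indeed five distinct centered situations, while the long-run fraction of awakenings that occur in the Heads-world remains 1/3, not 1/5, which is exactly the tension between counting indistinguishable situations and counting awakenings:

```python
import random

def run_trial(rng):
    """One run of the modified experiment: returns the list of awakenings."""
    if rng.random() < 0.5:                    # first coin lands Heads
        return [("Heads", "Mon", 8)]
    time = 7 if rng.random() < 0.5 else 9     # second coin picks the time
    return [("Tails", "Mon", time), ("Tails", "Tue", time)]

rng = random.Random(0)
awakenings = [a for _ in range(100_000) for a in run_trial(rng)]
situations = set(awakenings)                  # distinct centered situations
heads_frac = sum(a[0] == "Heads" for a in awakenings) / len(awakenings)
print(len(situations))                        # → 5
print(f"{heads_frac:.2f}")                    # ≈ 0.33, not 0.20
```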

Thirders might object that the total probability mass assigned to the Tails-world would still have to equal 2/3, as Beauty is awakened twice as often in the Tails-world as in the Heads-world. They are then forced to explain why temporal uncertainty about the day of an awakening (Monday or Tuesday) differs from temporal uncertainty about its time (7:00 or 9:00). Both are temporal uncertainties within the same possible world, so what could possibly set them apart?

An explanation could be that Beauty is only asked for her credence in Heads during an awakening event, regardless of the time, and that such an event occurs twice in the Tails-world. That is, out of the four possible observer-moments in the Tails-world, there are only two in which she is interviewed. But then the mere fact that she is asked the same question twice is reason enough for thirders to divide their credence, and it is no longer about the number of observer-moments. So if she were asked the same question a million times, her credence in Heads would drop to 1/1000001!

We can magnify the absurdity of this reasoning by imagining a modified version of the Sleeping Beauty Problem in which a coin is tossed that always lands Tails. Again, she is awakened one million times and given an amnesia-inducing potion after each awakening. Thirder logic would put Beauty's credence in Tails at 1/1000000, as there are one million observer-moments in which she is asked for her credence within the only possible world: the Tails-world. To recapitulate: Beauty is certain that she lives in a world where the coin lands Tails, but because she knows she will answer the same question a million times, her answer is 1/1000000. This would be tantamount to saying that Mt. Everest is only 1 m high because you know you will be asked its height 8,848 times! It is very hard to see how amnesia could have such an effect on rationality.
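The arithmetic behind this reductio can be made explicit in a few lines. This is only a sketch of the rule as I read it (spread credence equally over the indistinguishable observer-moments and report the per-moment share), not anyone's published formula:

```python
from fractions import Fraction

def indifference_over_moments(n_moments):
    """Equal credence for each of n indistinguishable observer-moments."""
    return Fraction(1, n_moments)

# Original problem: 3 interview moments, 1 of them in the Heads-world.
print(indifference_over_moments(3))          # → 1/3
# Asked a million times after Tails: 1 Heads moment + 10**6 Tails moments.
print(indifference_over_moments(1_000_001))  # → 1/1000001
# Always-Tails coin: a million moments, all in the one possible world,
# yet each moment (and, on this rule, Beauty's answer) gets 1/1000000.
print(indifference_over_moments(1_000_000))  # → 1/1000000
```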

Conclusion:

The thirder argument is false. The fact that there are multiple possible observer-moments within a possible world does not justify dividing your credence equally among these observer-moments, as this leads to absurd consequences. The anthropic reasoning embodied in the Principle of Indifference and the Self-Indication Assumption cannot be applied to the Sleeping Beauty Problem, and I seriously doubt whether it can be applied to other cases...

The problem with the Sleeping Beauty Problem is that probability can be thought of as a rate: #successes per #trials. But this problem makes #trials a function of #successes, introducing what could be called a feedback loop into the rate calculation and fracturing our concepts of what the terms mean. All of the analyses I've seen struggle to put these fractured meanings back together without fully acknowledging that they are broken. MrMind comes closer to acknowledging it than most when he says, "'A fair coin will be tossed,' in this context, will mean different things for different people."

But this fractured terminology can be overcome quite simply. Instead of one volunteer, use four.

Each will go through a similar experience where they will be woken at least once and maybe twice, on Monday and/or Tuesday, depending on the result of the same fair coin flip.

All four will be wakened both days with the following exceptions: SB1 will be left asleep on Monday if Heads is flipped. SB2 will be left asleep on Monday if Tails is flipped. SB3 will be left asleep on Tuesday if Heads is flipped. And SB4 will be left asleep on Tuesday if Tails is flipped. Note that SB3's schedule corresponds to the original version of the problem.

This way, three of the volunteers will be wakened on Monday. Two of those will be wakened again on Tuesday, while the third will be left asleep and be replaced by the one who slept through Monday. And each has the same chance of being wakened just once.

Put the three who are awake in a room together, and allow them to discuss anything EXCEPT the coin result and the day that they would sleep through. Ask each for her confidence in the assertion that she will be wakened just once during the experiment.

No matter what day it is, or how the coin landed, the assertion will be true for one of the three awake volunteers, and false for the other two. So their confidences should sum to 1. No matter what combination of day and result each was assigned to sleep through, each has the same information upon which to base her confidence. So their confidences should be the same.
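Since the four schedules are easy to mix up, here is a short enumeration of all four coin/day combinations as a sanity check (the dictionary encoding of the schedules is mine). It verifies both claims: exactly three volunteers are awake on any given day, and the assertion "I will be wakened just once" is true for exactly one of them:

```python
from itertools import product

# The (coin, day) combination each volunteer sleeps through.
sleeps = {"SB1": ("Heads", "Mon"), "SB2": ("Tails", "Mon"),
          "SB3": ("Heads", "Tue"), "SB4": ("Tails", "Tue")}

def woken_once(sb, coin):
    """A volunteer is wakened just once iff the coin matches her sleep-through rule."""
    return sleeps[sb][0] == coin

for coin, day in product(("Heads", "Tails"), ("Mon", "Tue")):
    awake = [sb for sb, skipped in sleeps.items() if skipped != (coin, day)]
    once = [sb for sb in awake if woken_once(sb, coin)]
    assert len(awake) == 3 and len(once) == 1
    print(coin, day, "awake:", awake, "wakened once:", once[0])
```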

The only possible solution is that the confidences should all be 1/3. If, instead, SB3 is just told about the other three volunteers, but never meets them, she can still reason the same way and get the answer 1/3. And since "I, SB3, will be wakened only once" is equivalent to "the fair coin landed Heads," our original volunteer can give the same answer.