Agents maximizing different types of utility functions will behave as if they assign different 'probabilities'; this explains the conflicting answers on problems like this one and a few others.
But there is a real, objective probability that can be demonstrated, and it has nothing to do with SB's subjective, anthropic probability. The key fact is simply that the number of wakeups exceeds the number of experiments. If we count per experiment (i.e., how many experiments SB wins or loses outright), the halfers are right. If we count per wakeup, the thirders are right.
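Here's a minimal sketch of that counting argument in plain Python (the function names and structure are my own, not taken from any particular paper): heads means one wakeup, tails means two, and the only question is which denominator you divide by.

```python
import random

def run_experiments(n_trials, seed=0):
    """Simulate Sleeping Beauty experiments.

    Each trial: flip a fair coin. Heads -> one wakeup (Monday only);
    tails -> two wakeups (Monday and Tuesday).
    Returns (heads frequency per experiment, heads frequency per wakeup).
    """
    rng = random.Random(seed)
    heads_experiments = 0
    heads_wakeups = 0
    total_wakeups = 0
    for _ in range(n_trials):
        heads = rng.random() < 0.5
        wakeups = 1 if heads else 2
        total_wakeups += wakeups
        if heads:
            heads_experiments += 1
            heads_wakeups += wakeups  # heads contributes its single wakeup
    return heads_experiments / n_trials, heads_wakeups / total_wakeups

per_experiment, per_wakeup = run_experiments(1_000_000)
print(f"P(heads) per experiment: {per_experiment:.3f}")  # ~0.500 (the halfer count)
print(f"P(heads) per wakeup:     {per_wakeup:.3f}")      # ~0.333 (the thirder count)
```

Same coin, same trials; the two answers come purely from dividing by experiments versus dividing by wakeups.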
In the original formulation of the problem, we assume we are at an unknown wakeup and SB must bet. In that case, tails is more likely than heads. That's what this experiment aims to demonstrate: that the Sleeping Beauty problem is not as much of a 'de se' problem as is often assumed, and that most of the anthropic-principle machinery can be discarded.
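To see the betting version concretely, here's a small sketch (again my own construction, assuming even odds and a unit stake) of always betting tails at each wakeup:

```python
import random

def bet_tails_per_wakeup(n_trials, stake=1.0, seed=1):
    """Average profit per wakeup from always betting tails at even odds."""
    rng = random.Random(seed)
    profit = 0.0
    wakeups = 0
    for _ in range(n_trials):
        if rng.random() < 0.5:     # heads: one wakeup, the tails bet loses once
            profit -= stake
            wakeups += 1
        else:                      # tails: two wakeups, the tails bet wins twice
            profit += 2 * stake
            wakeups += 2
    return profit / wakeups

print(bet_tails_per_wakeup(1_000_000))  # ~ +1/3 per wakeup: tails is the better bet
```

The positive expected profit per wakeup is exactly what "tails is more likely than heads at an unknown wakeup" cashes out to.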
There aren't really conflicting answers. If you look at more recent papers, you'll find the consensus has solidified to the point where the debate is largely between 'thirderism is provably and objectively the correct position' and 'well, there are still philosophical issues with that', with new papers coming out to address those issues, and so on.
At least, that's the overview I've gotten of the situation in the brief process of making these simulations, so please do school me if I'm totally wrong!