In a previous post, I argued that self-locating probabilities are not valid concepts. Many commenters asked me to use examples with concrete numbers and bets. So here is an example that is essentially the Sleeping Beauty Problem, with a small twist to highlight the problems of self-locating probability.
Cloning with Memory (with a coin toss)
The experiment is almost the same as before. Tonight, during your sleep, a mad scientist will scan your body at the molecular level to create a highly accurate clone. The process is so advanced that the created person will retain the original's memories to a degree not discernible by human cognition. So after waking up, there is no way to tell whether you are the Original or the Clone. However, the mad scientist will perform the cloning only if a fair coin toss lands on Tails (he will scan you regardless). Now, after waking up, ask yourself: "What is the probability that I am the Original?" And also: "What is the probability that the coin landed on Heads?"
Possible Answers
Let me present my answer first so it is out of the way: the probability of Heads is 1/2, since it is a fair coin toss. And the "probability that I am the Original" is not a valid concept. "I" is an identification not based on anything but my first-person perspective. Whether "I" am the Original or the Clone is something primitive, not analyzable. Any attempt to justify this probability requires additional postulates, such as equating "I" to a random sample of some sort.
Perhaps the more popular answer is that the probability that I am the Original is 2/3, whereas the probability of Heads is 1/3. This corresponds to the Thirder camp in the Sleeping Beauty Problem. The rationale may not be the same for all Thirders, but it typically follows the Self-Indication Assumption: finding that I exist eliminates the possibility that I am the Clone and the coin landed Heads.
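For concreteness, here is the Thirder arithmetic as a minimal Python sketch. The uniform weighting over observer-instances is the Self-Indication Assumption itself, not something the experiment supplies:

```python
# Under SIA, every possible observer-instance gets equal weight.
# Heads world: just the Original.  Tails world: Original and Clone.
instances = [("Heads", "Original"), ("Tails", "Original"), ("Tails", "Clone")]
p = {inst: 1 / len(instances) for inst in instances}

p_heads = sum(w for (coin, _), w in p.items() if coin == "Heads")
p_original = sum(w for (_, who), w in p.items() if who == "Original")
print(p_heads, p_original)  # 0.333..., 0.666... -> P(Heads)=1/3, P(I am the Original)=2/3
```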
Another camp says the probability of Heads is 1/2, and the probability that I am the Original is 3/4. This corresponds to the Halfer camp in the Sleeping Beauty Problem. This camp endorses the "no new information" argument, though its members have different reasons regarding how to update given self-locating information.
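The corresponding Halfer numbers, under the assignment this camp typically uses (keep the coin at 1/2, and split "who I am" evenly only when two copies actually exist); again a sketch of one camp's bookkeeping, not an endorsement:

```python
# Double-halfer bookkeeping: the coin stays at 1/2 after waking.
p_heads = 1/2
p_original_given_heads = 1.0   # Heads: no Clone is ever made
p_original_given_tails = 0.5   # Tails: Original or Clone, weighted evenly
p_original = p_heads * p_original_given_heads + (1 - p_heads) * p_original_given_tails
print(p_original)              # 0.75 -> P(I am the Original) = 3/4
```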
If You Say P(Heads)=1/3
Say the mad scientist wants to encourage people to participate in his experiment, so he decides to give 2 gold bars to each copy after waking them up. He will always offer each copy a bet: you can give up these 2 bars for a chance to win 5 if the coin landed on Heads. All of this is disclosed to you in advance, so there is no new information when you are offered the bars and the bet. Say your objective is simple: "I just want more gold" (and you are risk-neutral). Given that you think P(Heads)=1/3, would you take the bet? And if you would take the bet, what makes your decision not reflect that probability?
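To make the comparison explicit, here is a minimal sketch of the risk-neutral calculation under each candidate credence, using the payoffs above (keep 2 bars for certain, or take the bet and get 5 bars on Heads, nothing on Tails):

```python
def ev_bet(p_heads):
    """Expected gold from taking the bet: 5 bars on Heads, 0 on Tails."""
    return p_heads * 5

for p in (1/3, 1/2):
    choice = "take the bet" if ev_bet(p) > 2 else "keep the 2 bars"
    print(f"P(Heads) = {p:.3f}: EV(bet) = {ev_bet(p):.3f} -> {choice}")
```

A credence of 1/3 says refuse (5/3 < 2), while 1/2 says take it (5/2 > 2). The question above is whether your actual choice matches the number you profess.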
If You Say P(Heads)=1/2
It faces the same trouble Elga pointed out in the Sleeping Beauty Problem: what happens when I learn that I am the Original, i.e., what is P(Heads|I am the Original)? A standard Bayesian update would give a probability of Heads of 2/3. But that can't be right. In this experiment, the Original and the Clone do not have to be woken up at the same time. The mad scientist could wake the Original first; in fact, the coin could be tossed after that. For dramatic effect, after telling you that you are the Original, the mad scientist could hand you the coin and let you toss it yourself. It seems absurd to say the probability of Heads is anything but 1/2. So why does the probability of Heads remain unchanged after learning you are the Original? How come a Bayesian update is not applicable to self-locating information?
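For concreteness, here is the mechanical conditioning the Halfer faces, reusing that camp's own assignments from above; a sketch of the standard Bayes computation, not a claim about what the right update is:

```python
# Halfer assignments: P(Heads) = 1/2 and P(I am the Original) = 3/4.
p_heads = 1/2
p_original_given_heads = 1.0   # Heads: no Clone was made
p_original_given_tails = 0.5   # Tails: Original or Clone, weighted evenly
p_original = p_heads * p_original_given_heads + (1 - p_heads) * p_original_given_tails

# Conditioning on "I am the Original":
p_heads_given_original = p_heads * p_original_given_heads / p_original
print(p_heads_given_original)  # 0.666... -> the 2/3 that seems absurd here
```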
I think we've had this discussion before, but let me try one more time...
You say, "And the "probability that I am the Original" is not a valid concept. "I" is an identification not based on anything but my first-person perspective. Whether "I" am the Original or the Clone is something primitive, not analyzable. Any attempt to justify this probability requires additional postulates, such as equating "I" to a random sample of some sort."
But to me, this means throwing out the whole notion of probability as a subjective measure of uncertainty. Perhaps you're fine with that, but it also means throwing out all use of probability in scientific applications, such as evaluating the probability that a drug will work and/or have side effects - because the practical use of such evaluations is to conclude that "I" will probably be cured by that drug, which is a statement you have declared meaningless. Maybe you're assuming that some "additional postulate" will fix that, but if so, I don't see why something similar wouldn't also render the probability that "I" am the Original in your problem meaningful.
I think an underlying problem here is an insistence on overly abstract thought experiments. You're assuming that the subject of the experiment cannot simply walk out the door of the room they're in and see whether they're in the same place where they went to sleep (in which case they're the Original), or in a different place. They can also do all sorts of other things, whose effects for good or bad may depend on whether or not they are the Original (before they figure this out). They will in general need some measure of uncertainty in making decisions of this sort - they can't simply say that self-locating probabilities are meaningless, when implicitly they will be using them to decide. This is all true even if they in fact decide to cooperate with the experimenter and do none of this.
The assumption that the experiment must proceed exactly as it is abstractly described severs all connection between the answers being proposed and the real world. There is then nothing stopping anyone from proposing that the probability of Heads is 1/2, or 3/4, or 2/7 - since none of these have consequences - and similarly the probability of "I am the Original" can be anything you like, or be meaningless, if you treat the person making the judgement as an abstract entity constrained to do nothing but what they're supposed to do in the problem statement, rather than as a real person.
I don't think rejecting self-locating probability means totally rejecting probability as a measure of uncertainty, because self-locating probability only applies to very specific anthropic problems. For example:
- An incubator creates two observers, the first in a blue room and the second in a red room. Given that I am one of the created observers but don't know whether I am the first or the second, what is the probability that I will see blue when I turn on the lights?
- Some people put me and another person into two rooms, one blue and one red, but the assignment process is random or unknown to me.