or: How I Learned to Stop Worrying and Love the Anthropic Trilemma
Imagine you live in a future society where the law allows up to a hundred instances of a person to exist at any one time, but insists that your property belongs to the original you, not to the copies. (Does this sound illogical? I may ask my readers to believe in the potential existence of uploading technology, but I would not insult your intelligence by asking you to believe in the existence of a society where all the laws were logical.)
So you decide to create your full allowance of 99 copies, and a customer service representative explains how the procedure works: the first copy is made, and informed he is copy number one; then the second copy is made, and informed he is copy number two, etc. That sounds fine until you start thinking about it, whereupon the native hue of resolution is sicklied o'er with the pale cast of thought. The problem lies in your anticipated subjective experience.
After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much. If you are the original, you have a 50% chance of finding yourself still so after step two, and so on. That means after step 99, your subjective probability of still being the original is 0.5^99, in other words as close to zero as makes no difference.
Assume you prefer existing as a dependent copy to not existing at all, but preferable still would be existing as the original (in the eyes of the law) and therefore still owning your estate. You might reasonably have hoped for a 1% chance of the subjectively best outcome. 0.5^99 sounds entirely unreasonable!
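(To put numbers on the gap, here is a quick Python sketch; it merely restates the story's arithmetic and is not part of the scenario itself.)

```python
# Anticipated probability of "still being the original" under the
# step-by-step rule: halved at each of the 99 copy operations.
sequential = 0.5 ** 99

# Under simultaneous copying: one original among 100 equally placed instances.
simultaneous = 1 / 100

print(f"sequential:   {sequential:.3e}")    # ~1.578e-30
print(f"simultaneous: {simultaneous:.3e}")  # 1.000e-02
```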
You explain your concerns to the customer service representative, who in turn explains that regulations prohibit making copies from copies (the otherwise obvious solution) due to concerns about accumulated errors (the glitches in early versions of the technology that occasionally caused such errors have long since been fixed, but the regulations haven't caught up yet). However, they do have a prototype machine that can make all 99 copies simultaneously, thereby giving you your 1% chance.
It seems strange that such a minor change in the path leading to the exact same end result could make such a huge difference to what you anticipate, but the philosophical reasoning seems unassailable, and philosophy has a superb track record of predictive accuracy... er, well, the reasoning seems unassailable. So you go ahead and authorize the extra payment to use the prototype system, and... your 1% chance comes up! You're still the original.
"Simultaneous?" a friend shakes his head afterwards when you tell the story. "No such thing. The Planck time is the shortest physically possible interval. Well if their new machine was that precise, it'd be worth the money, but obviously it isn't. I looked up the specs: it takes nearly three milliseconds per copy. That's into the range of timescales in which the human mind operates. Sorry, but your chance of ending up the original was actually 0.5^99, same as mine, and I got the cheap rate."
"But," you reply, "it's a fuzzy scale. If it was three seconds per copy, that would be one thing. But three milliseconds, that's really too short to perceive, even the entire procedure was down near the lower limit. My probability of ending up the original couldn't have been 0.5^99, that's effectively impossible, less than the probability of hallucinating this whole conversation. Maybe it was some intermediate value, like one in a thousand or one in a million. Also, you don't know the exact data paths in the machine by which the copies are made. Perhaps that makes a difference."
Are you convinced yet there is something wrong with this whole business of subjective anticipation?
Well, in a sense there is nothing wrong with it: it works fine in the kinds of situations for which it evolved. I'm not suggesting throwing it out, merely that it is not ontologically fundamental.
We've been down this road before. Life isn't ontologically fundamental, so we should not expect there to be a unique answer to questions like "is a virus alive" or "is a beehive a single organism or a group". Mind isn't ontologically fundamental, so we should not expect there to be a unique answer to questions like "at what point in development does a human become conscious". Particles aren't ontologically fundamental, so we should not expect there to be a unique answer to questions like "which slit did the photon go through". Yet it still seems that I am alive and conscious whereas a rock is not, and the reason it seems that way is because it actually is that way.
Similarly, subjective experience is not ontologically fundamental, so we should not expect there to be a unique answer to questions involving subjective probabilities of outcomes in situations involving things like copying minds (which our intuition did not evolve to handle). That's not a paradox, and it shouldn't give us headaches, any more than we (nowadays) get a headache pondering whether a virus is alive. It's just a consequence of using concepts that are not ontologically fundamental, in situations where they are not well defined. It all has to boil down to normality -- but only in normal situations. In abnormal situations, we just have to accept that our intuitions don't apply.
How palatable is the bullet I'm biting? Well, the way to answer that is to check whether there are any well-defined questions we still can't answer. Let's have a look at some of the questions we were trying to answer with subjective/anthropic reasoning.
Can I be sure I will not wake up as Britney Spears tomorrow?
Yes. For me to wake up as Britney Spears, would mean the atoms in her brain were rearranged to encode my memories and personality. The probability of this occurring is negligible.
If that isn't what we mean, then we are presumably referring to a counterfactual world in which every atom is in exactly the same location as in the actual world. That means it is the same world. To claim there is or could be any difference is equivalent to claiming the existence of p-zombies.
Can you win the lottery by methods such as "Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery. Then suspend the programs, merge them again, and start the result"?
No. The end result will still be that you win in no more than one out of several million Everett branches. That is what we mean by 'winning the lottery', to the extent that we mean anything well-defined by it. If we mean something else by it, we are asking a question that is not well-defined, so we are free to make up whatever answer we please.
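To see the two numbers side by side, here is a hedged sketch in Python; p_win and n_copies are made-up figures, chosen only for illustration:

```python
# Hypothetical numbers, for illustration only.
p_win = 1e-7      # weight of the branch in which the ticket wins
n_copies = 1e12   # copies woken briefly in the winning branch

# Fraction of branches (by weight) in which you win -- unchanged by copying:
branch_fraction = p_win

# Naive copy-counting "subjective" probability of experiencing the win:
naive = (n_copies * p_win) / (n_copies * p_win + (1 - p_win))

print(f"branch fraction: {branch_fraction:.1e}")  # 1.0e-07
print(f"copy-counting:   {naive:.5f}")            # ~0.99999
```

The copy-counting figure can be pushed as close to 1 as you like by adding copies, but on the view above it answers no well-defined question; the branch fraction is what 'winning the lottery' cashes out to.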
In the Sleeping Beauty problem, is 1/3 the correct answer?
Yes. 2/3 of Sleeping Beauty's waking moments during the experiment are located in the branch in which she was woken twice. That is what the question means, if it means anything.
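A quick Monte Carlo sketch (illustrative, not a proof) makes the count of waking moments explicit:

```python
import random

# Count waking moments, not experiments: heads -> woken once,
# tails -> woken twice (Monday and Tuesday).
trials = 100_000
heads_wakings = 0
tails_wakings = 0
for _ in range(trials):
    if random.random() < 0.5:
        heads_wakings += 1
    else:
        tails_wakings += 2

total = heads_wakings + tails_wakings
print(f"share of wakings in the twice-woken branch: {tails_wakings / total:.3f}")
# ~0.667, i.e. the thirder answer: P(heads | this waking) ~ 1/3
```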
Can I be sure I am probably not a Boltzmann brain?
Yes. I am the set of all subpatterns in the Tegmark multiverse that match a certain description. The vast majority of these are embedded in surrounding patterns that gave rise to them by lawful processes. That is what 'probably not a Boltzmann brain' means, if it means anything.
What we want from a solution to confusing problems like the essence of life, quantum collapse or the anthropic trilemma is for the paradoxes to dissolve, leaving a situation where all well-defined questions have well-defined answers. That's how it worked out for the other problems, and that's how it works out for the anthropic trilemma.
rwallace, nice reductio ad absurdum of what I will call the Subjective Probability Anticipation Fallacy (SPAF). It is somewhat important because the SPAF seems much like, and may be the cause of, the Quantum Immortality Fallacy (QIF).
You are on the right track. What you are missing, though, is an account of how to deal properly with anthropic reasoning, probability, and decisions. For that, see my paper on the 'Quantum Immortality' fallacy. I also explain it concisely on my blog, in 'Meaning of Probability in an MWI'.
Basically, personal identity is not fundamental. For practical purposes, there are various kinds of effective probabilities. There is no actual randomness involved.
It is a mistake to work with 'probabilities' directly. Because the sum is always normalized to 1, 'probabilities' deal (in part) with global information, but people easily forget that and think of them as local. The proper quantity to use is measure, which is the amount of consciousness that each type of observer has, such that effective probability is proportional to measure (by summing over the branches and normalizing). It is important to remember that total measure need not be conserved as a function of time.
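As a rough sketch of that recipe (the function and the unit measures here are made up for illustration, using the story's 99-copies-plus-original setup), effective probability is just measure normalized over the observer-types in question:

```python
def effective_probabilities(measures):
    """Normalize a dict of {observer_type: measure} to effective probabilities."""
    total = sum(measures.values())
    return {k: m / total for k, m in measures.items()}

# Before copying: a single instance with measure 1.0 (total measure 1.0).
before = {"original": 1.0}

# After copying: 100 instances of equal measure. Total measure is now
# 100.0 -- total measure need not be conserved over time, as noted above.
after = {"original": 1.0, **{f"copy_{i}": 1.0 for i in range(1, 100)}}

print(effective_probabilities(before)["original"])  # 1.0
print(effective_probabilities(after)["original"])   # 0.01
```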
As for the bottom line: If there are 100 copies, they all have equal measure, and for all practical purposes have equal effective probability.