or: How I Learned to Stop Worrying and Love the Anthropic Trilemma
Imagine you live in a future society where the law allows up to a hundred instances of a person to exist at any one time, but insists that your property belongs to the original you, not to the copies. (Does this sound illogical? I may ask my readers to believe in the potential existence of uploading technology, but I would not insult your intelligence by asking you to believe in the existence of a society where all the laws were logical.)
So you decide to create your full allowance of 99 copies, and a customer service representative explains how the procedure works: the first copy is made, and informed he is copy number one; then the second copy is made, and informed he is copy number two, etc. That sounds fine until you start thinking about it, whereupon the native hue of resolution is sicklied o'er with the pale cast of thought. The problem lies in your anticipated subjective experience.
After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much. If you are the original, you have a 50% chance of finding yourself still so after step two, and so on. That means after step 99, your subjective probability of still being the original is 0.5^99, in other words as close to zero as makes no difference.
Assume you prefer existing as a dependent copy to not existing at all, but preferable still would be existing as the original (in the eyes of the law) and therefore still owning your estate. You might reasonably have hoped for a 1% chance of the subjectively best outcome. 0.5^99 sounds entirely unreasonable!
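For concreteness, here is the arithmetic as a quick Python sketch:

```python
# The two candidate subjective probabilities of still being the original.
sequential = 0.5 ** 99    # halve your anticipation at each of 99 copyings
simultaneous = 1 / 100    # one privileged outcome among 100 equally weighted people

print(f"sequential:   {sequential:.3e}")    # ~1.578e-30
print(f"simultaneous: {simultaneous:.3e}")  # 1.000e-02
```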
You explain your concerns to the customer service representative, who in turn explains that regulations prohibit making copies from copies (the otherwise obvious solution) due to concerns about accumulated errors: the glitches in early versions of the technology that occasionally introduced errors have long since been fixed, but the regulations haven't caught up. However, they do have a prototype machine that can make all 99 copies simultaneously, thereby giving you your 1% chance.
It seems strange that such a minor change in the path leading to the exact same end result could make such a huge difference to what you anticipate, but the philosophical reasoning seems unassailable, and philosophy has a superb track record of predictive accuracy... er, well, the reasoning seems unassailable. So you go ahead and authorize the extra payment to use the prototype system, and... your 1% chance comes up! You're still the original.
"Simultaneous?" a friend shakes his head afterwards when you tell the story. "No such thing. The Planck time is the shortest physically possible interval. Well if their new machine was that precise, it'd be worth the money, but obviously it isn't. I looked up the specs: it takes nearly three milliseconds per copy. That's into the range of timescales in which the human mind operates. Sorry, but your chance of ending up the original was actually 0.5^99, same as mine, and I got the cheap rate."
"But," you reply, "it's a fuzzy scale. If it was three seconds per copy, that would be one thing. But three milliseconds, that's really too short to perceive, even the entire procedure was down near the lower limit. My probability of ending up the original couldn't have been 0.5^99, that's effectively impossible, less than the probability of hallucinating this whole conversation. Maybe it was some intermediate value, like one in a thousand or one in a million. Also, you don't know the exact data paths in the machine by which the copies are made. Perhaps that makes a difference."
Are you convinced yet there is something wrong with this whole business of subjective anticipation?
Well, in a sense there is nothing wrong with it; it works fine in the kind of situations for which it evolved. I'm not suggesting throwing it out, merely that it is not ontologically fundamental.
We've been down this road before. Life isn't ontologically fundamental, so we should not expect there to be a unique answer to questions like "is a virus alive" or "is a beehive a single organism or a group". Mind isn't ontologically fundamental, so we should not expect there to be a unique answer to questions like "at what point in development does a human become conscious". Particles aren't ontologically fundamental, so we should not expect there to be a unique answer to questions like "which slit did the photon go through". Yet it still seems that I am alive and conscious whereas a rock is not, and the reason it seems that way is because it actually is that way.
Similarly, subjective experience is not ontologically fundamental, so we should not expect there to be a unique answer to questions involving subjective probabilities of outcomes in situations involving things like copying minds (which our intuition was not evolved to handle). That's not a paradox, and it shouldn't give us headaches, any more than we (nowadays) get a headache pondering whether a virus is alive. It's just a consequence of using concepts that are not ontologically fundamental, in situations where they are not well defined. It all has to boil down to normality -- but only in normal situations. In abnormal situations, we just have to accept that our intuitions don't apply.
How palatable is the bullet I'm biting? Well, the way to answer that is to check whether there are any well-defined questions we still can't answer. Let's have a look at some of the questions we were trying to answer with subjective/anthropic reasoning.
Can I be sure I will not wake up as Britney Spears tomorrow?
Yes. For me to wake up as Britney Spears would mean the atoms in her brain had been rearranged to encode my memories and personality. The probability of this occurring is negligible.
If that isn't what we mean, then we are presumably referring to a counterfactual world in which every atom is in exactly the same location as in the actual world. That means it is the same world. To claim there is or could be any difference is equivalent to claiming the existence of p-zombies.
Can you win the lottery by methods such as "Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery. Then suspend the programs, merge them again, and start the result"?
No. The end result will still be that in all but roughly one out of several million Everett branches, you are not the winner. That is what we mean by 'winning the lottery', to the extent that we mean anything well-defined by it. If we mean something else by it, we are asking a question that is not well-defined, so we are free to make up whatever answer we please.
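To see both why the trick looks tempting and why it fails on this account, here is a toy calculation; the lottery odds and the copy count are my own illustrative numbers, not anything from the original thought experiment:

```python
# Illustrative numbers: a 1-in-14-million lottery, a trillion copies woken on a win.
p_win = 1 / 14_000_000
copies_if_win = 10 ** 12

# Counting waking person-moments, nearly all of them experience winning...
moments_win = p_win * copies_if_win
moments_lose = 1 - p_win
print(moments_win / (moments_win + moments_lose))  # ~0.99999

# ...but the measure of branches containing a winning ticket is unchanged,
# and that measure is what 'winning the lottery' means here.
print(p_win)  # ~7.14e-08
```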
In the Sleeping Beauty problem, is 1/3 the correct answer?
Yes. 2/3 of Sleeping Beauty's waking moments during the experiment are located in the branch in which she was woken twice. That is what the question means, if it means anything.
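A toy count, assuming the usual setup (woken once on heads, twice on tails, with equal-weight waking moments):

```python
# Enumerate Sleeping Beauty's waking moments across the two branches.
wakings = [("heads", "Monday"), ("tails", "Monday"), ("tails", "Tuesday")]

tails_wakings = [w for w in wakings if w[0] == "tails"]
print(len(tails_wakings) / len(wakings))  # 2/3 of waking moments lie in the twice-woken branch
```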
Can I be sure I am probably not a Boltzmann brain?
Yes. I am the set of all subpatterns in the Tegmark multiverse that match a certain description. The vast majority of these are embedded in surrounding patterns that gave rise to them by lawful processes. That is what 'probably not a Boltzmann brain' means, if it means anything.
What we want from a solution to confusing problems like the essence of life, quantum collapse or the anthropic trilemma is for the paradoxes to dissolve, leaving a situation where all well-defined questions have well-defined answers. That's how it worked out for the other problems, and that's how it works out for the anthropic trilemma.
The way subjective probability is usually modeled, there is this huge space of possibilities. And there is a measure defined over it. (I'm not a mathematician, so I may be using the wrong terminology, but what I mean is that every 'sufficiently nice' subset of this set of possibilities has a number attached which behaves something like an area for that subset of the space.)
And then, in this model, the probability of some proposition is the measure of the subset where the proposition is true divided by the measure of the whole set. Numerator and denominator. And then each time you learn something, you throw away all of the points in that space that are no longer possible. So, you have typically decreased (never increased) both numerator and denominator. Do the division again and get the new updated probabilities. The space of all possibilities only loses points and measure, it never gains.
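Here is a minimal sketch of that bookkeeping in Python; the worlds and their measures are invented purely for illustration:

```python
from fractions import Fraction

# A toy possibility space: each point is a possible world with a measure attached.
space = {"world_a": Fraction(1, 2), "world_b": Fraction(1, 4), "world_c": Fraction(1, 4)}

def probability(space, proposition):
    """Measure of the subset where the proposition holds, over the whole measure."""
    return sum(m for w, m in space.items() if proposition(w)) / sum(space.values())

def update(space, evidence):
    """Learning something: throw away the points that are no longer possible."""
    return {w: m for w, m in space.items() if evidence(w)}

# Learn that world_c is ruled out; numerator and denominator can only shrink.
space = update(space, lambda w: w != "world_c")
print(probability(space, lambda w: w == "world_a"))  # 2/3
```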
But I am not so sure this rule still applies when copying is involved. I think that each time you copy, you need to duplicate the subjective space of possibilities. The original space covered the possibilities from one subjective viewpoint. At the point of copying, that space is duplicated, because you now have two viewpoints. Initially, both original and copy are unsure which half of the doubled space is theirs. But when they find out, they each throw out half of it. And then, as they learn more, possibilities are thrown away from one or the other of the spaces, and each updates to his own subjective probabilities.
So how does this apply to the copying scenario above? Start with one universe. Copy it when you copy the person. Produce a second copy when you produce the second copy of the person. Produce the 99th copy of subjective reality when you produce the 99th copy of the person. If at any stage, one of these persons learns for sure which copy is his, then he can prune his own subjective universe back to the original size.
So, if the protocol is that after each copying the copy is told that he is a copy and the original is told that he is the original, then before any copying the person should anticipate being told "You are original" N times, where N is between 0 and 99 inclusive, and he should attach equal probability to each of those hundred outcomes. That is, he should be 99 to 1 sure he will hear it the first time, then (given that) 98 to 1 sure he will hear it the second time, and so on. Multiplying the conditional probabilities together, (99/100)(98/99)...(1/2) telescopes to exactly 1/100, so the sequential protocol agrees with the simultaneous one after all.
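A quick sketch of that bookkeeping; the numbering of the hundred viewpoints is mine:

```python
from fractions import Fraction

# Person 0 is the original; person k (k = 1..99) is the copy made at step k.
# Before any copying, all 100 eventual viewpoints carry equal weight.
overall = Fraction(1)
for step in range(1, 100):
    consistent = 101 - step      # viewpoints that heard "You are original" at every earlier step
    still_original = 100 - step  # of those, how many hear it again at this step
    if step in (1, 2, 99):
        print(f"step {step}: odds {still_original} to {consistent - still_original}")
    overall *= Fraction(still_original, consistent)

print(f"overall: {overall}")  # 1/100, matching the simultaneous protocol
```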
Forgive me if this is already known as one of the standard approaches to the problem.
Interesting! So you propose to model mind copying by using probabilities greater than 1. I wonder how far we can push this idea and what difficulties may arise...