or: How I Learned to Stop Worrying and Love the Anthropic Trilemma
Imagine you live in a future society where the law allows up to a hundred instances of a person to exist at any one time, but insists that your property belongs to the original you, not to the copies. (Does this sound illogical? I may ask my readers to believe in the potential existence of uploading technology, but I would not insult your intelligence by asking you to believe in the existence of a society where all the laws were logical.)
So you decide to create your full allowance of 99 copies, and a customer service representative explains how the procedure works: the first copy is made, and informed he is copy number one; then the second copy is made, and informed he is copy number two, etc. That sounds fine until you start thinking about it, whereupon the native hue of resolution is sicklied o'er with the pale cast of thought. The problem lies in your anticipated subjective experience.
After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much. If you are the original, you have a 50% chance of finding yourself still so after step two, and so on. That means after step 99, your subjective probability of still being the original is 0.5^99, in other words as close to zero as makes no difference.
Assume you prefer existing as a dependent copy to not existing at all, but preferable still would be existing as the original (in the eyes of the law) and therefore still owning your estate. You might reasonably have hoped for a 1% chance of the subjectively best outcome. 0.5^99 sounds entirely unreasonable!
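To put the two numbers side by side (a quick back-of-the-envelope comparison, on the story's own assumption of a 50/50 split at each step):

$$
P(\text{still the original after 99 sequential copies}) = \left(\tfrac{1}{2}\right)^{99} \approx 1.6 \times 10^{-30},
\qquad
P(\text{still the original, all 99 copies at once}) = \tfrac{1}{100} = 10^{-2}.
$$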
You explain your concerns to the customer service representative, who in turn explains that regulations prohibit making copies from copies (the otherwise obvious solution) due to concerns about accumulated errors (the glitches in early versions of the technology that occasionally introduced errors were fixed long ago, but the regulations haven't caught up). However, they do have a prototype machine that can make all 99 copies simultaneously, thereby giving you your 1% chance.
It seems strange that such a minor change in the path leading to the exact same end result could make such a huge difference to what you anticipate, but the philosophical reasoning seems unassailable, and philosophy has a superb track record of predictive accuracy... er, well the reasoning seems unassailable. So you go ahead and authorize the extra payment to use the prototype system, and... your 1% chance comes up! You're still the original.
"Simultaneous?" a friend shakes his head afterwards when you tell the story. "No such thing. The Planck time is the shortest physically possible interval. Well if their new machine was that precise, it'd be worth the money, but obviously it isn't. I looked up the specs: it takes nearly three milliseconds per copy. That's into the range of timescales in which the human mind operates. Sorry, but your chance of ending up the original was actually 0.5^99, same as mine, and I got the cheap rate."
"But," you reply, "it's a fuzzy scale. If it was three seconds per copy, that would be one thing. But three milliseconds, that's really too short to perceive, even the entire procedure was down near the lower limit. My probability of ending up the original couldn't have been 0.5^99, that's effectively impossible, less than the probability of hallucinating this whole conversation. Maybe it was some intermediate value, like one in a thousand or one in a million. Also, you don't know the exact data paths in the machine by which the copies are made. Perhaps that makes a difference."
Are you convinced yet there is something wrong with this whole business of subjective anticipation?
Well, in a sense there is nothing wrong with it; it works fine in the kind of situations for which it evolved. I'm not suggesting throwing it out, merely that it is not ontologically fundamental.
We've been down this road before. Life isn't ontologically fundamental, so we should not expect there to be a unique answer to questions like "is a virus alive" or "is a beehive a single organism or a group". Mind isn't ontologically fundamental, so we should not expect there to be a unique answer to questions like "at what point in development does a human become conscious". Particles aren't ontologically fundamental, so we should not expect there to be a unique answer to questions like "which slit did the photon go through". Yet it still seems that I am alive and conscious whereas a rock is not, and the reason it seems that way is because it actually is that way.
Similarly, subjective experience is not ontologically fundamental, so we should not expect there to be a unique answer to questions involving subjective probabilities of outcomes in situations involving things like copying minds (which our intuition did not evolve to handle). That's not a paradox, and it shouldn't give us headaches, any more than we (nowadays) get a headache pondering whether a virus is alive. It's just a consequence of using concepts that are not ontologically fundamental, in situations where they are not well defined. It all has to boil down to normality -- but only in normal situations. In abnormal situations, we just have to accept that our intuitions don't apply.
How palatable is the bullet I'm biting? Well, the way to answer that is to check whether there are any well-defined questions we still can't answer. Let's have a look at some of the questions we were trying to answer with subjective/anthropic reasoning.
Can I be sure I will not wake up as Britney Spears tomorrow?
Yes. For me to wake up as Britney Spears would mean the atoms in her brain were rearranged to encode my memories and personality. The probability of this occurring is negligible.
If that isn't what we mean, then we are presumably referring to a counterfactual world in which every atom is in exactly the same location as in the actual world. That means it is the same world. To claim there is or could be any difference is equivalent to claiming the existence of p-zombies.
Can you win the lottery by methods such as "Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery. Then suspend the programs, merge them again, and start the result"?
No. The end result will still be that you are the winner in no more than one out of several million Everett branches. That is what we mean by 'winning the lottery', to the extent that we mean anything well-defined by it. If we mean something else by it, we are asking a question that is not well-defined, so we are free to make up whatever answer we please.
In the Sleeping Beauty problem, is 1/3 the correct answer?
Yes. 2/3 of Sleeping Beauty's waking moments during the experiment are located in the branch in which she was woken twice. That is what the question means, if it means anything.
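Spelled out as a counting sketch (assuming the standard protocol of a fair coin, one awakening on heads and two on tails, with each waking moment weighted equally):

$$
P(\text{heads}\mid\text{awakening}) = \frac{\tfrac12 \cdot 1}{\tfrac12 \cdot 1 + \tfrac12 \cdot 2} = \frac13,
\qquad
P(\text{twice-woken branch}\mid\text{awakening}) = \frac23 .
$$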
Can I be sure I am probably not a Boltzmann brain?
Yes. I am the set of all subpatterns in the Tegmark multiverse that match a certain description. The vast majority of these are embedded in surrounding patterns that gave rise to them by lawful processes. That is what 'probably not a Boltzmann brain' means, if it means anything.
What we want from a solution to confusing problems like the essence of life, quantum collapse or the anthropic trilemma is for the paradoxes to dissolve, leaving a situation where all well-defined questions have well-defined answers. That's how it worked out for the other problems, and that's how it works out for the anthropic trilemma.
I'm not sure what this "whole business of ... anticipation" has to do with subjective experience.
Suppose that, a la Jaynes, we programmed a robot with the rules of probability, the flexibility of recognizing various predicates about reality, and the means to apply the rules of probability when choosing between courses of action to maximize a utility function. Let's assume this utility function is implemented as an internal register H which is incremented or decremented according to whether the various predicates are satisfied.
This robot could conceivably be equipped with predicates that allow for the contingency of having copies made of itself, copies which we'll assume to include complete records of the robot's internal state up to the moment of copying, including register H.
The question then becomes one of specifying what, precisely, is meant by maximizing the expected value of H, given the possibility of copying.
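To make the setup concrete, here is a minimal sketch of such a robot's state (my own illustration, not anything from Jaynes; the names RobotState, apply_predicate and copy_robot are invented for the example). The point is just that the register H gets duplicated wholesale whenever a copy is made:

```python
from dataclasses import dataclass, replace
from typing import Tuple

@dataclass(frozen=True)
class RobotState:
    """Snapshot of the robot's internal state, including the utility register H."""
    H: float                # the utility register
    copy_index: int = 0     # which copy this is ("copy N of M"); 0 = the pre-copy state
    copy_count: int = 1     # how many copies exist

def apply_predicate(state: RobotState, satisfied: bool, delta: float) -> RobotState:
    """Increment or decrement H according to whether a predicate is satisfied."""
    return replace(state, H=state.H + (delta if satisfied else -delta))

def copy_robot(state: RobotState, n_copies: int) -> Tuple[RobotState, ...]:
    """Make n_copies copies, each carrying a complete record of the state, H included."""
    return tuple(replace(state, copy_index=i, copy_count=n_copies)
                 for i in range(1, n_copies + 1))
```

Once copy_robot has run there is no longer a single "value of H" to point at, only one value per element of the returned tuple; that is exactly the ambiguity at issue.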
Suppose we want to know what the robot would decide given a copy-and-torture scenario as suggested by Wei Dai. The question of "what the robot would do" surely does not depend on whether the robot thinks of itself as rational, whether it can be said to have subjective anticipation, whether time consistency is important to it, and so on. These considerations are irrelevant to predicting the robot's behaviour.
The question of "what the robot would do" depends solely on what it formally means to have the robot maximize the expected value of H, since "the value of H" becomes an ambiguous specification from the moment we allow for copying.
(On the other hand, was that specification ever unambiguous to begin with?)
If the robot is programmed to construe "the value of H" as meaning what we might call the "indexical value" of H, that is, the value-held-by-the-present-copy, then it (or rather its A copy) would presumably act in the torture scenario as Wei Dai claims most humans would act, and refuse to press the button. But since the "indexical value of H" is ill-defined from the perspective of the pre-copying robot, with respect to the situation after the copy, the robot would err when making this decision prior to copying, and would therefore predictably exhibit what we'd call a time inconsistency.
If the robot is programmed to construe "the value of H" as the sum (or the average) of the indexical values of H for all copies of its state which are descendants of the state which is making the decision, then - regardless of when it makes the choice and regardless of which copy is A or B - it would decide as I have claimed a one-boxer would decide. (Though, working out these implications of the "choice machine" frame, I'm less sure than before of the relation between Wei Dai's scenario and Newcomb's problem.)
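As a toy illustration of how the two construals come apart (my own sketch, not Wei Dai's exact setup; the payoff numbers and the press/refuse framing are invented stand-ins: pressing costs copy A 10 units of H and gains copy B 15):

```python
from typing import Callable, Dict, List

# Hypothetical H-deltas for each descendant copy under each action.
# Index 0 = copy A (the one facing the button), index 1 = copy B.
OUTCOMES: Dict[str, List[float]] = {
    "press":  [-10.0, 15.0],
    "refuse": [0.0, 0.0],
}

def indexical_value(deltas: List[float], my_index: int) -> float:
    """Construal 1: "the value of H" means the value held by the present copy."""
    return deltas[my_index]

def summed_value(deltas: List[float]) -> float:
    """Construal 2: "the value of H" means the sum over all descendant copies.
    (The average would give the same ordering here.)"""
    return sum(deltas)

def choose(value_of: Callable[[List[float]], float]) -> str:
    """Pick the action that maximizes the chosen construal of "the value of H"."""
    return max(OUTCOMES, key=lambda action: value_of(OUTCOMES[action]))

print(choose(lambda d: indexical_value(d, my_index=0)))  # -> "refuse"
print(choose(summed_value))                              # -> "press"
```

Copy A maximizing the indexical construal refuses; the pre-copy robot maximizing the summed construal presses; hence the time inconsistency described above.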
While writing the above, I realized - this is what I was driving at with the parenthetical comment about ambiguity - that even in a world without copying you get to make plenty of non-trivial decisions about what it means, formally, to maximize the value of H. In particular, you could be faced with decisions you must make now but which will have an effect in the future and whose effect on H may depend on the value of H at that time. (There are plenty of real-life examples, which I'll leave as an exercise for the reader.) Just how you program the robot to deal with those seems (as far as I can tell) underspecified by the laws of probability alone.
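For one minimal instance (my toy numbers; it illustrates only the simplest version of the ambiguity, namely "maximize H as of when?", rather than the H-dependent effects just mentioned): suppose H starts at 0 and the robot must choose now between +10 immediately followed by -3 at the next step, or nothing now and +8 at the next step.

```python
from typing import Callable, Dict, List

# Deltas to H at t=0 and t=1 under each action (invented numbers).
TRAJECTORIES: Dict[str, List[float]] = {
    "early": [10.0, -3.0],
    "late":  [0.0, 8.0],
}

def h_path(deltas: List[float]) -> List[float]:
    """Running value of the register H after each time step, starting from 0."""
    h, path = 0.0, []
    for d in deltas:
        h += d
        path.append(h)
    return path

def best(score: Callable[[List[float]], float]) -> str:
    """Pick the action maximizing one particular formalization of "the value of H"."""
    return max(TRAJECTORIES, key=lambda a: score(h_path(TRAJECTORIES[a])))

print(best(lambda path: path[-1]))   # "H at the end"        -> "late"  (8 vs 7)
print(best(lambda path: sum(path)))  # "H summed over time"  -> "early" (17 vs 8)
```

Both readings are compatible with "maximize the expected value of H"; nothing in the laws of probability alone picks one over the other.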
A shorter way of saying all the above is that if we taboo "anticipation" when predicting what a certain class of agent will do, we don't necessarily find that there is anything particularly strange about a predicate saying "the present state of the robot is copy N of M". What we find is that we might want to program the robot differently if we want it to deal in a certain way with the contingency of copying; that is unsurprising. We also find that subjective experience needn't enter the picture at all.