After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much.
That's not the way my subjective anticipation works, so the assertion of uncontroversialness is premature. I anticipate that after step one I have a 100% chance of being the copy, and a 100% chance of being the original. (Which is to say, both of those individuals will remember my anticipation.)
Here’s another, possibly more general, argument against subjective anticipation.
Consider the following thought experiment. You’re told that you will be copied once and then the two copies will be randomly labeled A and B. Copy A will be given a button with a choice: either push the button, in which case A will be tortured, or don’t push it, in which case copy B will be tortured instead, but for a longer period of time.
From your current perspective (before you’ve been copied), you would prefer that copy A push the button. But if A anticipates any subjective experiences, clearly it must anticipate that it would experience being tortured if and only if it were to push the button. Human nature is such that a copy of you would probably not push the button regardless of any arguments given here, but let’s put that aside and consider what ideal rationality says. I think it says that A should push the button, because to do otherwise would be to violate time consistency.
If we agree that the correct decision is to push the button, then to reach that decision A must (dis)value any copy of you being tortured the same as any other copy, and its subjective anticipation of experiencing torture en...
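For concreteness, here is a minimal sketch of the pre-copy calculation (Python; the torture durations are invented for illustration, since the thought experiment only says B's torture is longer):

```python
# Minimal sketch of the pre-copy decision; the torture durations are
# made-up numbers (the thought experiment above specifies none).
TORTURE_A = 1.0   # disutility if copy A pushes the button (A is tortured briefly)
TORTURE_B = 5.0   # disutility if A refrains (B is tortured for longer)

# Before copying, you weigh torture of any copy equally, so you compare totals:
options = {"push": TORTURE_A, "refrain": TORTURE_B}
best = min(options, key=options.get)
print(best)  # -> 'push': the pre-copy preference that time consistency
             # says A should still follow after the split.
```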
Personal identity/anticipated experience is a mechanism through which a huge chunk of preference is encoded in human minds, on an intuitive level. A lot of preference is expressed in terms of "future experience", which breaks down once there is no unique referent for that concept in the future. Whenever you copy human minds, you also copy this mechanism, which virtually guarantees lack of reflective consistency in preference in humans.
Thought experiments with mind-copying effectively involve dramatically changing the agent's values, but don't emphasize this point, as if it were a minor consideration. Getting around this particular implementation, directly to the preference it represents, and so being rational in situations of mind-copying, is not something humans are wired to be able to do.
Are you convinced yet there is something wrong with this whole business of subjective anticipation?
I'm not sure what this "whole business of ... anticipation" has to do with subjective experience.
Suppose that, à la Jaynes, we programmed a robot with the rules of probability, the flexibility of recognizing various predicates about reality, and the means to apply the rules of probability when choosing between courses of action to maximize a utility function. Let's assume this utility function is implemented as an internal register H which is inc...
For me to wake up as Britney Spears would mean the atoms in her brain were rearranged to encode my memories and personality... If that isn't what we mean, then we are presumably referring to a counterfactual world in which every atom is in exactly the same location as in the actual world. That means it is the same world. To claim there is or could be any difference is equivalent to claiming the existence of p-zombies.
I know p-zombies are unpopular around here, so maybe by 'equivalent' you merely meant 'equivalently wacky', but it's worth noting that th...
Upvoted, but the Boltzmann problem is that, at a casual glance, it looks like the vast majority of subpatterns that match a given description ARE Boltzmann brains. After all, maxentropy is forever.
After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much. If you are the original, you have a 50% chance of finding yourself still so after step two, and so on. That means after step 99, your subjective probability of still being the original is 0.5^99, in other words as close to zero as makes no difference.
The way subjective probability is usually modeled, there is this huge space of possibilities. And there is a measure defined over it. (I'm not a mathematician, so I may be using the wron...
I wrote a response to this post here:
In order to solve this riddle, we only have to figure out what happens when you've been cloned twice and whether the answer should be 1/3 or 1/4. The first step is correct: the subjective probability of being the original should be 1/2 after you've pressed the cloning button once. However, after we've pressed the cloning button twice, in addition to the agents who existed after that first button press, we now have an agent that falsely remembers existing at that point in time.
Distributing th...
Can you win the lottery by methods such as "Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery. Then suspend the programs, merge them again, and start the result"?
I would much rather make a billion clones of myself whenever I experience great sex with a highly desirable partner. Point: making the clones to experience the lottery win is about the experience, not the lottery. I'm not sure I particularly want to have orgasmic-...
How I Learned to Stop Worrying and Love the Anthropic Trilemma
My impression was that the Anthropic Trilemma was Eliezer uncharacteristically confusing himself when reality itself didn't need to be confusing.
...After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much. If you are the original, you have a 50% chance of finding yourself still so after step two, and so on. That means after step 99, your subjective probability of still being the original is 0.5^99, in other words as close to zero as makes no difference.
Can I be sure I will not wake up as Britney Spears tomorrow?
Yes. For me to wake up as Britney Spears would mean the atoms in her brain were rearranged to encode my memories and personality. The probability of this occurring is negligible.
It seems like you're discussing two types of copying procedure, or could be. The Ebborian copying seems to strongly imply the possibility of waking up as the copy or original (I have no statement on the probabilities), but a "teleporting Kirk" style of copying doesn't seem to imply this. You're presumably not...
My probability of ending up the original couldn't have been 0.5^99, that's effectively impossible, less than the probability of hallucinating this whole conversation.
Does anyone have a sense of what the lower limit is on meaningful probability estimates for individual anticipation? Right, like there should be some probability p(E) where, upon experiencing E, even a relatively sane and well-balanced person ought to predict that the actual state of the world is ~E, because p(I'm Crazy or I've Misunderstood) >> p(E).
More to the point, p(E) should be...
rwallace, nice reductio ad absurdum of what I will call the Subjective Probability Anticipation Fallacy (SPAF). It is somewhat important because the SPAF seems much like, and may be the cause of, the Quantum Immortality Fallacy (QIF).
You are on the right track. What you are missing, though, is an account of how to deal properly with anthropic reasoning, probability, and decisions. For that, see my paper on the 'Quantum Immortality' fallacy. I also explain it concisely on my blog, in Meaning of Probability in an MWI.
Basically, personal identity is not fu...
Interesting one. A hundred Joes, one 'original', ninety-nine 'copies'.
If we copy Joe once and let him be, he's 50% likely to be the original. If we copy the copy, Joe remains 50% likely to be the original, while the status of the copies does not change.
After the first copying process, we ended up with the original and a copy: 50% of the resulting sentient beings were the original. If we do that again, we again have two sentient beings involved in the process, the original and a new copy. Again, there is a 50% chance for a random sentient byproduct of this copying process to be the original.
But there's something you didn't ...
So let's look what happens in this process.
t=1: You know that you are the original.
t=2: We create a clone in such a way that you don't know whether you are a clone or not. At this time you have a subjective probability of 50% of being a clone.
t=3: We tell clone 1 that they are a clone. Your subjective probability of being a clone is now 0%, since you were not informed that you were a clone.
t=4: We create another clone that provides you with a subjective probability of 50% of being a clone.
t=5: Clone 2 finds out that they are a clone. Since you weren't tol...
I don't really get what you're saying.
The normal way of looking at it is that you are only going to be you in the future. The better way of looking at it is that an unknown person is equally likely to be any person during any period of a given length.
The results of the former don't work well. They lead to people preferentially doing things to help their future selves, rather than helping others. This is rather silly. Future you isn't you either.
I'm sorry, I didn't read the rest of your post after seeing the 0.5^99 estimate of the probability of being the original, because the math looked very wrong to me, but I didn't know why. While I agree there is nothing controversial about saying that after one step you have a 50% chance of being the original, I'm pretty sure it is not true that you only have a 25% chance after two steps. Yes, if you are the original after step one, you have a 50% probability of still being the original after step two. So, if Oi is the probability of being the original ...
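To make the disputed arithmetic explicit, here is a minimal sketch of the two competing calculations for the two-step case (Python; an illustration of the disagreement, not a resolution of it):

```python
# The two calculations the thread is arguing about, for two copying steps.
# Per-step halving (the post's rule): each copying event halves the
# original's subjective probability of still being the original.
halving = 0.5 * 0.5                                # -> 0.25

# Uniform counting: after two copyings there are three agents,
# exactly one of whom is the original.
agents = ["original", "copy 1", "copy 2"]
counting = agents.count("original") / len(agents)  # -> 0.333...

print(halving, counting)  # the two rules disagree: 1/4 vs 1/3
```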
I believe in continuity of substance, not similarity of pattern, as the basis of identity. If you are the original, that is what you are for all time. You cannot wake up as the copy. At best, a new mind can be created with false beliefs (such as false memories, of experiences which did not happen to it). Do I still face a problem of "subjective anticipation"?
ETA: Eliezer said of the original problem, "If you can't do the merge without killing people, then the trilemma is dissolved." Under a criterion of physical continuity, you cannot ...
But when Kirk1 disappears a few seconds after Kirk2 appears, all of a sudden we see the act for what it is, namely murder.
I'm not comfortable with 'for what it is, namely'. I would be comfortable with 'see the act as murder'. I don't play 'moral reference class tennis'. Killing a foetus before it is born is killing a foetus before it is born (or abortion). Creating a copy then removing the original is creating a copy and then removing the original (or teleportation). Killing someone who wants to die is killing someone who wants to die (or euthanasia). Calling any of these things murder is not necessarily wrong, but it is not a factual judgement; it is a moral judgement. The speaker wants people to have the same kind of reaction that they have to other acts that are called 'murder'.
'Murder' is just more complex than that. So is 'killing' and so is 'identity'. You can simplify the concepts arbitrarily so that 'identity' is a property of a specific combination of matter if you want to but that just means you need to make up a new word to describe "that thing that looks, talks and acts like the same Kirk every episode and doesn't care at all that he gets de-materialised all the t...
How in the world does that not constitute murder?
Any plans Kirk had prior to his "original" being dematerialized are still equally likely to be carried out by the "copy" Kirk, any preferences he had will still be defended, and so on. Nothing of consequence seems to have been lost; an observer unaware of this little drama will notice nothing different from what he would have predicted, had Kirk traveled by more conventional means.
To say that a murder has been committed seems like a strained interpretation of the facts. There's a difference between burning of the Library of Alexandria and destroying your hard drive when you have a backup.
Currently, murder and information-theoretic murder coincide, for the same reasons that death and information-theoretic death coincide. When that is no longer the case, the distinction will become more salient.
I'll grant that by being sufficiently clever, you can probably reconcile quantum mechanics with whatever ontology you like. But the real question is: why bother? Why not take the Schroedinger equation literally? Physics has faced this kind of issue before -- think of the old episode about epicycles, for instance -- and the lesson seems clear enough to me. What's the difference here?
For what it's worth, I don't see the arbitrariness of collapse postulates and the arbitrariness of world-selection as symmetrical. It's not even clear to me that we need to worry about extracting "worlds" from blobs of amplitude, but to the extent we do, it seems basically like an issue of anthropic selection; whereas collapse postulates seem like invoking magic.
But in any case you don't really address the objection that
(e)verything is entangled with everything else, indirectly if not directly, and so all I could say is that the universe as a whole has identity across time.
Instead, you merely raise the issue of finding "individual worlds", and argue that if you can manage to find an individual world, then you can say that that world has an identity that persists over time. Fair enough, but how does this help you rescue the idea that personal identity resides in "continuity of substance", when the latter may still be meaningless at the level of individual particles?
This 0.5^99 figure only appears if each copy bifurcates iteratively.
Rather than
1 becoming 2, becoming 3, becoming 4, ... becoming 100
we'd have
1 becoming 2, becoming 4, becoming 8, ... becoming 2^99
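A quick sketch of the counting behind this point (Python; illustrative only):

```python
# Counting agents under the two copying schemes contrasted above.
# Sequential (the post's setup): only the original is copied, 99 times.
sequential_agents = 1 + 99                 # 100 agents, one original
print(1 / sequential_agents)               # 0.01 under uniform counting

# Iterated bifurcation: every instance is copied at each of 99 steps.
bifurcating_agents = 2 ** 99               # one original among 2^99
print(1 / bifurcating_agents, 0.5 ** 99)   # these two match exactly
```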
or: How I Learned to Stop Worrying and Love the Anthropic Trilemma
Imagine you live in a future society where the law allows up to a hundred instances of a person to exist at any one time, but insists that your property belongs to the original you, not to the copies. (Does this sound illogical? I may ask my readers to believe in the potential existence of uploading technology, but I would not insult your intelligence by asking you to believe in the existence of a society where all the laws were logical.)
So you decide to create your full allowance of 99 copies, and a customer service representative explains how the procedure works: the first copy is made, and informed he is copy number one; then the second copy is made, and informed he is copy number two, etc. That sounds fine until you start thinking about it, whereupon the native hue of resolution is sicklied o'er with the pale cast of thought. The problem lies in your anticipated subjective experience.
After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much. If you are the original, you have a 50% chance of finding yourself still so after step two, and so on. That means after step 99, your subjective probability of still being the original is 0.5^99, in other words as close to zero as makes no difference.
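Spelled out as a minimal sketch (Python), the halving rule just described, alongside a simple count of the resulting instances:

```python
# The halving rule from the paragraph above, as a loop:
p = 1.0
for step in range(99):   # one copy is made at each of the 99 steps
    p *= 0.5
print(p)          # 0.5**99, roughly 1.6e-30
print(1 / 100)    # compare: one original among the 100 resulting instances
```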
Assume you prefer existing as a dependent copy to not existing at all, but preferable still would be existing as the original (in the eyes of the law) and therefore still owning your estate. You might reasonably have hoped for a 1% chance of the subjectively best outcome. 0.5^99 sounds entirely unreasonable!
You explain your concerns to the customer service representative, who in turn explains that regulations prohibit making copies from copies (the otherwise obvious solution) due to concerns about accumulated errors (the technical glitches in the early versions of the technology that created occasional errors have long been fixed, but the regulations haven't caught up yet). However, they do have a prototype machine that can make all 99 copies simultaneously, thereby giving you your 1% chance.
It seems strange that such a minor change in the path leading to the exact same end result could make such a huge difference to what you anticipate, but the philosophical reasoning seems unassailable, and philosophy has a superb track record of predictive accuracy... er, well the reasoning seems unassailable. So you go ahead and authorize the extra payment to use the prototype system, and... your 1% chance comes up! You're still the original.
"Simultaneous?" a friend shakes his head afterwards when you tell the story. "No such thing. The Planck time is the shortest physically possible interval. Well if their new machine was that precise, it'd be worth the money, but obviously it isn't. I looked up the specs: it takes nearly three milliseconds per copy. That's into the range of timescales in which the human mind operates. Sorry, but your chance of ending up the original was actually 0.5^99, same as mine, and I got the cheap rate."
"But," you reply, "it's a fuzzy scale. If it was three seconds per copy, that would be one thing. But three milliseconds, that's really too short to perceive, even the entire procedure was down near the lower limit. My probability of ending up the original couldn't have been 0.5^99, that's effectively impossible, less than the probability of hallucinating this whole conversation. Maybe it was some intermediate value, like one in a thousand or one in a million. Also, you don't know the exact data paths in the machine by which the copies are made. Perhaps that makes a difference."
Are you convinced yet there is something wrong with this whole business of subjective anticipation?
Well in a sense there is nothing wrong with it, it works fine in the kind of situations for which it evolved. I'm not suggesting throwing it out, merely that it is not ontologically fundamental.
We've been down this road before. Life isn't ontologically fundamental, so we should not expect there to be a unique answer to questions like "is a virus alive" or "is a beehive a single organism or a group". Mind isn't ontologically fundamental, so we should not expect there to be a unique answer to questions like "at what point in development does a human become conscious". Particles aren't ontologically fundamental, so we should not expect there to be a unique answer to questions like "which slit did the photon go through". Yet it still seems that I am alive and conscious whereas a rock is not, and the reason it seems that way is because it actually is that way.
Similarly, subjective experience is not ontologically fundamental, so we should not expect there to be a unique answer to questions involving subjective probabilities of outcomes in situations involving things like copying minds (which our intuition was not evolved to handle). That's not a paradox, and it shouldn't give us headaches, any more than we (nowadays) get a headache pondering whether a virus is alive. It's just a consequence of using concepts that are not ontologically fundamental, in situations where they are not well defined. It all has to boil down to normality -- but only in normal situations. In abnormal situations, we just have to accept that our intuitions don't apply.
How palatable is the bullet I'm biting? Well, the way to answer that is to check whether there are any well-defined questions we still can't answer. Let's have a look at some of the questions we were trying to answer with subjective/anthropic reasoning.
Can I be sure I will not wake up as Britney Spears tomorrow?
Yes. For me to wake up as Britney Spears would mean the atoms in her brain were rearranged to encode my memories and personality. The probability of this occurring is negligible.
If that isn't what we mean, then we are presumably referring to a counterfactual world in which every atom is in exactly the same location as in the actual world. That means it is the same world. To claim there is or could be any difference is equivalent to claiming the existence of p-zombies.
Can you win the lottery by methods such as "Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery. Then suspend the programs, merge them again, and start the result"?
No. The end result will still be that you are not the winner in more than one out of several million Everett branches. That is what we mean by 'winning the lottery', to the extent that we mean anything well-defined by it. If we mean something else by it, we are asking a question that is not well-defined, so we are free to make up whatever answer we please.
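A minimal sketch of this answer (Python; p_win is an assumed illustrative number standing in for "one out of several million Everett branches"):

```python
# Illustrative numbers only: p_win is assumed, standing in for the
# "one out of several million Everett branches" above.
p_win = 1e-7                 # measure of the branch where the ticket wins
copies = 10 ** 12            # copies woken only in the winning branch

# Fraction of observer-copies that experience winning (what the trick inflates):
frac_observers = (p_win * copies) / (p_win * copies + (1 - p_win))
# Measure of the branch in which the ticket actually wins (what 'winning
# the lottery' means, to the extent it means anything well-defined):
print(frac_observers)        # ~0.99999 -- inflated by the copying
print(p_win)                 # 1e-7 -- unchanged by the copying
```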
In the Sleeping Beauty problem, is 1/3 the correct answer?
Yes. 2/3 of Sleeping Beauty's waking moments during the experiment are located in the branch in which she was woken twice. That is what the question means, if it means anything.
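A minimal Monte Carlo sketch of this counting (Python; assuming the standard setup of one awakening on heads, two on tails):

```python
import random

# Monte Carlo of the Sleeping Beauty setup: heads -> woken once,
# tails -> woken twice. What fraction of all awakenings lie in the
# twice-woken branch?
random.seed(0)
tails_awakenings = total_awakenings = 0
for _ in range(100_000):
    if random.random() < 0.5:   # heads: one awakening
        total_awakenings += 1
    else:                       # tails: two awakenings
        total_awakenings += 2
        tails_awakenings += 2

print(tails_awakenings / total_awakenings)  # ~0.666, i.e. the thirder's 2/3
```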
Can I be sure I am probably not a Boltzmann brain?
Yes. I am the set of all subpatterns in the Tegmark multiverse that match a certain description. The vast majority of these are embedded in surrounding patterns that gave rise to them by lawful processes. That is what 'probably not a Boltzmann brain' means, if it means anything.
What we want from a solution to confusing problems like the essence of life, quantum collapse or the anthropic trilemma is for the paradoxes to dissolve, leaving a situation where all well-defined questions have well-defined answers. That's how it worked out for the other problems, and that's how it works out for the anthropic trilemma.