or: How I Learned to Stop Worrying and Love the Anthropic Trilemma

Imagine you live in a future society where the law allows up to a hundred instances of a person to exist at any one time, but insists that your property belongs to the original you, not to the copies. (Does this sound illogical? I may ask my readers to believe in the potential existence of uploading technology, but I would not insult your intelligence by asking you to believe in the existence of a society where all the laws were logical.)

So you decide to create your full allowance of 99 copies, and a customer service representative explains how the procedure works: the first copy is made, and informed he is copy number one; then the second copy is made, and informed he is copy number two, etc. That sounds fine until you start thinking about it, whereupon the native hue of resolution is sicklied o'er with the pale cast of thought. The problem lies in your anticipated subjective experience.

After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much. If you are the original, you have a 50% chance of finding yourself still so after step two, and so on. That means after step 99, your subjective probability of still being the original is 0.5^99, in other words as close to zero as makes no difference.

Assume you prefer existing as a dependent copy to not existing at all, but preferable still would be existing as the original (in the eyes of the law) and therefore still owning your estate. You might reasonably have hoped for a 1% chance of the subjectively best outcome. 0.5^99 sounds entirely unreasonable!
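
To make the arithmetic concrete, here is a minimal sketch of the two calculations, assuming the story's 99-copy count and the disputed 50%-per-step rule of subjective anticipation (the sketch only restates the numbers above; it does not argue for the rule):

```python
# Sketch of the two calculations in the story. The 50%-per-step rule is the
# (disputed) model of subjective anticipation under discussion, not a fact.
copies = 99

# Sequential copying: the "remaining original" allegedly has a 50% chance of
# still finding itself the original after each step.
p_sequential = 0.5 ** copies        # about 1.6e-30

# Simultaneous copying: one original among 100 equally placed candidates.
p_simultaneous = 1 / (copies + 1)   # 0.01, the hoped-for 1% chance

print(p_sequential, p_simultaneous)
```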

You explain your concerns to the customer service representative, who in turn explains that regulations prohibit making copies from copies (the otherwise obvious solution) due to concerns about accumulated errors (early versions of the technology did occasionally introduce errors; those glitches have long since been fixed, but the regulations haven't caught up yet). However, they do have a prototype machine that can make all 99 copies simultaneously, thereby giving you your 1% chance.

It seems strange that such a minor change in the path leading to the exact same end result could make such a huge difference to what you anticipate, but the philosophical reasoning seems unassailable, and philosophy has a superb track record of predictive accuracy... er, well the reasoning seems unassailable. So you go ahead and authorize the extra payment to use the prototype system, and... your 1% chance comes up! You're still the original.

"Simultaneous?" a friend shakes his head afterwards when you tell the story. "No such thing. The Planck time is the shortest physically possible interval. Well if their new machine was that precise, it'd be worth the money, but obviously it isn't. I looked up the specs: it takes nearly three milliseconds per copy. That's into the range of timescales in which the human mind operates. Sorry, but your chance of ending up the original was actually 0.5^99, same as mine, and I got the cheap rate."

"But," you reply, "it's a fuzzy scale. If it was three seconds per copy, that would be one thing. But three milliseconds, that's really too short to perceive, even the entire procedure was down near the lower limit. My probability of ending up the original couldn't have been 0.5^99, that's effectively impossible, less than the probability of hallucinating this whole conversation. Maybe it was some intermediate value, like one in a thousand or one in a million. Also, you don't know the exact data paths in the machine by which the copies are made. Perhaps that makes a difference."

Are you convinced yet there is something wrong with this whole business of subjective anticipation?

Well, in a sense there is nothing wrong with it: it works fine in the kinds of situations for which it evolved. I'm not suggesting we throw it out, merely that it is not ontologically fundamental.

We've been down this road before. Life isn't ontologically fundamental, so we should not expect there to be a unique answer to questions like "is a virus alive" or "is a beehive a single organism or a group". Mind isn't ontologically fundamental, so we should not expect there to be a unique answer to questions like "at what point in development does a human become conscious". Particles aren't ontologically fundamental, so we should not expect there to be a unique answer to questions like "which slit did the photon go through". Yet it still seems that I am alive and conscious whereas a rock is not, and the reason it seems that way is because it actually is that way.

Similarly, subjective experience is not ontologically fundamental, so we should not expect there to be a unique answer to questions involving subjective probabilities of outcomes in situations involving things like copying minds (which our intuition did not evolve to handle). That's not a paradox, and it shouldn't give us headaches, any more than we (nowadays) get a headache pondering whether a virus is alive. It's just a consequence of using concepts that are not ontologically fundamental, in situations where they are not well defined. It all has to boil down to normality -- but only in normal situations. In abnormal situations, we just have to accept that our intuitions don't apply.

How palatable is the bullet I'm biting? Well, the way to answer that is to check whether there are any well-defined questions we still can't answer. Let's have a look at some of the questions we were trying to answer with subjective/anthropic reasoning.

Can I be sure I will not wake up as Britney Spears tomorrow?

Yes. For me to wake up as Britney Spears, would mean the atoms in her brain were rearranged to encode my memories and personality. The probability of this occurring is negligible.

If that isn't what we mean, then we are presumably referring to a counterfactual world in which every atom is in exactly the same location as in the actual world. That means it is the same world. To claim there is or could be any difference is equivalent to claiming the existence of p-zombies.

Can you win the lottery by methods such as "Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery.  Then suspend the programs, merge them again, and start the result"?

No. The end result will still be that you are not the winner in more than one out of several million Everett branches. That is what we mean by 'winning the lottery', to the extent that we mean anything well-defined by it. If we mean something else by it, we are asking a question that is not well-defined, so we are free to make up whatever answer we please.
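
A minimal sketch of the branch bookkeeping behind that answer. The lottery odds and copy count below are invented for illustration; the point is only that making copies conditional on winning changes the observer head-count inside the winning branch, not the measure of that branch:

```python
# Hypothetical numbers for illustration: a 1-in-10-million lottery,
# with a trillion copies woken up only in the winning branch.
p_win  = 1e-7
copies = 10 ** 12

# The measure of winning branches is untouched by the copying trick:
winning_measure = p_win                           # still 1e-7

# What copying changes is only the copy-weighted head-count of observers
# who find themselves inside the winning branch:
copy_weighted_fraction = (p_win * copies) / (p_win * copies + (1 - p_win))
print(winning_measure, copy_weighted_fraction)    # ~1e-7 vs ~0.99999
```

Whether to care about the branch measure or the copy-weighted head-count is exactly the disputed question; the sketch only separates the two quantities.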

In the Sleeping Beauty problem, is 1/3 the correct answer?

Yes. 2/3 of Sleeping Beauty's waking moments during the experiment are located in the branch in which she was woken twice. That is what the question means, if it means anything.
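
A sketch of the counting that answer relies on, assuming the thirder bookkeeping of waking moments described above (the code just enumerates the moments; it does not argue for that bookkeeping over alternatives):

```python
# Enumerate Sleeping Beauty's waking moments in the two coin branches
# and count how many fall in the twice-woken (tails) branch.
wakings = [("heads", "Monday"),     # heads: woken once
           ("tails", "Monday"),     # tails: woken twice
           ("tails", "Tuesday")]

tails_share = sum(coin == "tails" for coin, _ in wakings) / len(wakings)
print(tails_share)   # 2/3 of waking moments, matching the answer above
```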

Can I be sure I am probably not a Boltzmann brain?

Yes. I am the set of all subpatterns in the Tegmark multiverse that match a certain description. The vast majority of these are embedded in surrounding patterns that gave rise to them by lawful processes. That is what 'probably not a Boltzmann brain' means, if it means anything.

What we want from a solution to confusing problems like the essence of life, quantum collapse or the anthropic trilemma is for the paradoxes to dissolve, leaving a situation where all well-defined questions have well-defined answers. That's how it worked out for the other problems, and that's how it works out for the anthropic trilemma.

Comments (91):
Cyan:

After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much.

That's not the way my subjective anticipation works, so the assertion of uncontroversialness is premature. I anticipate that after step one I have a 100% chance of being the copy, and a 100% chance of being the original. (Which is to say, both of those individuals will remember my anticipation.)

rwallace:
Right, I'm getting the feeling I was too focused on the section of the audience that subscribes to the theory of subjective anticipation against which I was arguing, and forgetting about the section that already doesn't :-)
Wei Dai:
Cyan, I gave another argument against subjective anticipation, which does cover the way your subjective anticipation works. Please take a look. (I'm replying to you here in case you miss it.)
Cyan:
Thanks for the link. When you write that it's an argument against subjective anticipation, I'm not sure what you are specifically arguing against. If you're just saying that my kind of subjective anticipation will lead to time-inconsistent decisions (and hence is irrational), I agree.
utilitymonster:
I think this guy disagrees: Weatherson, Brian. "Should We Respond to Evil with Indifference?" Philosophy and Phenomenological Research 70 (2005): 613-35. Link: http://brian.weatherson.org/papers.shtml
JGWeissman:
I would prefer if, before I click on the link, the comment tells me something more than someone disagrees with Cyan on the internet. Good information to include would be the nature of the disagreement (what competing claim is made) and a summary of the reasoning that backs up that competing claim. I further note that your link points to a list of articles, none of which have the name you cited. This is not helpful.
RobinZ:
It's hidden in "Older Work" - you have to click on "Major Published Papers" to see it. But agreed on all other points.

Here’s another, possibly more general, argument against subjective anticipation.

Consider the following thought experiment. You’re told that you will be copied once and then the two copies will be randomly labeled A and B. Copy A will be given a button with a choice: either push the button, in which case A will be tortured, or don’t push it, in which case copy B will be tortured instead, but for a longer period of time.

From your current perspective (before you’ve been copied), you would prefer that copy A push the button. But if A anticipates any subjective experiences, clearly it must anticipate that it would experience being tortured if and only if it were to push the button. Human nature is such that a copy of you would probably not push the button regardless of any arguments given here, but let’s put that aside and consider what ideal rationality says. I think it says that A should push the button, because to do otherwise would be to violate time consistency.

If we agree that the correct decision is to push the button, then to reach that decision A must (dis)value any copy of you being tortured the same as any other copy, and its subjective anticipation of experiencing torture en... (read more)
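
One way to make the time-consistency point concrete is to compare the two policies from the pre-copy point of view with copy A's naive post-copy anticipation. The torture durations below are invented for illustration; the comment only says B's torture is longer:

```python
# Invented durations: pushing tortures copy A for 1 hour; not pushing
# tortures copy B for 10 hours (the comment only says "longer").
outcomes = {
    "push":     {"A": 1, "B": 0},
    "not_push": {"A": 0, "B": 10},
}

# Pre-copy view: both A and B are continuations of you, so count both.
for policy, torture in outcomes.items():
    print(policy, "total torture:", sum(torture.values()))
# push -> 1, not_push -> 10: pre-copy you prefers that A push the button.

# Copy A's naive subjective anticipation counts only A's own torture:
print({policy: torture["A"] for policy, torture in outcomes.items()})
# {'push': 1, 'not_push': 0}: A is tempted to refuse, which is the inconsistency.
```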

Personal identity/anticipated experience is a mechanism through which a huge chunk of preference is encoded in human minds, on an intuitive level. A lot of preference is expressed in terms of "future experience", which breaks down once there is no unique referent for that concept in the future. Whenever you copy human minds, you also copy this mechanism, which virtually guarantees lack of reflective consistency in preference in humans.

Thought experiments with mind-copying effectively involve dramatically changing the agent's values, but don't emphasize this point, as if it's a minor consideration. Getting around this particular implementation, directly to preference represented by it, and so being rational in situations of mind-copying, is not something humans are wired to be able to do.

Wei Dai:
Morendil's comment made me realize that my example is directly analogous to your Counterfactual Mugging: in that thought experiment, Omega's coin flip splits you into two copies (in two different possible worlds), and like in my example, the rational thing to do, in human terms, is to sacrifice your own interests to help your copy. To me, this analogy indicates that it's not mind-copying that's causing the apparent value changes, but rather Bayesian updating. I tend to agree with you, but I note that Eliezer disagrees.
Vladimir_Nesov:
Locating future personal experience is possible when we are talking about possible futures, and not possible when we are talking about the future containing multiple copies at the same time. Only in the second case does the mechanism for representing preference break down. The problem is not (primarily) a failure to assign preference to the right person; it's a failure to assign it at all. Humans just get confused, don't know what the correct preference is, and it's not a question of not being able to shut up and calculate, as it's not clear what the answer should be, and how to find it. More or less the same problem as with assigning value to groups of other people: should we care more when there are a lot of people at stake, or the same about them all, but less about each of them? ("Shut up and divide".)

In counterfactual mugging, there is a clear point of view (before the mugging/coin flip) from where preference is clearly represented, via the intermediary of future personal experience as seen from that time, so we can at least shut up and calculate. That's not the issue I'm talking about. While for some approaches to decision-making it might not matter whether we are talking about multiplicative indexical uncertainty or additive counterfactuals, the issue here is the concept of personal identity through which a chunk of preference is represented in the human mind. Decision theories can handle situations where personal identity doesn't make sense, but we'd still need to get preference about those situations from somewhere, and there is no clear assignment of it. Some questions about fine distinctions in preference aren't ever going to be answered by humans; we don't have the capacity to see the whole picture.
Roko:
Which brings up the question: suppose that your values are defined in terms of an ontology which is not merely false but actually logically inconsistent, though in a way that is too subtle for you to currently grasp. Is it rational to try to learn the logical truth, and thereby lose most or all of what you value? Should we try to hedge against such a possibility when designing a friendly AI? If so, how?
Vladimir_Nesov:
Do you want to lose what you value upon learning that you were confused? More realistically, the correct preference is to adapt the past preference to something that does make sense. More generally, if you should lose that aspect of preference, it means you prefer to do so; if you shouldn't, it means you don't prefer to do so. Whatever the case, doing what you prefer to do upon receiving new information is in accordance with what you prefer. This is all tautologous, but you are seeing a conflict of interest somewhere, so I don't think you've made the concepts involved in the situation explicit enough to recognize the tautologies. Preference talks about what you should do, and what you do is usually real (until you pass to a next level).
Roko:
Perhaps an example will illustrate. The theist plans his life around doing God's will: when he is presented with a persuasive argument from scripture that God's will is for him to do X rather than Y, he will do X. Perhaps he has frequently adjusted his strategies when considering scripture, revelations (which are, in fact, hallucinations his subconscious generates), and Papal decree. It seems that he loses a lot upon learning that God does not exist. As a matter of pure psychological fact, he will be depressed (probably). Moreover, suppose that he holds beliefs that are mutually contradictory, but only in subtle ways; perhaps he thinks that God is in complete control of all things in the world, and that God is all-loving (all good), but the world which he thinks he lives in manifestly contains a lot of suffering. (The Theodicy Problem). It seems that the best thing for him is to remain ignorant of the paradox, and of his false, inconsistent and confused beliefs, and for events to transpire in a lucky way so that he never suffers serious material losses from his pathological decision-making. Consider the claim that what a Friendly AI should do for such a person is the following: keep them unaware of the facts, and optimize within their framework of reality.
Vladimir_Nesov:
This seems to confuse stuff that happens to a human with decision theory. What happens with a human (in human's thoughts, etc.) can't be "contradictory" apart from a specific interpretation that names some things "contradictory". This interpretation isn't fundamentally interesting for the purposes of optimizing the stuff. The ontology problem is asked about the FAI, not about a person that is optimized by FAI. For FAI, a person is just a pattern in the environment, just like any other object, with stars and people and paperclips all fundamentally alike; the only thing that distinguishes them for FAI is what preference tells should be done in each case. When we are talking about decision theory for FAI, especially while boxing the ontology inside the FAI, it's not obvious how to connect that with particular interpretations of what happens in environment, nor should we try, really. Now, speaking of people in environment, we might say that the theist is going to feel frustrated for some time upon realizing that they were confused for a long time. However I can't imagine the whole process of deconverting to be actually not preferable, as compared to remaining confused (especially given that in the long run, the person will need to grow up). Even the optimal strategy is going to have identifiable negative aspects, but it may only make the strategy suboptimal if there is a better way. Also, for a lot of obvious negative aspects, such as negative emotions accompanying an otherwise desirable transition, FAI is going to invent a way of avoiding that aspect, if that's desirable.
Roko:
And that the person might be the source of preference. This is fairly important. But, in any case, FAI theory is only here as an intuition pump for evaluating "what would the best thing be, according to this person's implicit preferences?" If it is possible to have preference-like things within a fundamentally contradictory belief system, and that's all the human in question has, then knowing about the inconsistency might be bad.
Vladimir_Nesov:
This is actually wrong. Whatever the AI starts with is its formal preference, it never changes, it never depends on anything. That this formal preference was actually intended to copycat an existing pattern in environment is a statement about what sorts of formal preference it is, but it is enacted the same way, in accordance with what should be done in that particular case based on what formal preference tells. Thus, what you've highlighted in the quote is a special case, not an additional feature. Also, I doubt it can work this way. True, but implicit preference is not something that person realizes to be preferable, and not something expressed in terms of confused "ontology" believed by that person. The implicit preference is a formal object that isn't built from fuzzy patterns interpreted in the person's thoughts. When you speak of "contradictions" in person't beliefs, you are speaking on a wrong level of abstraction, like if you were discussing parameters in a clustering algorithm as being relevant to reliable performance of hardware on which that algorithm runs. A belief system can't be "fundamentally contradictory" because it's not "fundamental" to begin with. What do you mean by "bad"? Bad according to what? It doesn't follow from confused thoughts that preference is somehow brittle.
Strange7:
A Friendly AI might also resolve the situation by presenting itself as god, eliminating suffering in the world, and then giving out genuine revelations with adequately good advice.
Roko:
Eliminating the appearance of suffering in the world would probably be bad for such a theist. He spends much of his time running Church Bazaars to raise money for charity. Like many especially dedicated charity workers, he is somewhat emotionally and axiologically dependent upon the existence of the problem he is working against.
Strange7:
In that case, eliminate actual suffering as fast as possible, then rapidly reduce the appearance of suffering in ways calculated to make it seem like the theist's own actions are a significant factor, and eventually substitute some other productive activity.
Vladimir_Nesov:
To get back at this point: This depends on how we understand "values". Let's not conceptualize values is being defined in terms of an "ontology".
wedrifid:
You do not lose any options by gaining more knowledge. If the optimal response to have when your values are defined in terms of an inconsistent ontology is to go ahead and act as if the ontology is consistent then you can still choose to do so even once you find out the dark secret. You can only gain from knowing more. If your values are such that they do not even allow a mechanism for creating an best effort approximation of values in the case of ontological enlightenment then you are out of luck no matter what you do. Even if you explicitly value ignorance of the fact that nothing you value can have coherent value, the incoherency of your value system makes the ignorance value meaningless too. Make the most basic parts of the value system in an ontology that has as little chance as possible of being inconsistent. Reference to actual humans can ensure that a superintelligent FAI's value system will be logically consistent if it is in fact possible for a human to have a value system defined in a consistent ontology. If that is not possible then humans are in a hopeless position. But at least I (by definition) wouldn't care.
Vladimir_Nesov:
If preference is expressed in terms of what you should do, not what's true about the world, new observations never influence preference, so we can fix it at the start and never revise it (which is an important feature for constructing FAI, since you only ever have a hand in its initial construction). (To whoever downvoted this without comment -- it's not as stupid an idea as it might sound; what's true about the world doesn't matter for preference, but it does matter for decision-making, as decisions are made depending on what's observed. By isolating preference from influence of observations, we fix it at the start, but since it determines what should be done depending on all possible observations, we are not ignoring reality.)
wedrifid:
In the situation described by Roko the agent has doubt about its understanding of the very ontology that its values are expressed in. If it were an AI that would effectively mean that we designed it using mathematics that we thought was consistent but turns out to have a flaw. The FAI has self improved to a level where it has a suspicion that the ontology that is used to represent its value system is internally inconsistent and must decide whether to examine the problem further. (So we should have been able to fix it at the start but couldn't because we just weren't smart enough.)
Vladimir_Nesov:
If its values are not represented in terms of an "ontology", this won't happen.
Roko:
See the example of the theist (above). Do you really think that the best possible outcome for him involves knowing more?
Vladimir_Nesov:
How could it be otherwise? His confusion doesn't define his preference, and his preference doesn't set this particular form of confusion as being desirable. Maybe Wei Dai's post is a better way to communicate the distinction I'm making: A Master-Slave Model of Human Preferences (though it's different, the distinction is there as well).
wedrifid:
No, I think his values are defined in terms of a consistent ontology in which ignorance may result in a higher value outcome. If his values could not in fact be expresesd consistently then I do hold that (by definition) he doesn't lose by knowing more.
Ghatanathoah:
You might be able to get a scenario like this without mind-copying by using a variety of Newcomb's Problem. You wake up without any memories of the previous day. You then see Omega in front of you, holding two boxes. Omega explains that if you pick the first box, you will be tortured briefly now. If you pick the second box, you won't be. However, Omega informs you that he anticipated which box you would choose. If he predicted you'd pick the first box, the day before yesterday he drugged you so you'd sleep through the whole day. If he predicted you'd pick the second box he tortured you for a very long period of time the previous day and erased your memory of it afterward. He acknowledges that torture one doesn't remember afterwards isn't as bad as torture one does, and assures you that he knows this and extended the length of the previous day's torture to compensate. It seem to me like there'd be a strong temptation to pick the second box. However, your self from a few days ago would likely pay to be able to stop you from doing this.
wedrifid:
Is that an area in which a TDT would describe the appropriate response using different words to a UDT, even if they suggest the same action? I'm still trying to clarify the difference between UDT, TDT and my own understanding of DT. I would not describe the-updating-that-causes-the-value-changes as 'bayesian updating', rather 'naive updating'. (But this is a terminology preference.)
Wei Dai:
My understanding is that TDT would not press the button, just like it wouldn't give $100 to the counterfactual mugger.
wedrifid:
Thanks. So they actually do lead to different decisions? That is good to know... but puts me one step further away from confidence!
Roko:
I wish I could upvote twice as this is extremely important.
Morendil:
ISTM that someone who would one-box on Newcomb for the reasons given by Gary Drescher (act for the sake of what would be the case, even in the absence of causality) would press the button here; if you're the kind of person who wouldn't press the button, then prior to copying you would anticipate more pain than if you're the other kind. Getting the button is like getting the empty large box in the transparent boxes version of Newcomb's problem.
Chris_Leong:
This problem is effectively equivalent to counterfactual mugging. I don't know whether you should pay/press in this problem, but you should certainly pre-commit to doing this beforehand. Anyway, it doesn't prove that you value these copies intrinsically, just that you've agreed to a trade where you take into account their interests in return for them taking into account yours.

Are you convinced yet there is something wrong with this whole business of subjective anticipation?

I'm not sure what this "whole business of ... anticipation" has to do with subjective experience.

Suppose that, a la Jaynes, we programmed a robot with the rules of probability, the flexibility of recognizing various predicates about reality, and the means to apply the rules of probability when choosing between courses of action to maximize a utility function. Let's assume this utility function is implemented as an internal register H which is inc... (read more)

For me to wake up as Britney Spears, would mean the atoms in her brain were rearranged to encode my memories and personality... If that isn't what we mean, then we are presumably referring to a counterfactual world in which every atom is in exactly the same location as in the actual world. That means it is the same world. To claim there is or could be any difference is equivalent to claiming the existence of p-zombies.

I know p-zombies are unpopular around here, so maybe by 'equivalent' you merely meant 'equivalently wacky', but it's worth noting that th... (read more)

bogus:
I don't really understand this distinction. If property dualism is to explain subjective experience at all, the word 'me' must refer to a bundle of phenomenological properties associated to e.g. Richard Chappell's brain. Saying that 'I am now Britney Spears' would just mean that the same identifier now referred to a different bundle of phenomenal qualia. True, the physical and even mental features of the world would be unchanged, but it seems easy to model haeccetism just by adding a layer of indirection between your subjective experience and the actual bundle of qualia. And given that the possiblity of 'waking up as someone else' is somewhat intuitive, this might be worthwhile.

Upvoted, but the Boltzmann problem is that it casually looks like the vast majority of subpatterns that match a given description ARE Boltzmann Brains. After all, maxentropy is forever.

rwallace:
But so is eternal inflation, so we are comparing infinities of the same cardinality. The solution seems to be that the Kolmogorov complexity of a typical Boltzmann brain is high, because its space-time coordinates have a length in bits exceeding the length of the description of the brain itself; by Solomonoff induction, we can therefore assign them a very low measure, even in total.
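
A toy version of that measure comparison. All bit-counts below are invented; the argument only needs the program that locates a Boltzmann brain to be much longer than the program that locates a lawfully evolved one:

```python
# Toy Solomonoff-style comparison: a pattern located by a program of
# length L bits gets weight on the order of 2**-L, so shorter programs
# dominate the total measure. All numbers below are invented.
lawful_bits    = 1_000   # physics + initial conditions + a cheap index into a structured region
boltzmann_bits = 5_000   # raw space-time coordinates longer than the brain's own description

exponent_gap = boltzmann_bits - lawful_bits
print(f"Boltzmann copies carry about 2**-{exponent_gap} of the lawful copies' measure")
```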

After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much. If you are the original, you have a 50% chance of finding yourself still so after step two, and so on. That means after step 99, your subjective probability of still being the original is 0.5^99, in other words as close to zero as makes no difference.

The way subjective probability is usually modeled, there is this huge space of possibilities. And there is a measure defined over it. (I'm not a mathematician, so I may be using the wron... (read more)

cousin_it:
Interesting! So you propose to model mind copying by using probabilities greater than 1. I wonder how far we can push this idea and what difficulties may arise...

This is really well written. I hope you post more.

Again, probability doesn't work with indexical uncertainty.

Nisan:
Could you explain how the post you linked to relates to your comment?
khafra:
His comments point towards expanding the gp to something like

I wrote a response to this post here:

In order to solve this riddle, we only have to figure out what happens when you've been cloned twice and whether the answer to this should be 1/3 or 1/4. The first step is correct: the subjective probability of being the original should be 1/2 after you've pressed the cloning button once. However, after we've pressed the cloning button twice, in addition to the agents who existed after that first button press, we now have an agent that falsely remembers existing at that point in time.
Distributing th
... (read more)

Can you win the lottery by methods such as "Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery. Then suspend the programs, merge them again, and start the result"?

I would much rather make a billion clones of myself whenever I experience great sex with a highly desirable partner. Point: making the clones to experience the lottery win is about the experience and not the lottery. I'm not sure I particularly want to have orgasmic-... (read more)

How I Learned to Stop Worrying and Love the Anthropic Trilemma

My impression was that the Anthropic Trilemma was Eliezer uncharacteristically confusing himself over something that reality itself didn't need to be confused about.

After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much. If you are the original, you have a 50% chance of finding yourself still so after step two, and so on. That means after step 99, your subjective probability of still being the original is 0.5^99, in other words as close to zero as makes no

... (read more)

Can I be sure I will not wake up as Britney Spears tomorrow?

Yes. For me to wake up as Britney Spears, would mean the atoms in her brain were rearranged to encode my memories and personality. The probability of this occurring is negligible.

It seems like you're discussing two types of copying procedure, or could be. The Ebborian copying seems to strongly imply the possibility of waking up as the copy or original (I have no statement on the probabilities), but a "teleporting Kirk" style of copying doesn't seem to imply this. You're presumably not... (read more)

My probability of ending up the original couldn't have been 0.5^99, that's effectively impossible, less than the probability of hallucinating this whole conversation.

Does anyone have a sense of what the lower limit is on meaningful probability estimates for individual anticipation? Right, like there should be some probability p(E) where, upon experiencing E, even a relatively sane and well-balanced person ought to predict that the actual state of the world is ~E, because p(I'm Crazy or I've Misunderstood) >> p(E).

More to the point, p(E) should be... (read more)

jimrandomh:
Careful; your question contains the implied assumption that P(hallucinate X) doesn't vary with X. For example, suppose I look at a string of 100 digits produced by a random number generator. Whatever that string is, my prior probability of it being that particular string was 10^-100, but no matter how long that string is, it doesn't mean I hallucinated it. What really matters is the ratio of how likely an event is to how likely it is that your brain would've hallucinated it, and that depends more on your mental representations than on reality.
Mass_Driver:
I respectfully disagree. Suppose I bet that a 30-digit random number generator will deliver the number 938726493810487327500934872645. And, lo and behold, the generator comes up with 938726493810487327500934872645 on the first try. If I am magically certain that I am actually dealing with a random number generator, I ought to conclude that I am hallucinating, because p(me hallucinating) > p(guessing a 30-digit string correctly). Note that this is true even though p(me hallucinating the number 938726493810487327500934872645) is quite low. I am certainly more likely, for example, to hallucinate the number 123456789012345678901234567890 than I am to hallucinate the number 938726493810487327500934872645. But since I am trying to find the minimum meaningful probability, I don't care too much about the upper bounds on the odds that I'm hallucinating -- I want the lower bound on the odds that I'm hallucinating, and the lower bound would correspond to a mentally arbitrary number like 938726493810487327500934872645. In other words, if you agree with me that p(I correctly guess the number 938726493810487327500934872645) < p(I'm hallucinating the number 938726493810487327500934872645), then you should certainly agree with me that p(I correctly guess the number 123456789012345678901234567890) < p(I'm hallucinating the number 123456789012345678901234567890). The probability of guessing the correct number is always 10^-30; the probability of hallucinating varies, but I suspect that the probability of hallucinating is more than 10^-30 for either number.
jimrandomh:
Choosing a number and betting that you will see it increases the probability that you will wrongly believe that you have seen that number in the future to a value that does not depend on how long that number is. P(hallucinate number N|placed a bet on N) >> P(hallucinate number N).
Mass_Driver:
Yes, I completely agree. To show that I understand your point, I will suggest possible numbers for each of these variables. I would guess, with very low confidence, that on a daily basis, P(hallucinate a number) might be something like 10^-7, that P(hallucinate a 30-digit number N) might be something like 10^-37, and that P(hallucinate a 30-digit number N | placed a bet on N) might be something like 10^-9. Obviously, p(correctly guess a 30-digit number) is still 10^-30. Even given all of these values, I still claim that we should be interested in P(hallucinate a 30-digit number N | placed a bet on N). This number is probably roughly constant across ostensibly sane people, and I claim that it marks a lower bound below which we should not care about the difference in probabilities for a non-replicable event. I am not certain of these claims, and I would greatly appreciate your analysis of them.
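
Plugging the comment's own (explicitly low-confidence) estimates into the odds form of Bayes' rule makes the comparison explicit:

```python
# The comment's guessed numbers, used only to illustrate the odds comparison.
p_correct_guess         = 1e-30   # generator really produced your 30-digit number
p_hallucinate_given_bet = 1e-9    # you wrongly believe you saw the number you bet on

# If you seem to see your number, compare the two explanations directly:
odds = p_hallucinate_given_bet / p_correct_guess
print(f"{odds:.0e} : 1 in favour of 'I did not really see that'")   # ~1e+21 : 1
```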
wnoise:
Note that there are explanations other than "I correctly guessed", and "I'm hallucinating". "This generator is broken and always comes up 938726493810487327500934872645, but I've forgotten that it's broken consciously, but remember that number", or "The generator is really remotely controlled, and it has a microphone that heard me guess, and transmitted that to the controller, who wants to mess with my head."
Mass_Driver:
Oh, I completely agree. I'm using "hallucinating" as shorthand for all kinds of conspiracy theories, and assuming away the chance that the generator is broken. Obviously the first thing you should do if you appear to guess right is check the generator.
RobinZ:
By the way: Welcome to Less Wrong! If you want to post an introduction, you can do so in that thread.
RobinZ:
This seems related to the post I submitted in January, The Prediction Hierarchy. I think I'd have to know what you're using it for to know what to do with any given lower bound.

rwallace, nice reductio ad absurdum of what I will call the Subjective Probability Anticipation Fallacy (SPAF). It is somewhat important because the SPAF seems much like, and may be the cause of, the Quantum Immortality Fallacy (QIF).

You are on the right track. What you are missing, though, is an account of how to deal properly with anthropic reasoning, probability, and decisions. For that, see my paper on the 'Quantum Immortality' fallacy. I also explain it concisely on my blog, in 'Meaning of Probability in an MWI'.

Basically, personal identity is not fu... (read more)

Interesting one. 100 Joes, one 'original', some 'copies'.

If we copy Joe once, and let him be, he's 50% certainly original. If we copy the copy, Joe remains 50% certainly original, while status of copies does not change.

After the first copying process, we ended up with the original and a copy. 50% of the resulting sentient beings were original. If we do that again, again, we have two sentient beings, original and a new copy. Again, 50% chance for a random sentient byproduct of this copying process to be the original.

But there's something you didn't ... (read more)

Jonii:
New take. The problem can be described as a branching tree, where each copy-branch is cut off, leaving only 1 copy. So, at step 2, we would've had 4 possibilities, 1 original and three copies, but branches of the copy were cut away, so we are left with three Joes, 1 original, 1 equally likely copy, and... 1 copy that's twice as likely?

So let's look at what happens in this process.

t=1: You know that you are the original.
t=2: We create a clone in such a way that you don't know whether you are a clone or not. At this time you have a subjective probability of 50% of being a clone.
t=3: We tell clone 1 that they are a clone. Your subjective probability of being a clone is now 0%, since you were not informed that you were a clone.
t=4: We create another clone, which again gives you a subjective probability of being a clone of 50%.
t=5: Clone 2 finds out that they are a clone. Since you weren't tol... (read more)

I don't really get what you're saying.

The normal way of looking at it is that you are only going to be you in the future. The better way of looking at it is that an unknown person is equally likely to be any person during any period of a given length.

The results of the former don't work well. They lead to people preferentially doing things to help their future selves, rather than helping others. This is rather silly. Future you isn't you either.

I don't believe in the Tegmark multiverse. ;)

I'm sorry, I didn't read the rest of your post after seeing the 0.5^99 estimate of the probability of being the original because the math looked very wrong to me, but I didn't know why. While I agree there is nothing controversial about saying that after one step you have a 50% chance of being the original, I'm pretty sure it is not true that you only have a 25% chance after two steps. Yes, if you are the original after step one, you have a 50% probability of still being the original after step two. So, if Oi is the probability of being the original ... (read more)

rwallace:
Hmm. Perhaps I should've put in a note to the effect of: if you don't subscribe to the theory of subjective anticipation which would give that estimate, great, just skip to the summary break and read on from there.
Nisan:
If P(O1) is the probability of an event, then it doesn't change.

I believe in continuity of substance, not similarity of pattern, as the basis of identity. If you are the original, that is what you are for all time. You cannot wake up as the copy. At best, a new mind can be created with false beliefs (such as false memories, of experiences which did not happen to it). Do I still face a problem of "subjective anticipation"?

ETA: Eliezer said of the original problem, "If you can't do the merge without killing people, then the trilemma is dissolved." Under a criterion of physical continuity, you cannot ... (read more)

wedrifid:
So Scotty killed Kirk and then created a zombie-Kirk back on the Enterprise? It would seem that the whole Star Trek is a fantasy story about a space faring necromancer who repeatedly kills his crew then uses his evil contraption to reanimate new beings out of base matter while rampaging through space seeking new and exotic beings to join his never ending orgy of death.
toto:
Yes, yes he did, time and again (substituting "copy" for "zombie", as MP points out below). That's the Star Trek paradox.

Imagine that there is a glitch in the system, so that the "original" Kirk fails to dematerialise when the "new" one appears, so we find ourselves with two copies of Kirk. Now Scotty says "Sowwy Captain" and zaps the "old" Kirk into a cloud of atoms. How in the world does that not constitute murder?

That was not the paradox. The "paradox" is this: the only difference between "innocuous" teleportation, and the murder scenario described above, is a small time-shift of a few seconds. If Kirk1 disappears a few seconds before Kirk2 appears, we have no problem with that. We even show it repeatedly in programmes aimed at children. But when Kirk1 disappears a few seconds after Kirk2 appears, all of a sudden we see the act for what it is, namely murder. How is it that a mere shift of a few seconds causes such a great difference in our perception? How is it that we can immediately see the murder in the second case, but that the first case seems so innocent to us? This stark contrast between our intuitive perceptions of the two cases, despite their apparent underlying similarity, constitutes the paradox.

And yes, it seems likely that the above also holds when a single person is made absolutely unconscious (flat EEG) and then awakened. Intuitively, we feel that the same person, the same identity, has persisted throughout this interruption; but when we think of the Star Trek paradox, and if we assume (as good materialists) that consciousness is the outcome of physical brain activity, we realise that this situation is not very different from that of Kirk1 and Kirk2. More generally, it illustrates the problems associated with assuming that you "are" the same person that you were just one minute ago (for some concepts of "are"). I was thinking of writing a post about this, but apparently all of the above seems to be ridiculously obvious to most LWers, so I g

But when Kirk1 disappears a few seconds after Kirk2 appears, all of a sudden we see the act for what it is, namely murder.

I'm not comfortable with 'for what it is, namely'. I would be comfortable with 'see the act as murder'. I don't play 'moral reference class tennis'. Killing a foetus before it is born is killing a foetus before it is born (or abortion). Creating a copy then removing the original is creating a copy and then removing the original (or teleportation). Killing someone who wants to die is killing someone who wants to die (or euthanasia). Calling any of these things murder is not necessarily wrong but it is not a factual judgement it is a moral judgement. The speaker wants people to have the same kind of reaction that they have to other acts that are called 'murder'.

'Murder' is just more complex than that. So is 'killing' and so is 'identity'. You can simplify the concepts arbitrarily so that 'identity' is a property of a specific combination of matter if you want to but that just means you need to make up a new word to describe "that thing that looks, talks and acts like the same Kirk every episode and doesn't care at all that he gets de-materialised all the t... (read more)

How in the world does that not constitute murder?

Any plans Kirk had prior to his "original" being dematerialized are still equally likely to be carried out by the "copy" Kirk, any preferences he had will still be defended, and so on. Nothing of consequence seems to have been lost; an observer unaware of this little drama will notice nothing different from what he would have predicted, had Kirk traveled by more conventional means.

To say that a murder has been committed seems like a strained interpretation of the facts. There's a difference between burning of the Library of Alexandria and destroying your hard drive when you have a backup.

Currently, murder and information-theoretic murder coincide, for the same reasons that death and information-theoretic death coincide. When that is no longer the case, the distinction will become more salient.

wedrifid:
And here is something that bugs me in sci-fi shows. It's worse than "Sound in space? Dammit!" Take Carter from Stargate. She has Asgard beaming technology and the Asgard core (computer). She can use this to create food, a cello for herself and Tretonin for Teal'c. The core function of the device is to take humanoid creatures and re-materialise them somewhere else. Why oh why do they not leave the originals behind and create a 50-Carter-strong research team, a million-strong Teal'c army and an entire wizard's circle of Daniel Jacksons with whatever his mind-power of the episode happens to be? There are dozens of ways to clone SG-1. The robot-SG-1 is the mundane example. The Stargates themselves have the capability and so do Wraith darts. The same applies to Kirk and his crew. But no, let's just ignore the most obvious use of the core technology.
khafra:
If Kirk1 disappears a few seconds before Kirk2 appears, we assume that no subjective experience was lost; a branch of length 0 was terminated. If the transporter had predictive algorithms good enough to put Kirk2 into the exact same state that Kirk1 would be in a few seconds later, then painlessly dematerialized Kirk1, I would have no more problem with it than I do with the original Star Trek transporter.
Mitchell_Porter:
A copy, not a zombie.
wedrifid:
It is a shame that the term was reserved for 'philosophical zombies'. I mean, philosophical zombies haven't even been killed. Kirk was killed then reanimated. That's real necromancy for you.
komponisto:
Not possible, according to Eliezer.
Mitchell_Porter:
And what do you think? I disagree with Eliezer, and I can talk about my position, but I want to hear your opinion first.
komponisto:
I find Eliezer's argument convincing.
Mitchell_Porter:
OK. Well, here's a different perspective. Suppose we start with quantum mechanics. What is the argument that particles don't have identity? If you start with particles in positions A and B, and end with particles in positions C and D, and you want to calculate the probability amplitude for this transition, you count histories where A goes to C and B goes to D, and histories where A goes to D and B goes to C. Furthermore, these histories can interfere destructively (e.g. this happens with fermions), which implies that the two endpoints really are the same place in configuration space, and not just outcomes that look the same. From this it is concluded that the particles have no identity across time. According to this view, if you end up in the situation with particles at C and D, and ask if the particle at C started at A or started at B, there is simply no answer, because both types of history will have contributed to the outcome. However, it is a curious fact that although the evolving superposition contains histories of both types, within any individual history, there is identity across time! Within an individual history in the sum over histories, A does go to strictly one of C or D. Now I'm going to examine whether the idea of persistent particle-identity makes sense, first in single-world interpretations, then in many-world interpretations. What do physicists actually think is the reality of a quantum particle? If we put aside the systematic attempts to think about the problem, and just ask what attitudes are implicitly at work from day to day, I see three attitudes. One is the positivistic attitude that it is pointless to talk or think about things you can't observe. Another is the ignorance interpretation of quantum uncertainty; the particle always has definite properties, just like a classical particle, but it moves around randomly, in a way that adds up to quantum statistics. Finally, you have wavefunction realism: particles really are spread out in spac

I'll grant that by being sufficiently clever, you can probably reconcile quantum mechanics with whatever ontology you like. But the real question is: why bother? Why not take the Schroedinger equation literally? Physics has faced this kind of issue before -- think of the old episode about epicycles, for instance -- and the lesson seems clear enough to me. What's the difference here?

For what it's worth, I don't see the arbitrariness of collapse postulates and the arbitrariness of world-selection as symmetrical. It's not even clear to me that we need to worry about extracting "worlds" from blobs of amplitude, but to the extent we do, it seems basically like an issue of anthropic selection; whereas collapse postulates seem like invoking magic.

But in any case you don't really address the objection that

(e)verything is entangled with everything else, indirectly if not directly, and so all I could say is that the universe as a whole has identity across time.

Instead, you merely raise the issue of finding "individual worlds", and argue that if you can find manage to find an individual world, then you can say that that world has an identity that persists over time. Fair enough, but how does this help you rescue the idea that personal identity resides in "continuity of substance", when the latter may still be meaningless at the level of individual particles?

Mitchell_Porter:
The Schroedinger equation is an assertion about a thing called Psi. "Taking it literally" usually means "believe in many worlds". Now even if I decide to try this out, I face a multitude of questions. Am I to think of Psi as a wavefunction on a configuration space, or as a vector in a Hilbert space? Which part of Psi corresponds to the particular universe that I see? Am I to think of myself as a configuration of particles, a configuration of particles with an amplitude attached, a superposition of configurations each with its own amplitude, or maybe some other thing, like an object in Hilbert space (but what sort of object?) not preferentially associated with any particular basis? And then there's that little issue of deriving the Born probabilities! Once you decide to treat the wavefunction itself as ultimate physical reality, you must specify exactly which part of it corresponds to what we see, and you must explain where the probabilities come from. Otherwise you're not doing physics, you're just daydreaming. And when people do address these issues, they do so in divergent ways. And in my experience, when you do get down to specifics, problems arise, and the nature of the problems depends very much on which of those divergent implementations of many-worlds has been followed. It is hard to go any further unless you tell me more about what many-worlds means to you, and how you think it works. "Take the equation literally" is just a slogan and doesn't provide any details. By "world", do you mean a universe-sized configuration, or just an element of a more localized superposition? It is another of the exasperating ambiguities of many-worlds discourse. Some people do make it clear that their worlds-in-the-wavefunction are of cosmic size, while others apparently prefer to think of the multiplicity of realities as a local and even relative thing - I think this is what "many minds" is about: the observer is in a superposition and we acknowledge that there are many dist
komponisto:
What it means is that you let your ontology be dictated by the mathematical structure of the equation. So for instance: It's both -- even when regarded purely as a mathematical object. The set of wavefunctions on a configuration space is (the unit sphere of) a Hilbert space. Specifically, as I understand it, configuration space is a measure space of some sort, and the set of wavefunctions is (the unit sphere in) L^2 of that measure space. It seems to me that you're a region of configuration space. There's a subset of the measure space that consists of configurations that represent things like "you're in this state", "you're in that state", etc. We can call this subset the "you"-region. (Of course, these states also contain information about the rest of the universe, but the information they contain about you is the reason we're singling them out as a subset.) To repeat a point made before (possibly by Eliezer himself), this isn't an issue that distinguishes between many-worlds and collapse postulates. With many-worlds, you have to explain the Born probabilities; with collapse interpretations, you have to explain the mysterious collapse process. It seems to me far preferable, all else being equal, to be stuck with the former problem rather than the latter -- because it turns the mystery into an indexical issue ("Why are we in this branch rather than another?") rather than writing it into the laws of the universe. Why is this? Okay, it now occurs to me that I may have been confusing "continuity of substance" (your criterion) with "identity of substance" (which is what Eliezer's argument rules out). That's still more problematic, in my opinion, than a view that allows for uploading and teleportation, but in any event I withdraw the claim that it is challenged by Eliezer's quantum-mechanical argument about particle identity.
Mitchell_Porter:
There are two issues here: many worlds, and the alleged desirability or necessity of abandoning continuity of physical existence as a criterion of identity, whether physical or personal. Regarding many worlds, I will put it this way. There are several specific proposals out there claiming to derive the Born probabilities. Pick one, and I will tell you what's wrong with it. Without the probabilities, you are simply saying "all worlds exist, this is one of them, details to come". Regarding "continuity of substance" versus "identity of substance"... If I was seriously going to maintain the view I suggested - that encapsulated local entanglements permit a notion of persistence in time - then I would try to reconceptualize the physics so that identity of substance applied. What was formerly described as three entangled particles, I would want to describe as one thing with a big and evolving state.
wedrifid:
All this begs the question: Is personal identity made up of the same stuff as 'blue'?
wnoise:
Are there any actual predictions that would be different with "continuity of substance" as the standard for identity rather than "similarity of composition"? What does continuity of substance even mean with respect to Fock spaces or the path-integral formulation? All electrons (or protons, etc) are literally descended from all possible others.
Mitchell_Porter:
These "subjective anticipations" are different, because I don't try to think of my copies as me. Discussed here.

This 0.5^99 figure only appears if each copy bifurcates iteratively.

Rather than

1 becoming 2, becoming 3, becoming 4, ... becoming 100

We'd have

1 becoming 2, becoming 4, becoming 8, ... becoming 2^99

wnoise:
No, as described, you have probability (1/2)^n of becoming copy #n, and #99 and the original share (1/2)^99. The original is copied once -- giving 50% #0 and 50% #1. Then #0 is copied again, giving 25% #0, and 25% #2. Then #0 is copied again, giving 12.5% #0, and 12.5% #3, and so forth. This seems like a useful reductio ad absurdum of this means of calculating subjective expectation.
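
A quick simulation of that bookkeeping, which simply mirrors the 50%-per-step model described above (so it checks the arithmetic, not the underlying theory of anticipation):

```python
import random
from collections import Counter

def sequential_copying(n_copies=99):
    # Follow one "thread" through the procedure: at each step it either
    # stays the original (and gets copied again) or becomes the new copy.
    for k in range(1, n_copies + 1):
        if random.random() < 0.5:
            return k        # thread ends as copy #k
    return 0                # survived every step as the original

trials = 1_000_000
counts = Counter(sequential_copying() for _ in range(trials))
for outcome in sorted(counts)[:5]:
    print(outcome, counts[outcome] / trials)
# Copy #k shows up with frequency near (1/2)**k; the original and copy #99
# share the leftover (1/2)**99, far too small to appear in a million trials.
```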
rosyatrandom:
Hmmm. Yes, I see it now. The dead-end copies function as traps, since they stop your participation in the game. As long as you can consciously differentiate your state as a copy or original, this works.