Eliezer’s Anthropic Trilemma:

So here’s a simple algorithm for winning the lottery: Buy a ticket.  Suspend your computer program just before the lottery drawing – which should of course be a quantum lottery, so that every ticket wins somewhere.  Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery.  Then suspend the programs, merge them again, and start the result.  If you don’t win the lottery, then just wake up automatically. The odds of winning the lottery are ordinarily a billion to one.  But now the branch in which you win has your “measure”, your “amount of experience”, temporarily multiplied by a trillion.  So with the brief expenditure of a little extra computing power, you can subjectively win the lottery – be reasonably sure that when next you open your eyes, you will see a computer screen flashing “You won!”  As for what happens ten seconds after that, you have no way of knowing how many processors you run on, so you shouldn’t feel a thing.

See the original post for assumptions, what merging minds entails, etc. He proposes three alternative bullets to bite: accepting that this would work; denying that there is “any meaningful sense in which I can anticipate waking up as myself tomorrow, rather than Britney Spears”, so undermining any question about what you should anticipate; and Nick Bostrom’s response, paraphrased by Eliezer:

…you should anticipate winning the lottery after five seconds, but anticipate losing the lottery after fifteen seconds. To bite this bullet, you have to throw away the idea that your joint subjective probabilities are the product of your conditional subjective probabilities.  If you win the lottery, the subjective probability of having still won the lottery, ten seconds later, is ~1.  And if you lose the lottery, the subjective probability of having lost the lottery, ten seconds later, is ~1.  But we don’t have p(“experience win after 15s”) = p(“experience win after 15s”|”experience win after 5s”)*p(“experience win after 5s”) + p(“experience win after 15s”|”experience not-win after 5s”)*p(“experience not-win after 5s”).

I think I already bit the bullet about there not being a meaningful sense in which I won’t wake up as Britney Spears. However, I would like to offer a better, relatively bullet-biting-free solution.

First notice that you will have to bite Bostrom’s bullet if you even accept Eliezer’s premise that arranging to multiply your ‘amount of experience’ in one branch in the future makes you more likely to experience that branch. Call this principle ‘follow-the-crowd’ (FTC). And let’s give the name ‘blatantly obvious principle’ (BOP) to the notion that P(I win at time 2) is equal to P(I win at time 2|I win at time 1)P(I win at time 1)+P(I win at time 2|I lose at time 1)P(I lose at time 1). Bostrom’s bullet is to deny BOP.

We can set aside the bit about merging brains together for now; that isn’t causing our problem. Consider a simpler and smaller (for the sake of easy diagramming) lottery setup where after you win or lose you are woken for ten seconds as a single person, then put back to sleep and woken as four copies in the winning branch or one in the losing branch. See the diagram below. You are at Time 0 (T0). Before Time 1 (T1) the lottery is run, so at T1 the winner is W1 and the loser is L1. W1 is then copied to give the multitude of winning experiences at T2, while L2 remains single.

Now, using the same reasoning you would use to win the lottery before (FTC), you should anticipate an 80% chance of winning the lottery at T2: there is four times as much of your experience winning the lottery then as not. But BOP says you still only have a fifty percent chance of being a lottery winner at T2:

P(win at T2) = P(win at T2|win at T1)P(win at T1) + P(win at T2|lose at T1)P(lose at T1) = 1 × 1/2 + 0 × 1/2 = 1/2

FTC and BOP conflict. If you accept that you should generally anticipate futures where there are more of you more strongly, it looks like you accept that P(a) does not always equal P(a|b)P(b)+P(a|-b)P(-b). How sad.
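
To make the disagreement concrete, here is a minimal sketch in Python (my own labels, with the numbers from the toy setup above: a fair lottery, four copies in the winning branch at T2, one in the losing branch). It just computes the two anticipations side by side: weighting branches by how many copies of you they contain (FTC) gives 0.8, while the partition formula (BOP) gives 0.5.

```python
# Toy lottery from the post: at T1 a fair draw produces a winner (W1) or a
# loser (L1); by T2 the winner has been copied into four people, the loser
# stays single. Variable names are mine.

branches = [
    {"outcome": "win",  "prob": 0.5, "copies_at_T2": 4},
    {"outcome": "lose", "prob": 0.5, "copies_at_T2": 1},
]

# FTC-style anticipation: weight each branch by how much of "your
# experience" is there at T2, i.e. by the number of copies.
total_weight = sum(b["prob"] * b["copies_at_T2"] for b in branches)
ftc_p_win = sum(b["prob"] * b["copies_at_T2"]
                for b in branches if b["outcome"] == "win") / total_weight

# BOP-style anticipation:
# P(win at T2) = P(win at T2|win at T1)P(win at T1) + P(win at T2|lose at T1)P(lose at T1)
bop_p_win = 1.0 * 0.5 + 0.0 * 0.5

print(ftc_p_win)  # 0.8
print(bop_p_win)  # 0.5
```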

Looking at the diagram above, it is easy to see why these two methods of calculating anticipations disagree.  There are two times in the diagram that your future branches, once in a probabilistic event and once in being copied. FTC and BOP both treat the probabilistic event the same: they divide your expectations between the outcomes according to their objective probability. At the other branching the two principles do different things. BOP treats it the same as a probabilistic event, dividing your expectation of reaching that point between the many branches you could continue on. FTC treats it as a multiplication of your experience, giving each new branch the full measure of the incoming branch. Which method is correct?

Neither. FTC and BOP are both approximations of better principles. Both of the better principles are probably true, and they do not conflict.

To see this, first we should be precise about what we mean by ‘anticipate’. There is more than one resolution to the conflict, depending on your theory of what to anticipate: where the purported thread of personal experience goes, if anywhere. (Nope, resolving the trilemma does not seem to answer this question).

Resolution 1: the single thread

The most natural assumption seems to be that your future takes one branch at every intersection. It does this based on objective probability at probabilistic events, or equiprobably at copying events. It follows BOP. This means we can keep the present version of BOP, so I shall explain how we can do without FTC.

Consider diagram 2. If your future takes one branch at every intersection, and you happen to win the lottery, there are still many T2 lottery winners who will not be your future. They are your copies, but they are not where your thread of experience goes. They and your real future self can’t distinguish who is actually in your future, but there is some truth of the matter. It is shown in green.

Diagram 2

Now while there are only two objective possible worlds, when we consider possible paths for the green thread there are five possible worlds (one shown in diagram 2). In each one your experience follows a different path up the tree. Since your future is now distinguished from other similar experiences, we can see the weight of your experience at T2 in a world where you win is no greater than the weight in a world where you lose, though there are always more copies who are not you in the world where you win.

The four worlds where your future is in a winning branch are each only a quarter as likely as the one where you lose, because there is a fifty percent chance of you reaching W1, and after that a twenty five percent chance of reaching a given W2. By the original FTC reasoning, then, you are equally likely to win or lose. More copies just make you less certain about exactly where your future will be.
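
Here is a short sketch of that calculation (assuming, as above, a fair lottery; the labels L2 and W2.1–W2.4 are from the diagrams): enumerating the five possible thread-worlds and their probabilities shows the thread is equally likely to end up winning or losing.

```python
from fractions import Fraction

# Enumerate the possible paths of the single "green thread" (a sketch;
# labels follow the diagrams, the lottery is fair).
threads = {"L2": Fraction(1, 2)}                          # lose at T1, stay single
for i in range(1, 5):                                     # win at T1 (prob 1/2), then the
    threads[f"W2.{i}"] = Fraction(1, 2) * Fraction(1, 4)  # thread picks one of four copies

print(sum(threads.values()))                              # 1: the five worlds partition
print(sum(p for name, p in threads.items() if name != "L2"))  # 1/2: P(the thread wins)
```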

I am treating the invisible green thread like any other hidden characteristic. Suppose you know that you are and will continue to be the person with the red underpants, though many copies will be made of you with green underpants. However many extra copies are made, a world with more of them in future should not get more of your credence, even if you don’t know which future person actually has the red pants. If you think of yourself as having only one future, then you can’t also consider there to be a greater amount of your experience when there are a lot of copies. If you did anticipate experiences based on the probability that many people other than you were scheduled for that experience, you would greatly increase the minuscule credence you have in experiencing being Britney Spears when you wake up tomorrow.

Doesn’t this conflict with the use of FTC to avoid the Bolzmann brain problem, Eliezer’s original motivation for accepting it? No. The above reasoning means there is a difference between where you should anticipate going when you are at T0, and where you should think you are if you are at T2.

If you are at T0 you should anticipate a 50% chance of winning, but if you are at T2 you have an 80% chance of being a winner. Sound silly? That’s because you’ve forgotten that you are potentially talking about different people. If you are at T2, you are probably not the future of the person who was at T0, and you have no way to tell. You are a copy of them, but their future thread is unlikely to wend through you. If you knew that you were their future, then you would agree with their calculations.

That is, anyone who only knows they are at T2 should consider themselves likely to have won, because there are many more winners than losers. Anyone who knows they are at T2 and are your future, should give even odds to winning. At T0, you know that the future person whose measure you are interested in is at T2 and is your future, so you also give even odds to winning.

Avoiding the Boltzmann brain problem requires a principle similar to FTC which says you are presently more likely to be in a world where there are more people like you. SIA, for instance, says just that, and there are other anthropic principles that imply similar things. Avoiding the Boltzmann brain problem does not require inferring from this that your future lies in worlds where there are more such people. And such an inference is invalid.

This is exactly the same as how it is invalid to infer that you will have many children from the fact that you are more likely to be from a family with many children. Probability theory doesn’t distinguish between the relationship between you and your children and the relationship between you and your future selves.

Resolution 2

You could instead consider all copies to be your futures. Your thread is duplicated when you are. In that case you should treat the two kinds of branching differently, unlike BOP, but still not in the way FTC does. It appears you should anticipate a 50% chance of becoming four people, rather than an 80% chance of becoming one of those people. There is no sense in which you will become one of the winners rather than another. As in the last case, it is true that if you are presently one of the copies in the future, you should think yourself 80% likely to be a winner. But again ‘you’ refers to a different entity here from the one it referred to before the lottery. It refers to a single future copy. It can’t usefully refer to a whole set of winners, because the one considering it does not know if they are part of that set or if they are a loser. As in the last case, your anticipations at T0 should be different from your expectations for yourself if you know only that you are in the future already.

In this case BOP gives us the right answer for the anticipated chances of winning at T0. However, it says you have a 25% chance of becoming each winner at T2 given you win at T1, instead of a 100% chance of becoming all of them.

Resolution 3

Suppose that you want to equate becoming four people in one branch with being more likely to be there. More of your future weight is there, so for some notions of expectation perhaps you expect to be there. You take ‘what is the probability that I win the lottery at T1?’ to mean something like ‘what proportion of my future selves are winning at T1?’. FTC gives the correct answer to this question – you aren’t especially likely to win at T1, but you probably will at T2. Or in the original problem, you should expect to win after 5 seconds and lose after 15 seconds, as Nick Bostrom suggested. If FTC is true, then we must scrap BOP. This is easier than it looks, because BOP is not what it seems.

Here is BOP again:

P(I win at T2) is equal to P(I win at T2|I win at T1)P(I win at T1)+P(I win at T2|I lose at T1)P(I lose at T1)

It looks like a simple application of

P(a) = P(a|b)P(b)+P(a|-b)P(-b)

But here is a more extended version:

P(win at 15|at 15) = P(win at 15|at 15 and came from win at 5)P(came from win at 5|at 15) + P(win at 15|at 15 and came from loss at 5)P(came from loss at 5|at 15)

This is only equal to BOP if the probability of having a win at 5 in your past when you are at 15 is equal to the probability of winning at 5 when you are at 5. To accept FTC is to deny that. FTC says you are more likely to find the win in your past than to experience it because many copies are descended from the same past. So accepting FTC doesn’t conflict with P(a) being equal to P(a|b)P(b)+P(a|-b)P(-b), it just makes BOP an inaccurate application of this true principle.
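
As a rough check using the smaller diagram numbers (a fair lottery, four winning copies and one losing copy at the later time), here is that substitution made explicit: the FTC-weighted probability of having a win in your past at the later time is 4/5, which is not the 1/2 chance of winning at the earlier time that BOP plugs in.

```python
from fractions import Fraction

# FTC weighting at the later time: four winners and one loser,
# each branch having had prior probability 1/2.
p_past_win_at_15 = (Fraction(1, 2) * 4) / (Fraction(1, 2) * 4 + Fraction(1, 2) * 1)  # 4/5
p_win_at_5 = Fraction(1, 2)                                                          # 1/2

# The valid partition uses the first quantity; BOP silently substitutes the second.
print(1 * p_past_win_at_15 + 0 * (1 - p_past_win_at_15))  # 4/5
print(1 * p_win_at_5 + 0 * (1 - p_win_at_5))              # 1/2
```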

In summary:

1. If your future is (by definitional choice or underlying reality) a continuous non-splitting thread, then something like SIA should be used instead of FTC, and BOP holds. Who you anticipate being differs from who you should think you are when you get there. Who you should think you are when you get there is still governed by something like SIA, which avoids the Boltzmann brain problem.

2. If all your future copies are equally your future, you should anticipate becoming a large number of people with the same probability that you would have had of becoming one person if there were no extra copies. In that case FTC does not hold, because you expect to become many people with a small probability instead of one of those many people with a large probability. BOP holds in a modified form where it doesn’t treat being copied as being sent down a random path. But if you want to know what a random moment from your future will hold, a random moment from T1 is more likely to include losing than a random moment from T2. For working out what a random T2 moment will hold, BOP is a false application of a correct principle.

3. If for whatever reason you conceptualise yourself as being more likely to go into future worlds based on the number of copies of you there are in those worlds, then FTC does hold, but BOP becomes false.

I think the most important point is that the question of where you should anticipate going need not have the same answer as where a future copy of you should expect to be (if they don’t know for some reason). A future copy who doesn’t know where they are should think they are more likely to be in a world where there are many people like themselves, but you should not necessarily think you are likely to go into such a world. If you don’t think you are as likely to go into such a world, then FTC doesn’t hold. If you do, then BOP doesn’t hold.

It seems to me the original problem uses FTC while assuming there will be a single thread, thereby making BOP look inevitable. If the thread is kept, FTC should not be, which can be conceptualised as in either of resolutions 1 or 2. If FTC is kept, BOP need not be, as in resolution 3. Whether you keep FTC or BOP will give you different expectations about the future, but which expectations are warranted is a question for another time.


Comments:

I wrote in 2001:

Anthropic reasoning can't exist apart from a decision theory, otherwise there is no constraint on what reasoning process you can use. You might as well believe anything if it has no effect on your actions.

Think of it as an extension of Eliezer's "make beliefs pay rent in anticipated experiences". I think beliefs should pay rent in decision making.

Katja, I'm not sure if this is something that has persuasive power for you, but it's an idea that has brought a lot of clarity to me regarding anthropic reasoning and has led to the UDT approach to anthropics, which several other LWers also seem to find promising. I believe anthropic reasoning is a specialization of yours, but you have mostly stayed out of the UDT-anthropics discussions. May I ask why?

Are you saying that beliefs should be constrained by having to give rise to the decisions that seem reasonable, or should be constrained by having to give rise to some decisions at all?

You could start with the latter to begin with. But surely if you have a candidate anthropic reasoning theory, but the only way to fit it into a decision theory produces unreasonable decisions, you'd want to keep looking?

I agree it is troubling if your beliefs don't have any consequences for conceivable decisions, though as far as I know, not fatal.

This alone doesn't seem a reason to study anthropics alongside decision theory any more than it is a reason to study biology alongside decision theory. Most of how to make decisions is agreed upon, so it is rare that a belief would only have consequences under a certain decision theory. There may be other reasons to consider them together, however.

As far as the second type of constraint goes - choosing theories based on their consequences - I don't know why I would expect my intuitions about which decisions I should make to be that reliable relative to my knowledge and intuitions about information theory, probability theory, logic etc. It seems I'm much more motivated to take certain actions than to have correct abstract beliefs about probability (I'd be more wary of abstract beliefs about traditionally emotive topics such as love or gods). If I had a candidate theory which suggested 'unreasonable decisions', I would probably keep looking, but this is mostly because I am (embarrassingly) motivated to justify certain decisions, not because of the small amount of evidence that my intuitions could give me on a topic they are probably not honed for.

I'm not sure why you think there are no constraints on beliefs unless they are paired with a decision theory. Could you elaborate? e.g. why is Bayesian conditionalization not a constraint on the set of beliefs you hold? Could you give me an example of a legitimate constraint?

I haven't participated in UDT-anthropics discussions because working on my current projects seems more productive than looking into all of the research others are doing on topics which may prove useful. If you think this warrants more attention though, I'm listening - what are the most important implications of getting the right decision theory, other than building super-AIs?

I don't want to spend too much time trying to convince you, since I think people should mostly follow their own instincts (if they have strong instincts) when choosing what research directions to pursue. I was mainly curious if you had already looked into UDT and found it wanting for some reason. But I'll try to answer your questions.

why is Bayesian conditionalization not a constraint on the set of beliefs you hold?

What justifies Bayesian conditionalization? Is Bayesian conditionalization so obviously correct that it should be considered an axiom?

It turns out that Bayesian updating is appropriate only under certain conditions (which in particular are not satisfied in situations with indexical uncertainty), but this is not easy to see except in the context of decision theory. See Why (and why not) Bayesian Updating?

what are the most important implications of getting the right decision theory, other than building super-AIs?

I've already mentioned that I found it productive to consider anthropic reasoning from a decision theoretic perspective (btw, that, not super-AIs, was in fact my original motivation for studying decision theory). So I'm not quite sure what you're asking here...

The obviousness of Bayesian conditionalization seems beside the point, which is that it constrains beliefs and need not be derived from the set of decisions that seem reasonable.

Your link seems to only suggest that using Bayesian conditionalization in the context of a poor decision theory doesn't give you the results you want. Which doesn't say much about Bayesian conditionalization. Am I missing something?

"So I'm not quite sure what you're asking here..."

It is possible for things to be more important than an unquantified increase in productivity on anthropics. I'm also curious whether you think it has other implications.

I think the important point is that Bayesian conditionalization is a consequence of a decision theory that, naturally stated, does not invoke Bayesian conditionalization.

That being:

Consider the set of all strategies mapping situations to actions. Play the one which maximizes your expected utility from a state of no information.
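
For what it's worth, here is a toy illustration of that rule (my own construction, not Wei Dai's formalism): enumerate every strategy mapping observations to actions and pick the one with the highest expected utility computed from the prior alone. In this simple signal-guessing game the winning strategy is exactly the one that acts on the Bayesian posterior in each situation, which is the sense in which conditionalization falls out rather than being assumed.

```python
from itertools import product

# Pick the strategy, from all mappings of situations to actions, that
# maximizes expected utility from a state of no information.
P_HEADS, ACCURACY = 0.7, 0.6          # assumed prior and signal reliability

def expected_utility(strategy):
    """strategy: dict mapping observed signal -> guess; utility 1 if the guess is right."""
    eu = 0.0
    for coin in ("H", "T"):
        p_coin = P_HEADS if coin == "H" else 1 - P_HEADS
        for signal in ("H", "T"):
            p_signal = ACCURACY if signal == coin else 1 - ACCURACY
            eu += p_coin * p_signal * (1.0 if strategy[signal] == coin else 0.0)
    return eu

strategies = [dict(zip(("H", "T"), guesses)) for guesses in product("HT", repeat=2)]
best = max(strategies, key=expected_utility)
print(best, expected_utility(best))
# The winning policy acts on the Bayesian posterior in each situation
# (here: always guess "H", since P(H | signal=T) is still above 1/2).
```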

Bayesian conditionalization can be derived from Dutch book arguments, which are (hypothetical) decisions...

"I'm not sure why you think there are no constraints on beliefs unless they are paired with a decision theory."

Because any change to your probability theory can be undone by a change in decision theory, resulting in the same behaviour in the end. The behaviour is where everything pays rent, so it's the combination that matters :-)

I used to have the same viewpoint as your 2001 quote, but I think I'm giving it up. CDT, EDT, and TDT theorists agree that a coin flip is 50-50, so probability in general doesn't seem to be too dependent on decision theory.

I still agree that when you're confused, retreating to decisions helps. It can help you decide that it's okay to walk in the garage with the invisible dragon, and that it's okay for your friends to head out on a space expedition beyond the cosmological horizon. Once you've decided this, however, ideas like "there is no dragon" and "my friends still exist" kinda drop out of the analysis, and you can have your (non)existing invisibles back.

In the case of indexical probabilities, it's less obvious what it even means to say "I am this copy", but I don't think it's nonsense. I changed my mind when JGWeissman mentioned that all of the situations where you decide to say "1/2" in the sleeping beauty problem are ones where you have precisely enough evidence to shift your prior from 2/3 to 1/2.

Just because it all adds up to normalcy doesn't mean that it is irrelevant.

It is more elegant to just have a decision theory than to have a decision theory and a rule for updating, and it deals better with corner cases.

FAWS:

I realize that this seems to be a common view, but I can't even begin to imagine how intelligent rational people who have given the matter some thought can possibly think that they are going to be one and only one particular future self even when other future selves exist. If your future selves A and B will both exist, what could possibly be the difference between being only future self A and being only future self B? No one seems to imagine a silver thread or a magical pixie dust trail connecting their present self and a particular future self. Is this supposed to be one of those mysterious "first person facts"? How? Your current self and all of your future selves have exactly the same experiences in either case, unless you expect something to break that symmetry. What would that be?

but I can't even begin to imagine how intelligent rational people who have given the matter some thought can possibly think that [...]

Please don't express disapproval with incomprehension; it's an unhealthy epistemic practice.

FAWS:

I'm expressing incomprehension, not disapproval. I'm genuinely puzzled. If I were trying to express disapproval I would have phrased the pixie dust sentence with "It's like they think there is a ... or ... or something."

Am I not allowed to use the phrase "can't even begin to imagine" even when I spent the rest of the comment trying to imagine and failing utterly?

In my experience with colloquial English--mostly in the Pacific Northwest, Hawaii, and Florida--"I can't even begin to imagine [justification for other person's behavior or belief]" expresses disapproval. Specifically, it expresses the belief that the behavior or belief is not justified, and that the person under discussion is either disingenuous or working with an unusually flawed epistemic strategy.

The phrase may have different connotations for you, but if I were trying to express incomprehension safely I'd choose a different phrase. Maybe something like "Although seemingly intelligent, rational people disagree, anticipating being one and only one particular future self when other future selves exist sounds incoherent."

FAWS:

I don't see how calling it incoherent is any less disapproving. And I expect the problem is not the particular phrase I used, but the level of incomprehension expressed; otherwise you would have suggested some actual expression of incomprehension, wouldn't you? What I want to express is that I have no explanation for the presence of the apparent belief and unsuccessfully searched for possible justifications without finding anything that looked promising at any stage. Calling it incoherent would imply that I expect a psychological explanation instead of a justification to be the cause of the belief. I don't have that expectation.

I meant for my suggestion to denotationally express the same thing as what you said. But the connotations of "I can't even begin to imagine..." are different from the connotations of "...sounds incoherent."

In contexts more familiar to you, "...sounds incoherent" may express more disapproval; but in the colloquial English I'm familiar with, it's the more neutral phrase.

This is one of those things that everyone knows explicitly but is probably still worth a post with lots of good examples for new recruits. (Unless it's already been covered.)

In general I don't think the sequences covered a lot of failure modes caused by feeling morally or status-ly justified or self-righteous.

Can't remember the name off the top of my head, but think Eliezer has this one covered.

I agree probably nothing sets apart particular copies as your future. But it shouldn't matter to questions like this. You should be able to conceptualise an arbitrary set of things as 'you' or as whatever you want to call it, even where there is no useful physical distinction to be made, and still expect probability theory to work.

and still expect probability theory to work.

Probability theory works on whatever probability spaces you define. This fact doesn't justify any particular specification of a probability space.

If your future selves A and B will both exist, what could possibly be the difference between being only future self A and being only future self B?

I know that I am me and you are not me, because (for example) when I want to pick up the pencil on the table between us my arm moves and yours doesn't.

Future self A can perform that experiment to determine that he is not also future self B. B can also do that with respect to A.

Thus, the anticipation of being both future self A and future self B does not correlate to any particular experience anyone is going to have.

FAWS:

I think you misunderstand me. Future you A is future you A, and not future you B, and likewise future you B is not future you A. No particular you will ever experience being A and B at the same time. Future you A and B both remember being current you, but not being each other. I completely agree with all that. What I don't understand is how either A or B is supposed to be you in the sense of being the same person as current you while the other is not?

I'm not claiming that A and B have to consider each other to be the same person. That would be a possibility, but they/you could also treat being the same person as non-transitive so each is the same person as current you (C), but they aren't the same person to each other, or A, B and C could consider themselves three different persons. The only thing that doesn't make sense to me is C going on to be either A or B, as determined by random chance (??? where would that randomness be happening?) or ... something? I don't even sufficiently understand how this is supposed to work to properly describe it.

What I don't understand is how either A or B is supposed to be you in the sense of being the same person as current you while the other is not?

A will probably call A the real you, and B will probably call B the real you. Other people might find them both the same as the current you, but might take sides on the labeling issue later if A or B does something they like or don't like. It'd surely be most useful to call both A and B "the same person as current you" in the beginning, at least, because they'd both be extremely similar to the current you. A might change more than B as time goes on, leading some to prefer identifying B as the "real" you (possibly right away, to dissipate the weirdness of it all), but it's all a matter of preference in labels. After all, even now the you that is reading this post is not the same as the you of 5 minutes ago. English simply isn't well-equipped to deal with the situation where a person can have multiple future selves (at least, not yet).

Good point. Fictional example:

William T. Riker was copied and his copy (hereafter "Thomas Riker") was abandoned all alone on a planet. Will Riker had friends and a career history since the copy, so for convenience Thomas Riker took the name "Thomas" and continued his old career where he left off.

I agree, but as you allow, your (future) specific identity amongst identical copies matters very much when symmetry is broken, e.g. one copy is to be tortured and the rest pleasured. It matters to me (my experience in the moment) even with some inevitable future destructive merge in all my copies' future, just as it matters to me what I experience now in spite of the expectation of my permanent death within 100 years.

FAWS:

I agree, but as you allow, your (future) specific identity amongst identical copies matters very much when symmetry is broken, e.g. one copy is to be tortured and the rest pleasured.

I'm not sure I understand you. Obviously it matters to your future self A whether A is tortured or pleasured. And also to your current self whether there is a future self A that will be tortured. Do you think that, given that your future self A is tortured and your future self B pleasured, there is an additional fact as to whether you will be tortured or pleasured? I don't. And I don't see the relevance of the rest of your post to my point either.

If I see myselves at different points of time as being in collusion as to how to make all of us better off, which has been a viewpoint I've seen taken recently, then there is some agreement between a set of sufficiently-similar agents.

I could view the terms of that agreement as "me" and then the question becomes "what do the terms of the agreement that different sufficiently-similar instances of me serve under say about this situation."

In which case "I" want to come up with a way of deciding, for example, how much pleasure I require per unit of torture, etc. But certainly the question "Am I being tortured or pleasured" doesn't exactly carry over.

I thought I disagreed with you but then I showed my work and it turns out I agree with you.

If I see myselves at different points of time as being in collusion as to how to make all of us better off, which has been a viewpoint I've seen taken recently, then there is some agreement between a set of sufficiently-similar agents.

If this is too easy, a way to make it more fun is to do the same thing but with parts of you and coalitions of parts of you, gene/meme-eye view of evolution style. Thinking about whether there's an important metaphysical or decision theoretic sense in which an algorithm is 'yours' or 'mine' from this perspective, while seeing if it continues to add up to normality, can lead to more fun still. And if that's still not fun enough you can get really good at the kinds of meditation that supposedly let you intuitively grok all of this nonsense and notice the subtleties from the inside! Maybe. :D

If you're searching for how I disagree with you, I don't (I thought I made that clear with "as you allow"). At first you were talking about a perfectly symmetrical situation; I was interested in a less symmetrical one.

Is there an additional fact? No, and at first I was tempted to think that it matters whether there's a continuous experience vs. a discontinuity where a scanned copy is woken up (i.e. the original isn't destroyed, so I might care more, as the original, about what fate lies in store for it). But I think that difference doesn't even matter to me, assuming perfect copying, of course.

To indulge in another shift: maybe I'll create slave copies of myself one day. I certainly won't be allocating an even share of my resources to them (they'll hate me for it) :)

Do you think that, given that your future self A is tortured and your future self B pleasured, there is an additional fact as to whether you will be tortured or pleasured?

Indeed, there is that additional fact.

You personally don't know that fact; it is stipulated that until the crucial future divergence, the two selves' experiences are identical (to them). That's part of the setup. But Omega, who is looking on, sees two physically separate bodies, and he can label them A and B and keep track of which is which before their experiences begin to diverge.

The information about who is who exists, as part of the state of the universe. We only ignore this because we're performing thought experiments. But suppose you really ran this experiment. Suppose that the identical experiences of the two selves involved talking to the experimenters, running through two physically distinct but identical copies of the same conversation. Then, if you didn't think the experimenters were infallible, you might try to trick them during the conversation, to break the symmetry and reveal to you whether you were the copy who was going to be tortured.

Of course the setup stipulates that the symmetry can't be broken. But that's an idealization. Because of this, I think that while the results of the thought experiment are internally consistent, they may not fully apply to any physically realizable situation.

FAWS:

I'm not sure I understand you. Do you think that right now there exist an infinite number of physically separate bodies that collectively make up current you and all have identical experiences (including this conversation), that exactly one of those bodies is the real (?) you, and that every distinct possible future you can be traced back to one particular body or a distinct subset of those physical bodies? If so what is your basis for believing that? And is this true for any possible mechanism that could produce copies, or do you refuse to acknowledge copies that are produced in a way that doesn't preserve this quality as possible yous?

Do you think that right now there exist an infinite number of physically separate bodies that collectively make up current you

No, I don't think it's normally the case (I presume you're referring to quantum branches?).

But the Anthropic Trilemma scenario we're discussing explicitly postulates this: that physically separate but identical copies of my body will be constructed, have identical experiences for a while, and then diverge.

If this scenario is actually implemented, then it will be necessarily true that (given complete knowledge of the universe) every distinct future body can be traced back to a past body at a point before the experiences diverge.

This would not be true, though, in Eliezer's other thought experiment about persons implemented as 2D circuits that can be split along their thickness.

But in the scenario here, if there are two bodies and one is going to be tortured tomorrow, there can be a fact of the matter today about which one it is, even if the bodies themselves don't know it. Even though their presents are identical (to the limit that they can experience), the fact that their futures are going to be different can make it necessary to talk about two separate persons existing all along.

FAWS:

That's not what I was talking about though, I was talking about the perspective of the root. Obviously once you already are in a branch there is a fact as to whether you are in that branch, even if you can't tell yourself.

But the Anthropic Trilemma scenario we're discussing explicitly postulates this: that physically separate but identical copies of my body will be constructed, have identical experiences for a while, and then diverge.

Not as far as I can tell. The first experience after being copied is either winning the lottery or not.

I really like this consideration of anthropics. I have seen a lot of magic discussion of "you" in anthropics and this post stays clear of that and resolves problems that have bothered me, clearly and with pictures.

Thank you.

As I said in the posts surrounding this one, I think we should be asking what the correct decision is, not what the probabilities mean. "Subjective anticipation" is something evolved to deal with standard human behaviour up until now, so we need not expect it to behave properly in copying/deleting cases.

So how to make this into a decision theory? Well, there are several ways, and they each give you a different answer. To get resolution 3, assume that each person in worlds W2 and L2 is to be approached with a bet on which world they are in, with the winnings going to a mutually acceptable charity. Then they would bet at 4:1 odds, as multiple wins in W2 add up.

To get resolution 2, have the same bet offered twice: at W1/L1 and W2/L2. At W1/L1, you would take even odds (since there is a single winning) and at W2/L2 you would take 4:1 odds.

Resolution 1 seems to be similar, but I can't say, since the problem is incomplete: how altruistic your copies are to one another, for instance, is a crucial component in deciding the right behaviour under these conditions (would one copy take a deal that give it $1 while taking $1000 from each of the other copies?). If the copies are not mutually altruistic, then this leads to a preference reversal (I would not want a future copy to take that deal) which someone can exploit for free money.

But the probabilities don't mean anything, absent a decision theory (that deals with multiple copies making the same decisions) and a utility. Arguing about them does seem to me like arguing about the sound of trees falling in the forest.
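
A quick check of those break-even odds, under the toy numbers from the post (a fair lottery, four copies in W2, one in L2) and the assumption that all copies decide identically with winnings pooled for the charity:

```python
from fractions import Fraction

# Break-even stakes for the bets described above (a sketch; assumes a fair
# lottery, four copies in W2, one in L2, all betting identically for charity).
p_win, winner_copies, loser_copies = Fraction(1, 2), 4, 1

# T2 bet: every copy stakes s on "I am in W2" against a payout of 1.
# Expected charity gain: p_win * winner_copies * 1 - (1 - p_win) * loser_copies * s.
t2_stake = (p_win * winner_copies) / ((1 - p_win) * loser_copies)
print(t2_stake)  # 4 -> willing to lay 4 against 1, i.e. betting at 4:1 odds

# T1 bet: a single pre-copying bet with the same structure.
t1_stake = (p_win * 1) / ((1 - p_win) * 1)
print(t1_stake)  # 1 -> even odds
```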

This seems sort of similar to the famous "Quantum suicide" concept, although less elegant :-)

We can view Time 2 as the result of an experiment where 8 copies of you start the game, are assigned to two groups W and L (4 copies in each), then a coin is tossed, and if it comes up heads, then we kill 75% of the L group. After such an experiment, you definitely should expect yourself to be in the W group, given that you're alive, with an 80% probability. Time 3 (not shown in your figure) corresponds to a second round of the experiment, when no coin is tossed, but simply 75% of the W group are exterminated. So if you're doing a single suicide, expect to be in W with 80% probability, but if you're doing a double-suicide experiment, then expect a 50% probability of being in W. If before exterminating the W group you dump all their memories into the one surviving copy, then expect a 50% chance of being in L and a 50% chance of being in W, but with four times as many memories of having been in W after the first stage of the experiment. Finally, you can also easily calculate all probabilities if you find yourself in group W after the first suicide, but are unsure as to which version of the game you're playing. I think that's what Bostrom's answer corresponds to.

The suicide experiment is designed more clearly, and it helps here. What would change if you had an exactly 100% probability of winning (the original Quantum suicide)? What if vice versa? And if you have a non-zero probability of either outcome, what if you just view it as the appropriately weighted sum of the two extremes?

Like most philosophical paradoxes the error is hidden in innocuous seeming premises. In this case it is the assumption that there is any substantive notion of individual identity (see ship of theseus for good reasons why not).

We get a cleaner theory and avoid these problems if we just stipulate that there are experiences. Certain experiences happen to be similar to others that occurred in the near past and others don't. Period, end of story.

So sure, you could wake up a trillion copies and have them experience lottery happiness, but why bother with the lottery? Modify their code so they feel the most intense euphoria possible and keep running those sims.

Let me see if I can fit this to my picture of reality.

Randomly picking one of the 5 versions of yourself at T2 yields a winner with 80% probability. But this does not correspond to any real state of knowledge. Our actual degree of belief in winning the lottery depends on our knowledge of the "thread" or "threads" leading there from our current position. Obviously all these threads share an event of small probability. It shouldn't matter how many times they split after that, they're just splitting a sliver of probability mass.

More generally, picking 'a random thread' in a way that agrees with what we experience would involve looking at a separate 'choice' for each branching point. Figuring out what 'you' should expect to see, and with what probability, means finding in turn the probability of randomly picking each thread (or each that involves consciousness like 'yours') at each branching point, and multiplying those fractional probabilities for each individual thread.

Defining branching points in the case of the Boltzmann brain scenario seems trickier to me. But I do think this approach would work if I understood the setup better. Eliezer's argument should still hold.

I think I'm hung up on the lottery example in Eliezer's original post - what is meant by a quantum lottery? He said 'every ticket wins somewhere' - does that mean that every ticket wins in some future timeline (such that if you could split yourself and populate multiple future timelines, you could increase your probability of winning)? If not, what does it mean? Lacking some special provision for the ticket, the outcome is determined by the ticket you bought before you queued up the split, rather than the individual probability of winning.

If anyone could clarify this, I'd be grateful.

What is meant is n tickets are sold, and then you do something like 'each ticket bought a particular plutonium atom out of n atoms; whichever atom decays first is a winning ticket'. Many Worlds says that each atom decays first in some world-line.

The idea is, like the least convenient possible world, to avoid a cheap rhetorical escape like 'I deny the trilemma because in my world the possibility of winning has already been foreclosed by my buying a predetermined losing ticket! Hah!'

Ahh, that makes sense. Thank you.

The problem is easier to decide with a small change that also makes it more practical. Suppose two competing laboratories design a machine intelligence and bid for a government contract to produce it. The government will evaluate the prototypes and choose one of them for mass-production (the "winner", getting multiplied); due to the R&D effort involved, the company who fails the bid will go into receivership, and the machine intelligence not chosen will be auctioned off, but never reproduced (the "loser").

The question is: should the developers anticipate mass-production? Should they instruct the machine intelligence to expect mass-production?

Assuming that after the evaluation process, both machine intelligences are turned off, to be turned on again after either mass-production or the auction has occurred, should the machine intelligence expect to be the original, or a copy?

The obvious answer: the developers will rationally both expect mass-production and teach their machines to expect it, because, of the machine intelligences that exist after this process, most will operate under the correct assumption and only one will need to be taught that the assumption was wrong. The machine ought to expect to be a "winner".

The basic problem with the trilemma is that given the setup the "you won" message is a lie.

There are a large number of copies with equal claim to the prize, and nearly all of them are about to be consigned to oblivion, and have thus not won a damn thing. The odds of winning the prize have not shifted an iota; instead, what you have done is add an extra possible outcome to the act of playing the lottery: that of spending ten seconds going "oh fuck, I'm going to die" and then dying. If creating more identical copies of me carries extra anticipatory freight, then ignoring the deletion of said copies is inconsistent.

This is, however, a really good argument for why making copies of persons should be forbidden. Actually, more than just forbidden: it should be anathema. Given a world in which making and running simultaneous copies of yourself is permitted, the highly improbable future timelines in which you end up turning the entire universe into computronium running copies of you for all eternity would make up the vast bulk of your subjective future experience. And everyone should anticipate likewise becoming the abomination that destroyed the universe. This is... undesirable. As in "copying people will, in all likelihood, carry the penalty of summary execution".

...though there are always more copies who are not you in the world where you win.

What does this mean?

Under the assumption that you become only one of your future copies, at most, there will be others you do not become.

This seems right, but I'm not confident I understand what's meant by "P(I win at T2)". I assume "I" is sampled out of the diagrammed entities (L2 and W2.1 ... W2.N) existing at T2. With SIA or FTC, this is presumed to be uniform (even though the single L2 entity comes from one of two equiprobable universe-branches, and the N winners W2.1, ..., W2.N all exist in the other). Is there some interpretation of "P('I' win at T2)" other than this (i.e. P("I" win at T2)=1-P("I" is L2)=(under FTC)1-1/N)?

The correct application of partitioning (that is, P(a)=P(a|b)P(b)+P(a|-b)P(-b)) to P('I' win at T2) would be

  • P('I' win at T2) = P('I' is in {W2.1, ..., W2.N}) = P('I' is not L2|'I' came from W1)P('I' came from W1) + P('I' is not L2|'I' came from L1)P('I' came from L1).

(since 'I' came from L1 is the negation of 'I' came from W1)

Note that this doesn't save us from choosing what distribution to sample 'I' (at T2) from - we can still use SIA/FTC.

BOP is definitely punning "I" (which turns out to give a correct answer only under the particular rule for assigning to "I" where P("I" is L2)=1/2).