
gjm comments on The map of quantum (big world) immortality - Less Wrong Discussion

Post author: turchin, 25 January 2016 10:21AM (2 points)




Comment author: gjm 28 January 2016 08:57:35PM 0 points [-]

But if you hold "you X" to be true merely because someone who feels like they're you does X, without regard for how plentiful those someones are across the multiverse (or perhaps just that part of it that can be considered the future of the-you-I'm-talking-to, or something) then you're going to have trouble preferring a 1% chance of death (or pain or poverty or whatever) to a 99% chance. I think this indicates that that's a bad way to use the language.

Comment author: qmotus 28 January 2016 09:19:27PM 0 points [-]

I'm not sure I entirely get what you're saying; but basically, yes, I can see trouble there.

But I think that, at its core, the point of QI is just to say that given MWI, conscious observers should expect to subjectively exist forever, and in that it differs from our normal intuition which is that without extra effort like signing up for cryonics, we should be pretty certain that we'll die at some point and no longer exist after that. I'm not sure that all this talk about identity exactly hits the mark, although it's relevant in the sense that I'm hopeful that somebody manages to show me why QI isn't as bad as it seems to be.

Comment author: gjm 28 January 2016 10:03:14PM 0 points [-]

QI or no QI, we should believe the following two things.

  1. In every outcome I will ever get to experience, I will still be alive.

  2. In the vast majority of outcomes 200 years from now (assuming no big medical breakthroughs etc.), measured in any terms that aren't defined by my experiences, I will be dead.

What QI mostly seems to add to this is some (questionable) definitions of words like "you", and really not much else.

Comment author: entirelyuseless 29 January 2016 02:21:41PM *  1 point [-]

I agree with qmotus that something is being added, not so much by QI, as by the many worlds interpretation. There is certainly a difference between "there will be only one outcome" and "all possible outcomes will happen."

If we think all possible outcomes will happen, and if you assume that "200 years from now, I will still be alive," is a possible outcome, it follows from your #1 that I will experience being alive 200 years from now. This isn't a question of how we define "I" - it is true on any definition, given that the premises use the same definition. (This is not to deny that I will also be dead -- that follows as well.)

If only one possible outcome will happen, then very likely 200 years from now, I will not experience being alive.

So if QI adds anything to MWI, it would be that "200 years from now, I will still be alive," and the like, are possible outcomes.

Comment author: gjm 29 January 2016 04:58:56PM 0 points [-]

There is certainly a difference between "there will be only one outcome" and "all possible outcomes will happen"

There's no observable difference between them. In particular, "happen" here has to include "happen on branches inaccessible to us", which means that a lot of the intuitions we've developed for how we should feel about something "happening" or not "happening" need to be treated with extreme caution.

If we think [...] it follows from your #1 that I will experience being alive 200 years from now. This isn't a question of how we define "I" - it is true on any definition

OK. But the plausibility -- even on MWI -- of (1) "all possible outcomes will happen" plus (2) "it is possible that 200 years from now, I will still be alive" depends on either an unusual meaning for "will happen" or an unusual meaning for "I" (or of course both).

Maybe the right way to put it is this. MWI turns "ordinary" uncertainty (not knowing how the world is or will be) into indexical uncertainty (not knowing where in the world "I" will be). If you accept MWI, then you can take something like "X will happen" to mean "I will be in a branch where X happens" (in which case you're only entitled to say it when X happens on all branches, or at least a good enough approximation to that) or to mean "there will be a branch where X happens" (in which case you shouldn't feel about that in the same way as you feel about things definitely happening in the usual sense).

So: yes, on some branch I will experience being alive 200 years from now; this indeed follows from MWI. But to go from there to saying flatly "I will experience being alive 200 years from now" you need to be using "I will ..." locutions in a very nonstandard manner. If your employer asks "Will you embezzle all our money?" and your intentions are honest, you will probably not answer "yes" even though presumably there's some very low-measure portion of the multiverse where for some reason you set out to do so and succeed.

Whether that nonstandard usage is a matter of redefining "I" (so it applies equally to every possible continuation of present-you, however low its measure) or "will" (so it applies equally to every possible future, however low its measure) is up to you. But as soon as you say "I will experience being alive 200 years from now" you are speaking a different language from the one you speak when you say "I will not embezzle all your money". The latter is still a useful thing to be able to say, and I suggest that it's better not to redefine our language so that "I will" stops being usable to distinguish large-measure futures from tiny-measure futures.

if QI adds anything to MWI, it would be that [...] are possible outcomes.

Unless they were already possible outcomes without MWI, they are not possible outcomes with MWI (whether QI or no QI).

What MWI adds is that in a particular sense they are not merely possible outcomes but certain outcomes. But note that the thing that MWI makes (so far as we know) a certain outcome is not what we normally express by "in 200 years I will still be alive".

Comment author: qmotus 30 January 2016 10:44:44AM *  1 point [-]

You raise a valid point, which makes me think that our language may simply be inadequate for describing living in many worlds. Because both "yes" and "no" seem to me to be valid answers to the question "will you embezzle all our money?".

I still don't think that it refutes QI, though. Take an observer at some moment: looking towards the future and ignoring the branches where they don't exist, they will see that every branch will lead to them living to be infinitely old; but every branch doesn't lead to them embezzling their employer's money.

But note that the thing that MWI makes (so far as we know) a certain outcome is not what we normally express by "in 200 years I will still be alive".

Do you mean that it's not certain because of the identity considerations presented, or that MWI doesn't even say that it's necessarily true in some branch?

Comment author: gjm 30 January 2016 11:46:40AM 0 points [-]

I still don't think that it refutes QI, though.

I don't think refuting is what QI needs. It is, actually, true (on MWI) that despite the train rushing towards you while you're tied to the tracks, or your multiply-metastatic inoperable cancer, or whatever other horrors, there are teeny-tiny bits of wavefunction (and hence of reality) in which you somehow survive those horrors.

What QI says that isn't just restating MWI is as much a matter of attitude to that fact as anything else.

I wasn't claiming that QI and inevitable embezzlement are exactly analogous; the former involves an anthropic(ish) element absent from the latter.

Do you mean that it's not certain because of the identity considerations presented, or that MWI doesn't even say that it's necessarily true in some branch?

The "so far as we know" was because of the possibility that there are catastrophes MWI gives you no way to survive (though I think that can only be true in so far as QM-as-presently-understood is incomplete or incorrect). The "not what we normally express by ..." was because of what I'd been saying in the rest of my comment.

Comment author: qmotus 30 January 2016 12:09:28PM 1 point [-]

I see. But I fail to understand, then, how this is uninteresting, as you said in your original comment. Let's say you find yourself on those train tracks: what do you expect to happen, then? What if a family member or other important person comes to see you for (what they believe to be) a final time? Do you simply say goodbye to them, fully aware that from your point of view, it won't be a final time? What if we repeat this a hundred times in a row?

Comment author: gjm 30 January 2016 03:20:14PM 1 point [-]

what do you expect to happen, then?

I have the following expectations in that situation:

  • In most possible futures, I will soon die. Of course I won't experience that (though I will experience some of the process), but other people will find that the world goes on without me in it.
  • Therefore, most of my possible trajectories from here end very soon, in death.
  • In a tiny minority of possible futures, I somehow survive. The train stops more abruptly than I thought possible, or gets derailed before hitting me. My cancer abruptly and bizarrely goes into complete remission. Or, more oddly but not necessarily more improbably: I get most of the way towards death but something stops me partway. The train rips my limbs off and somehow my head and torso get flung away from the tracks, and someone finds me before I lose too much blood. The cancer gets most of the way towards killing me, at which point some eccentric billionaire decides to bribe everyone involved to get my head frozen, and it turns out that cryonics works better than I expect it to. Etc.

I suspect you will want to say something like: "OK, very good, but what do you expect to experience?" but I think I have told you everything there is to say. I expect that a week from now (in our hypothetical about-to-die situation) all that remains of "my" measure will be in situations where I had an extraordinarily narrow escape from death. That doesn't seem to me like enough reason to say, e.g., that "I expect to survive".

Do you simply say goodbye to them [...]?

Of course. From my present point of view it almost certainly will be a final time. From the point of view of those ridiculously lucky versions of me that somehow survive it won't be, but that's no different from the fact that (MWI or no, QI or no) I might somehow survive anyway.

If we repeat this several times in a row, then actually my update isn't so much in the direction of QI (which I think has zero actual factual content; it's just a matter of definitions and attitudes) as in the direction of weird theories in which someone or something is deliberately keeping me alive. Because if I have just had ten successive one-in-a-billion-billion escapes, hypotheses like "there is a god after all, and for some reason it has plans that involve my survival" start to be less improbable than "I just got repeatedly and outrageously lucky".
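This update toward "weird theories" can be made concrete with a toy Bayes calculation. The one-in-a-billion-billion (1e-18) figure and the ten repetitions come from the comment; the prior assigned to the "deliberately kept alive" hypothesis is an invented illustrative number, not anything claimed here:

```python
# Toy Bayesian update: ten successive one-in-a-billion-billion escapes.
# The prior below is an arbitrary illustrative choice, assumed tiny.
prior_kept_alive = 1e-30          # assumed prior for "something keeps me alive"
likelihood_lucky = (1e-18) ** 10  # ten independent escapes by pure chance, ~1e-180

# Posterior odds (weird theory : pure luck): the weird theory predicts
# survival with likelihood ~1, pure chance with likelihood ~1e-180.
posterior_odds = prior_kept_alive / likelihood_lucky

print(f"posterior odds in favour of the weird theory: {posterior_odds:.1e}")
```

Even starting from an absurdly small prior, the likelihood ratio is so lopsided that the "weird" hypothesis dominates after the tenth escape.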

Comment author: turchin 30 January 2016 09:49:24PM 2 points [-]

I think that this attitude to QI is wrong, because the measure should be renormalized if the number of observers changes.

We can't count the worlds where I do not exist as worlds that influence my measure (or if we do, we have to add all the other worlds where I do not exist, which are infinite, and so my chance of existing in any next moment would be almost zero).

The number of "me"s will not change in the case of embezzlement. But if I die in some branches, it does change. This may be a little foggy in the case of quantum immortality, but if we use many-worlds immortality it becomes clear.

For example, suppose a million copies of a program are trying to calculate something inside an actual computer. The goal of each program is to calculate, say, pi to 10 digits of accuracy. But each knows that most copies of the program will be killed soon, before they are able to finish the calculation. Should a copy stop, knowing that with overwhelming probability it will be killed in the next moment? No, because if it stops, all its other copies stop too. So it must behave as if it will survive.

My point is that from a decision-theory point of view, a rational agent should behave as if QI works, and plan their actions and expectations accordingly. They should also expect that all their future experiences will be consistent with QI.

I will try to construct a clearer example. Suppose I have to survive many rounds of Russian roulette, with a 1 in 10 chance of survival in each. The only thing I can change is the following: after each round I will be asked if I believe in QI, and will be punished by electroshock if I say "NO". If I say "YES", I will be punished twice in that round, but never again in any later round.

If the agent believes in QI, it is rational for him to say "YES" at the beginning, get two shocks, and never get shocked again. If he "believes in measure", then it is rational for him to say "NO", getting one punishment at the beginning, an expected 0.1 punishment in the next round, 0.01 in the third, and so on, for a total of 1.111, which is smaller than 2.
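The expected-shock arithmetic above can be sketched as follows, using the 0.1 survival probability and shock counts stated in the comment (the infinite geometric series is truncated once the terms are negligible):

```python
# Expected total electroshocks in the Russian-roulette thought experiment.
P_SURVIVE = 0.1  # chance of surviving each round

# "YES" strategy: two shocks in the first round, none afterwards.
expected_yes = 2.0

# "NO" strategy: one shock now, plus one more in each later round you
# survive to reach -- a geometric series 1 + 0.1 + 0.01 + ... ~ 1.111
expected_no = sum(P_SURVIVE ** k for k in range(100))

print(f"YES strategy: {expected_yes:.3f} expected shocks")
print(f"NO  strategy: {expected_no:.3f} expected shocks")

# The QI-flavoured observation: conditional on actually surviving n
# rounds (the only perspective a surviving observer ever has), "NO"
# costs n + 1 shocks while "YES" still costs only 2.
shocks_no_given_surviving = lambda n: n + 1
shocks_yes_given_surviving = lambda n: 2
```

The measure-weighted expectation favours "NO" (about 1.111 versus 2), but every observer who finds themselves alive after a few rounds has paid more under "NO", which is the asymmetry the comment is driving at.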

My point here is that after several rounds, most people (if they were such agents) would change their decision and say "YES".

In the case of your example with the train, it means it would be rational for you to use part of your time not for speaking with relatives, but for planning your actions after you survive in the most probable way (the train derails).

Comment author: qmotus 31 January 2016 09:23:36AM 1 point [-]

I suspect you will want to say something like: "OK, very good, but what do you expect to experience?" but I think I have told you everything there is to say.

I'm tempted to, but I guess you have tried to explain your position as well as you can. I see what you are trying to say, but I still find it quite incomprehensible how that attitude could be adopted in practice. On the other hand, I feel like it (or somehow getting rid of the idea of continuity of consciousness, as Yvain has suggested, which I have no idea how to do) is quite essential for not being as anxious and horrified about quantum/big-world immortality as I am.

Comment author: entirelyuseless 31 January 2016 05:32:02PM 0 points [-]

But unless you are already absolutely certain of your position in this discussion, you should also update toward, "I was mistaken and QI has factual content and is more likely to be true than I thought it was."

Comment author: qmotus 28 January 2016 10:16:04PM 1 point [-]

I would say that QI (actually, MWI) adds a third thing, which is that "I will experience every outcome where I'm alive", but it seems that I'm not able to communicate my points very effectively here.

Comment author: gjm 29 January 2016 02:15:19PM 0 points [-]

How does MWI do that? On the face of it, MWI says nothing about experience, so how do you get that third thing from MWI? (I think you'll need to do it by adding questionable word definitions, assumptions about personal identity, etc. But I'm willing to be shown I'm wrong!)

Comment author: qmotus 29 January 2016 04:57:16PM 0 points [-]

I think this post by entirelyuseless answers your question quite well, so if you're still puzzled by this, we can continue there. Also, I don't see how QI depends on any additional weird assumptions. After all, you're using the word "experience" in your list of two points without defining it exactly. I don't see why it's necessary to define it either: a conscious experience is most likely simply a computational thing with a physical basis, and MWI and these other big world scenarios essentially say that all physical states (that are not prohibited by the laws of physics) happen somewhere.

Comment author: gjm 29 January 2016 06:09:21PM 0 points [-]

As you can see, I've replied at some length to entirelyuseless's comment.