
entirelyuseless comments on [spoilers] EY's “A Girl Corrupted...?!” new story is an allegorical study of quantum immortality? - Less Wrong Discussion

Post author: Algernoq, 19 February 2016 11:02AM, 5 points



You are viewing a single comment's thread.

Comment author: Manfred, 20 February 2016 06:45:05AM, 1 point

Okay, so to go into more detail:

The naive version I mean goes something like "In the future, the universe will have amplitude spread across a lot of states. But I only exist to care in a few of those states. So it's okay to make decisions that maximize my expected-conditional-on-existing utility." This is the one that's basically evidential decision theory - it makes the mistake (where "mistake" is meant according to what I think are ordinary human norms of good decision-making) of conditioning on something that hasn't actually happened when making decisions. Just like an evidential decision theory agent will happily bribe the newspaper to report good news (because certain newspaper articles are correlated with good outcomes), a naive QI agent will happily pay assassins to kill it if it has a below-average day.
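To make the contrast concrete, here's a toy calculation (a sketch with invented numbers, nothing from the thread itself): a quantum-suicide lottery where the agent can rig a device that kills it in every branch where its ticket loses. Conditioning on existing makes rigging the device look like a guaranteed win; counting all the branches makes it look like what it is:

```python
# Toy model of the two expected-utility rules (all numbers are invented).
# Branch weights stand in for squared quantum amplitude.

P_WIN = 1e-6                       # weight of branches where the ticket wins
U_WIN, U_NORMAL, U_DEAD = 100.0, 10.0, 0.0

def eu_all_branches(rig_device: bool) -> float:
    """Ordinary expected utility: every branch counts; dead branches score 0."""
    if rig_device:
        return P_WIN * U_WIN + (1 - P_WIN) * U_DEAD      # ~0.0001
    return P_WIN * U_WIN + (1 - P_WIN) * U_NORMAL        # ~10.0

def eu_conditional_on_existing(rig_device: bool) -> float:
    """The 'naive QI' rule: average utility only over branches where you survive."""
    if rig_device:
        return U_WIN               # the only survivors are winners: 100.0
    return P_WIN * U_WIN + (1 - P_WIN) * U_NORMAL        # ~10.0

# The conditional rule prefers rigging the device; the all-branches rule
# (and ordinary human norms of good decision-making) does not.
print(eu_all_branches(True), eu_all_branches(False))
print(eu_conditional_on_existing(True), eu_conditional_on_existing(False))
```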

The second version I was thinking of (and I'm probably failing a Turing test here) goes something like "But that almost-ordinary calculation of expected value is not what I meant - the amplitude of quantum states shouldn't be interpreted as probability at all. They all exist simultaneously at each time step. This is why I have equal probability - actual probability deriving from uncertainty - of being alive no matter how much amplitude I occupy. Instead, I choose to calculate expected value by some complicated function that merely looks a whole lot like naive quantum immortality, driven by this intuition that I'm still alive so long as the amplitude of that event is nonzero."

Again, there is no counterargument that goes "no, this way of choosing actions is wrong according to the external True Source Of Good Judgment." But it sure as heck seems like quantum amplitudes do have something to do with probability - they show up if you try to encode or predict your observations with small Turing machines, for example.

Comment author: entirelyuseless, 20 February 2016 04:50:10PM, 2 points

How much amplitude is non-negligible? It seems like the amplitude that you have now is probably already negligible: in the vast majority of the multiverse, you do not exist or are already dead. So it doesn't seem to make much sense to base expected value calculations on the amount of amplitude left.

Comment author: Viliam, 22 February 2016 09:06:54AM, 1 point

I'd say that you should not care about how much amplitude you have now (because there's nothing you can do about it now), only about how much of it you will maintain in the future. The reason, roughly, is that this is the amplitude-maximization algorithm.

Yeah, compared with the whole universe (or multiverse), even your best case is already pretty close to zero. But there's nothing you can do about that. You should only care about things you can change. (Of course, once in a while you should check whether your ideas about "what you can change" correspond to reality.)

It's similar to how you shouldn't buy lottery tickets, because it's not worth doing... however, if you find yourself in a situation where you somehow got the winning ticket (because you bought it anyway, or someone gave it to you - it doesn't matter), you should try to spend the money wisely. The chance of winning the lottery is small before it happens, but huge once you are already inside the winning branch. You shouldn't throw the money away just because "the chances of this happening were small anyway". Your existence here and now is an example of an unlikely ticket that won anyway.

Intuitively, if you imagine the Everett branches, you should imagine yourself as a programmer of millions of tiny copies of you living in the future. Each copy should do the best it can, ignoring the other copies. But if there is something you can do now to increase the average happiness of the copies, you should do it, even if it makes some copies worse off. That's the paradox -- you (now) are allowed to harm some copies, but no copy is allowed to harm itself. For example, by not buying the lottery ticket you are doing great harm to the copy living in the future where your "lucky numbers" won. That's okay, because in return a million other copies got an extra dollar to spend. But if you buy the ticket anyway, the lucky copy is required to maximize the benefit it gets from the winnings.
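As a minimal sketch of that arithmetic (the numbers are invented; only the sign of the average matters): the deciding self maximizes the measure-weighted average over the copies, even though that decision harms one of them:

```python
# Invented numbers: a lottery ticket with negative expected value.
TICKET_PRICE = 1.0
JACKPOT = 1_000_000.0
P_JACKPOT = 1e-7                   # branch measure where the "lucky numbers" win

def avg_wealth_change(buy_ticket: bool) -> float:
    """Measure-weighted average change in wealth across all future copies."""
    if not buy_ticket:
        return 0.0                                 # every copy keeps its dollar
    return P_JACKPOT * JACKPOT - TICKET_PRICE      # -0.9: a net loss on average

print(avg_wealth_change(True), avg_wealth_change(False))
# The deciding self refuses the ticket (average change -0.9 < 0), harming the
# one lucky copy. But a copy that finds itself holding a winning ticket anyway
# should still spend the winnings wisely -- its branch measure was small in
# advance, but it is the only measure that copy has left to work with.
```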

Same for "quantum immortality". If you find yourself in a situation where, thanks to some unlikely miracle, you are alive in the year 3000, good for you; enjoy the future (assuming it is enjoyable, which is far from certain). But the today-you should not make plans that include killing most of the future copies just because they didn't win some kind of lottery.

Comment author: qmotus, 22 February 2016 10:49:36AM, 1 point

But the today-you should not make plans that include killing most of the future copies just because they didn't win some kind of lottery.

I don't think the "killing most of your future copies" scenarios are very interesting here. I have presented a few scenarios that I think are somewhat more relevant elsewhere in this thread.

In any case, I'm not sure I'm buying the amplitude-maximization thing. Supposedly there's an infinite number of copies of me that live around 80 more years at most, so most of the amplitude is in Everett branches where that happens. Then there are some copies, with a much smaller amplitude (but again there should be an infinite number of them), who will live forever. If I'm just maximising utility, why wouldn't it make sense to sacrifice all the other copies so that the ones who will live forever have at least a decent life? How can we make any utility calculations like that?

If you find yourself in a situation where, thanks to some unlikely miracle, you are alive in the year 3000

"If". The way I see it, the point of QI is that, given some relatively uncontroversial assumptions (MWI or some other infinite universe scenario is true and consciousness is a purely physical thing), it's inevitable.

Comment author: gjm, 22 February 2016 01:48:39PM, 0 points

Then there are some copies [...] who will live forever.

The ones who actually live for ever may have infinitesimal measure, in which case even with no discount rate an infinite change in their net utility needn't outweigh everything else.

I will make a stronger claim: they almost certainly do have infinitesimal measure. If there is a nonzero lower bound on Pr(death) in any given fixed length of time, then Pr(alive after n years) decreases exponentially with n, and Pr(alive for ever) is zero.
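A quick numerical check of that bound (the death-rate floor here is an arbitrary, absurdly optimistic stand-in): if Pr(death in any given year) never falls below p, then Pr(alive after n years) is at most (1 - p)^n, which decays exponentially:

```python
# If Pr(death in any given year) never falls below p, then
# Pr(alive after n years) <= (1 - p)**n, which tends to 0 as n grows.
p = 1e-9                          # an absurdly optimistic per-year death floor
for n in (10**9, 10**10, 10**11):
    print(n, (1 - p) ** n)
# prints roughly 0.37, 4.5e-05, 3.7e-44 -- the "alive for ever" limit is 0.
```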

Comment author: qmotus, 25 February 2016 12:50:12PM, 0 points

What if we consider not just the probability of not dying, but also of, say, dying and being resurrected by someone in the far future? In general, the probability that for a state of mind at time t there exists a state of mind at time t+1 such that, from a subjective point of view, there is no discontinuity. I find it hard to see how that probability could ever be strictly zero, even though what you say kind of makes sense.

Comment author: gjm, 25 February 2016 01:05:50PM, 0 points

If there is any sequence of events with nonzero probability (more precisely: whose probability of happening in a given period of time never falls below some fixed positive value) that causes the irrecoverable loss of a given mind-state, then with probability 1 any given mind-state will not persist literally for ever.

(It might reappear, Boltzmann-brain-style, by sheer good luck. In some random place and at some random time. It will usually then rapidly die because it's been instantiated in some situation where none of what's required to keep it around is present. In a large enough universe this will happen extremely often -- though equally often what will reappear is a mind-state similar to, but subtly different from, the original; there is nothing to make this process prefer mind-states that have actually existed before. I would not consider this to be "living for ever".)

Comment author: qmotus, 26 February 2016 09:01:32AM, 0 points

I would not consider this to be "living for ever"

Maybe not. But let's suppose there was no "real world" at all, only a huge number of Boltzmann brains, some of which, from a subjective point of view, look like continuations of each other. If for every brain state there is a new spontaneously appearing and disappearing brain somewhere that feels like the "next state", wouldn't this give a subjective feeling of immortality, and wouldn't it be impossible for us to tell the difference between this situation and the "real world"?

In fact, I think our current theories of physics suggest this is the case, but since it leads to the Boltzmann brain paradox, maybe it actually demonstrates a major flaw in those theories instead. I suppose similar problems apply to some other hypothetical situations, like nested simulations.

Comment author: Manfred, 20 February 2016 08:44:39PM, 0 points

Is this feedback that I should update my model of the second sort of people? I'll take it as such, and edit the post above.