# False vacuum: the universe playing quantum suicide

09 January 2013 05:04PM

Imagine that the universe is approximately as it appears to be (I know, this is a controversial proposition, but bear with me!). Further imagine that the many-worlds interpretation of quantum mechanics is true (I'm really moving out of Less Wrong's comfort zone here, aren't I?).

Now assume that our universe is in a situation of false vacuum - the universe is not in its lowest energy configuration. Somewhere, at some point, our universe may tunnel into true vacuum, resulting in an expanding bubble of destruction that will eat the entire universe at high speed, destroying all matter and life. In many worlds, such a collapse need not be terminal: life could go on in a branch of lower measure. In fact, anthropically, life will go on somewhere, no matter how unstable the false vacuum is.

So now assume that the false vacuum we're in is highly unstable - the measure of the branch in which our universe survives goes down by a factor of a trillion every second. We only exist because we're in the branch of measure a trillionth of a trillionth of a trillionth of... all the way back to the Big Bang.
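To get a feel for how extreme that is, here is a back-of-the-envelope sketch (the trillion-per-second factor is the one assumed above; the age of the universe is the standard figure):

```python
import math

# Assumed from the setup: surviving-branch measure shrinks by a factor
# of 10**12 every second.
DECAY_PER_SECOND = 1e12
UNIVERSE_AGE_SECONDS = 13.8e9 * 365.25 * 24 * 3600  # ~4.35e17 s

# Work in log10 space, since the raw measure underflows any float.
log10_measure = -math.log10(DECAY_PER_SECOND) * UNIVERSE_AGE_SECONDS
print(f"log10(surviving measure since the Big Bang) = {log10_measure:.3g}")
# A measure of roughly 10**(-5e18): absurdly tiny, yet anthropically
# we would still find ourselves in that branch.
```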

None of these assumptions make any difference to what we'd expect to see observationally: only a good enough theory can say that they're right or wrong. You may notice that this setup transforms the whole universe into a quantum suicide situation.

The question is, how do you go about maximising expected utility in this situation? I can think of a few different approaches:

1. Gnaw on the bullet: take the quantum measure as a probability. This means that you now have a discount factor of a trillion every second. You have to rush out and get/do all the good stuff as fast as possible: a delay of a second costs you a factor-of-a-trillion reduction in utility. If you are a negative utilitarian, you also have to rush to minimise the bad stuff, but you can also take comfort in the fact that the potential for negative utility across the universe is going down fast.
2. Use relative measures: care about the relative proportion of good worlds versus bad worlds, while assigning zero to those worlds where the vacuum has collapsed. This requires a natural zero to make sense, and can be seen as quite arbitrary: what would you do about entangled worlds, or about the non-zero probability that the vacuum-collapsed worlds may have worthwhile life in them? Would the relative measure user also put zero value on worlds that were empty of life for reasons other than vacuum collapse? For instance, would they be in favour of programming an AI's friendliness using random quantum bits, if they could be reassured that if friendliness fails, the AI would kill everyone immediately?
3. Deny the measure: construct a meta-ethical theory where only classical probabilities (or classical uncertainties) count as probabilities. Quantum measures do not: you care about the sum total of all branches of the universe. Universes in which the photon went through the top slit, went through the bottom slit, or was in an entangled state that went through both slits... to you, these are three completely separate universes, and you can assign totally unrelated utilities to each one. This seems quite arbitrary, though: how are you going to construct these preferences across the whole of the quantum universe, when you forged your current preferences on a single branch?
4. Cheat: note that nothing in life is certain. Even if we have the strongest evidence imaginable about vacuum collapse, there's always a tiny chance that the evidence is wrong. After a few seconds, that probability will be dwarfed by the discount factor of the collapsing universe. So go about your business as usual, knowing that most of the measure/probability mass remains in the non-collapsing universe. This can get tricky if, for instance, the vacuum collapsed more slowly than a factor of a trillion a second. Would you be in a situation where you should behave as if you believed vacuum collapse for another decade, say, and then switch to a behaviour that assumed non-collapse afterwards? Also, would you take seemingly stupid bets, like bets at a trillion trillion trillion to one that the next piece of evidence will show no collapse (if you lose, you're likely in the low-measure universe anyway, so the loss is minute)?
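As a toy sketch of option 4's bookkeeping, treating measure as probability with assumed numbers (a 1e-30 prior that the collapse evidence is wrong, and the trillion-per-second factor), a few seconds of continued existence already swamps even an overwhelming prior in favour of collapse:

```python
# Illustrative assumption: near-certain evidence *for* vacuum collapse.
prior_stable = 1e-30
# Per-second surviving-measure factor if collapse is real.
decay = 1e-12

def posterior_stable(seconds: float) -> float:
    """P(no collapse | still alive at t), treating measure as probability."""
    alive_if_collapse = (1 - prior_stable) * decay ** seconds
    return prior_stable / (prior_stable + alive_if_collapse)

for t in [0, 1, 2, 3, 5]:
    print(t, posterior_stable(t))
# By t = 3 seconds the "still alive" observation has flipped the
# posterior almost entirely onto the stable-vacuum hypothesis.
```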

Comment author: 09 January 2013 06:05:20PM 16 points [-]

Comment author: 09 January 2013 06:26:25PM 20 points [-]

Sadly, it turns out that ciphergoth is not present to observe any configuration where anthropics doesn't hurt ciphergoth's head. Sorry.

Comment author: 09 January 2013 06:23:02PM 10 points [-]

None of these assumptions make any difference to what we'd expect to see observationally:

Shouldn't I expect to live in a young universe? I would expect that scientists would soon uncover evidence that the universe is much younger than they previously believed, barely young enough so that observers such as myself had enough time to come into existence.

Comment author: 09 January 2013 06:31:48PM *  4 points [-]

Shouldn't I expect to live in a young universe?

If you treat quantum measure as probability, yes. If not... no.

Suppose I told you: I've just pressed a button that possibly reduces your measure by a half. Do you conclude that the button is likely to have failed?

Comment author: 09 January 2013 06:45:24PM 3 points [-]

Reducing by just a half might not be enough. But for enough of a reduction, yes.

Comment author: 11 January 2013 05:31:40AM 0 points [-]

It seems like there are different kinds of measure involved here. Assuming that quantum measure determines which entity we find ourselves instantiated in (alternatively, "who we are born as") seems distinct from, and potentially less defensible than, assuming that quantum measure should determine how we assign future expectations.

Comment author: 09 January 2013 10:37:47PM *  5 points [-]

Perhaps a little reframing is in order. Let's go with shminux's suggestion. You're locked in a room with a cake and a vial of deadly gas. The vial is quantumly unstable, and has a 1/2 measure of breaking every minute. Do you eat the cake now, or later, if you know you'd enjoy it a little more if you ate it in 10 minutes?

"Now" corresponds to not thinking you have quantum immortality.
"Later" corresponds to thinking you have quantum immortality.

The reason I think a reframing like this is better is that it doesn't by construction have a negligible probability if you choose #1 - humans are bad at hypotheticals, particularly when the hypothetical is nearly impossible.
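Under the measure-as-probability reading (option 1 style), the comparison in this reframing is just the following (utility numbers are assumed for illustration):

```python
# Cake dilemma: the vial has 1/2 measure of breaking each minute, and
# the cake is assumed a little more enjoyable if eaten at minute 10.
u_now, u_later = 1.0, 1.2
survival_10min = 0.5 ** 10           # measure of the still-alive branch

ev_now = u_now                       # eat immediately, guaranteed
ev_later = survival_10min * u_later  # only the surviving branch enjoys it

print(ev_now, ev_later)
# "Now" wins by a huge margin if you don't count on quantum immortality;
# conditioning on survival instead would make "later" win.
```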

Comment author: 10 January 2013 01:29:44AM *  0 points [-]

I still pick "later", despite not thinking I have quantum immortality.

As in, my behavior in this scenario won't differ depending on whether my fate hangs on quantum vials or classical coin flips.

Of course, when I say "me" I mean the utility function describing myself that I outlined in the other comment in this thread... In real life I can't really imagine eating cake at a moment like that. I'll try to come up with a more realistic dilemma later to see if my gut instinct matches my utility function model....

Comment author: 10 January 2013 01:44:20AM 1 point [-]

I still pick "later", despite not thinking I have quantum immortality

I dunno, your choice is walking in a duck-like manner. Of course, intuition pumps are tricky, and we're usually not consistent with respect to them. For example, if I asked you to lock yourself in that room for ten dollars per minute, you'd refuse. Perhaps some conceptual weirdness stems from the fact that in the thought experiment, there's no point at which you're "let out to find out if you're dead or not."

Comment author: 10 January 2013 01:57:42AM *  1 point [-]

There may well be an inconsistency, but that particular example doesn't seem to exploit it yet...

The general form: U = a·Present + b·FutureExpectations

U1 = present + b·[dead] → agree, die

U2 = present + b·[alive and $10 richer] → agree, live

U3 = present + b·[expected future with me alive] → refuse

Since U3 > 0.5·U2 + 0.5·U1, do not take the deal.

With the cake, however, ExpectedValueFromFutureCake = Null in the case that I am dead, which renders the entire utility function irrelevant. (within the system of the other comment)

Eat cake now (dead or alive): Ux = CakeRightNow + 0

Eat cake later: Uy = 0 + ExpectedValueFromFutureCake

Die without eating cake: Uz = 0 + null, therefore irrelevant

Ux < Uy, so do not eat the cake.

What I didn't mention before - as I've outlined it, this utility function won't ever get to eat the cake, since the expected future value is always greater. So there's that flaw. I'm not sure whether this signals that the utility function is silly, or that the cake is silly...maybe both.
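A minimal sketch of that never-eat property (the cake's growth rate is an assumed stand-in for "you'd enjoy it a little more later"):

```python
# Sketch of the model above: branches where you die contribute "null"
# (they are dropped entirely, not counted as zero), so only surviving
# branches enter the expectation and waiting is never penalised.
def cake_value(minute: int) -> float:
    return 1.0 + 0.02 * minute  # assumed: cake slowly gets more enjoyable

def utility_eat_at(minute: int) -> float:
    # Conditioning on survival: the 1/2-per-minute death measure is
    # renormalised away, so it drops out of the comparison.
    return cake_value(minute)

# Whatever minute you pick, minute+1 looks strictly better:
assert all(utility_eat_at(m + 1) > utility_eat_at(m) for m in range(100))
print("under this model you never eat the cake")
```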

However, my utility function is only silly in that you can't even eat the cake before near certain death - I'm guessing your model would have you eat the cake as soon as the probability of your death crossed a certain threshold. But if you were immortal and the cake was just sitting in front of you in all its ever-increasing utility, when would you eat it? The cake will generate a paradox - you always expect more from the cake in the future, yet you will never eat the cake (and once you realize this, your expectations from the cake should drop down to zero - which means you might as well eat it now, but if you wait just a bit longer...)

I think the cake breaks everything and we ought to not use it.

Comment author: 09 January 2013 07:35:15PM 7 points [-]

Let me just note that you don't need anything exotic like false vacuum for a setup like that; your garden-variety radioactive decay is no different. Your problem is equivalent to being a Schrodinger's cat.

Comment author: 10 January 2013 05:42:49AM 3 points [-]

Radioactive decay could happen without killing you. False vacuum is all-or-nothing.

Comment author: 10 January 2013 05:39:40PM 1 point [-]

Not if

Your problem is equivalent to being a Schrodinger's cat.

Comment author: 14 January 2013 05:02:34AM 2 points [-]

Let's postulate an additional law of physics that says any branch of the wavefunction that tunnels into true vacuum is dropped and the rest is renormalized to measure 1. The complexity penalty of this additional law seems low enough that we'd expect to be in this kind of universe pretty quickly (if we had evidence indicating highly unstable false vacuum). This is sort of covered by #4, I guess, so I'll answer the questions given there.

This can get tricky if, for instance, the vacuum collapsed more slowly than a factor of a trillion a second. Would you be in a situation where you should behave as if you believed vacuum collapse for another decade, say, and then switch to a behaviour that assumed non-collapse afterwards?

I don't see why that would happen, since the universe has already existed for billions of years. Wouldn't the transition either have happened long ago, or be so smooth that the probabilities are essentially constant within human timeframes?

Also, would you take seemingly stupid bets, like bets at a trillion trillion trillion to one that the next piece of evidence will show no collapse (if you lose, you're likely in the low measure universe anyway, so the loss is minute)?

I don't think the law of physics postulated above would provide any evidence that you can bet on.

Comment author: 14 January 2013 12:13:58PM 0 points [-]

I don't see why that would happen, since the universe has already existed for billions of years. Wouldn't the transition either have happened long ago, or be so smooth that the probabilities are essentially constant within human timeframes?

Yes, realistically. You'd have to have long-term horizons, or odd circumstances, to get that kind of behaviour in practice.

I don't think the law of physics postulated above would provide any evidence that you can bet on.

I'm not sure - see some of the suggestions by others in this thread. In any case, we can trivially imagine a situation where there is relevant evidence to be gathered, either of the observational or logical kind.

Comment author: 09 January 2013 11:16:16PM *  2 points [-]

So ... how do you tell if this is actually true or not? Without that, it's entirely unclear to me what difference this can make to your knowledge.

Comment author: 09 January 2013 06:16:42PM 2 points [-]

Are you sure that this hypothesis makes no observable predictions?

For one of many possible predictions, I ask whether the tunneling is truly independent of the arrangement of matter and energy in the area. If there is some arrangement that makes it more likely, we should see effects from it. Exotic matter physics experiments seem like a good candidate to create such. Perhaps high energy particle collisions, or Bose-Einstein condensates, or negative temperature quantum gases, Casimir effect setups, or something else.

If those experiments, when successful, decrease the measure of their resultant universe, we should expect to see them fail more often than normal.

So, a proposed test: set up an experimental apparatus that flips a quantum coin, and then either performs or doesn't perform the experiment in question. You expect to see the "not performed" result with p > 0.5 in your recorded data.

Of course, the effect may be very weak at that scale, if it "only" reduces the measure by a factor of 10^12 per second across the entire universe. You might have trouble getting enough of the measure shift from your experiment to detect something.

(Also, a minor nitpick: the per-microsecond discount rate should be about 0.0028%, as 1.000028^(1E6) ~= 1E12.)
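The nitpicked arithmetic checks out:

```python
# A factor of 10**12 per second corresponds to a per-microsecond
# growth rate r satisfying (1 + r)**1e6 == 1e12.
r = 10 ** (12 / 1e6) - 1
print(f"per-microsecond rate = {r:.6%}")  # about 0.0028%
assert abs((1 + r) ** 1e6 - 1e12) / 1e12 < 1e-6
```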

Comment author: 09 January 2013 06:34:21PM 0 points [-]

So, a proposed test:

That test only works if you take quantum measure as probability in the first place.

(Also, a minor nitpick: the per-microsecond discount rate should be about 0.0028%, as 1.000028^(1E6) ~= 1E12.)

Urg! Annoying mishap. I will correct it before you have time to read this response.

Comment author: 09 January 2013 06:44:42PM 3 points [-]

So, a proposed test:

That test only works if you take quantum measure as probability in the first place.

Are you certain of that? In what other way do you interpret measure that produces a different anticipated experience in this situation? Is there a good article that explains this topic?

Unless I'm missing something, it doesn't matter whether we take measure as probability or not, there will be an asymmetry in the measure between the experiment performed and experiment not performed pathways, when there would not be in the normal quantum coin case. Or are you saying that while the quantum measure is different in the different pathways, we have no way to measure it? If so, then what do you actually mean by quantum measure, given that we can't measure it? (Or is there some other way to measure it, that somehow can't be turned into a similar experimental test?) And, if we can't measure it or any effects from it, why do we believe it to be "real"? What causal pathway could possibly connect to our beliefs about it?

Comment author: 09 January 2013 06:47:05PM 0 points [-]

So, a proposed test:

That test only works if you take quantum measure as probability in the first place.

From the article:

None of these assumptions make any difference [...]

You then go on to offer biting the probability = measure bullet as one possible response. This indicated to me that the quoted statement was intended to be taken as independent of whether you bit that particular bullet.

Comment author: 10 January 2013 06:20:32PM *  2 points [-]

None of these assumptions make any difference to what we'd expect to see observationally

Seems wrong. Vacuum decay would depend on the state of the observable fields, and the condition of non-decay should affect the observed probabilities (rather than observing P(A) we are observing P(A | vacuum didn't decay) ). For a very simple example, if vacuum was pretty stable but "the creation of high-energy particles" triggered the decay, then we couldn't observe that interaction which is triggering decay. Or some particles would be popping out of nowhere from the fluctuations, preventing decay.

Comment author: 11 January 2013 12:34:08PM 0 points [-]

Yes, I'm starting to rethink that. But it still seems that we could have physics A, with vacuum decay, and physics B, without, such that internal observers made the same observations in either case.

Comment author: 11 January 2013 02:47:25PM *  -2 points [-]

Well, yes, but that will break the ceteris paribus for the anthropics.

I'd rather just see it as a different way of mathematically describing the same thing. Greatly simplifying, you can either have a law of X=Y, or you can have a plurality of solutions inclusive of one with X=Y and an unstable condition where when X!=Y everyone's twins "die". In a sense those are merely two different ways of writing down the exact same thing. It might be easier to express gravitation as survivor bias, which would make us use such a formalism, but otherwise the choice is arbitrary. Also, depending on how vacuum decay is triggered, one can obtain, effectively, an objective collapse theory.

With regards to probabilities, your continued existence constitutes incredibly strong evidence that the relevant 'probability' does not dramatically decrease over time.

Comment author: 09 January 2013 07:40:11PM *  0 points [-]

It is just a kind of objective collapse theory. Especially if vacuum decay is gravitational in nature (i.e. is triggered by massive objects). I've been thinking about that on and off since 2005, if not earlier - many worlds looks like the kind of stuff that could be utilized to make a theory smaller by permitting unstable solutions.

edit: to clarify: the decay would result in survivor bias, which would change the observed statistics. If a particle popping up out of nowhere prevents decay in that region, you'll see that particle popping up. Given that any valid theory with decay has to match the observations, it means that the survivor bias will now have to add up to what's empirically known. You can't just have this kind of vacuum decay on top of the laws of physics as we know them. You'd need different laws of physics which work together with the survivor bias to produce what we observe.

Comment author: 25 January 2013 01:38:03PM 1 point [-]

Very cogent comment. Why was it voted down?

Comment author: 09 January 2013 06:43:05PM 1 point [-]

I'll pick door #2, I think...

For instance, would they be in favour of programming an AI's friendliness using random quantum bits, if it could be reassured that if friendliness fails, the AI would kill everyone immediately?

If you already have an is_friendly() predicate that will kill everyone if the AI isn't friendly, why not make it just shut down the AI and try again? (If you don't have such a predicate, I don't know how you can guarantee the behavior of a random non-friendly AI)

Comment author: 11 January 2013 09:16:29AM 1 point [-]

Whatever weird physics arguments you make, it all adds up to reality. So I would look through the self-consistent theories and choose the one that didn't make me make decisions I disapprove of all the time.

Comment author: 11 January 2013 02:42:28AM 0 points [-]

So now assume that the false vacuum we're in is highly unstable - the measure of the branch in which our universe survives goes down by a factor of a trillion every second.

Or a factor of two every 25 milliseconds.
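The conversion is a one-liner to check:

```python
import math

# A factor of 10**12 per second is a factor of 2 every
# log(2)/log(10**12) seconds.
halving_interval = math.log(2) / math.log(1e12)  # in seconds
print(f"{halving_interval * 1000:.1f} ms")       # about 25 ms
```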

Comment author: 10 January 2013 03:40:47AM 0 points [-]

Comment author: 10 January 2013 12:58:34AM *  -1 points [-]

I think Option 2 comes closest, even if you throw anthropics and many-worlds completely out the window (so let's just hypothesize that there is a 99% probability that all life will die during any given second).

Once you are dead, you will cease to care about your utility function. It won't matter how much fun you had in the moments before you died, nor will it matter what happens after, since everything that matters is gone. Your last words to your loved ones will not matter either, because they will be gone too. There will be nothing.

On the off chance that all life does not end, however, the future continues to matter. You'll care what happens to you in the future, or, if dead, you'll still care about what happens to the people who come after you.

By this logic, you also have to take the "stupid" low collapse bets (if you lose, it's all ending soon so who cares?)

Practically speaking, if you embody this logic, you will not emphasize short term pleasures in the moments before you die. (Stuff like the "last meal" on death row, looking at the sky one last time, drugs, etc). You will only care about long term stuff, like what happens to your loved ones after you die and what message you leave them with.

Actually, I think that's a pretty accurate description of the stuff I imagine myself caring about pre-death. I don't feel that the part of my utility function which only cares about the present moment suddenly gets more weight prior to impending death. So yeah, I'll definitely be doing a modified version of option 2, in which you carry on as normal.

If I know with 100% certainty that all life will die, however... well, then I'm actually stumped on how I'm supposed to maximize utility. U = a·(short-term utility) + b·(long-term utility) = a·(short-term utility) + b·null, unable to complete operation... but this is not as pathological as you might imagine, since one never reaches 100% certainty.