Imagine that the universe is approximately as it appears to be (I know, this is a controversial proposition, but bear with me!). Further imagine that the many-worlds interpretation of quantum mechanics is true (I'm really moving out of Less Wrong's comfort zone here, aren't I?).

Now assume that our universe is in a situation of false vacuum - the universe is not in its lowest energy configuration. Somewhere, at some point, our universe may tunnel into true vacuum, resulting in an expanding bubble of destruction that will eat the entire universe at high speed, destroying all matter and life. In many worlds, such a collapse need not be terminal: life could go on in a branch of lower measure. In fact, anthropically, life will go on somewhere, no matter how unstable the false vacuum is.

So now assume that the false vacuum we're in is highly unstable - the measure of the branch in which our universe survives goes down by a factor of a trillion every second. We only exist because we're in the branch of measure a trillionth of a trillionth of a trillionth of... all the way back to the Big Bang.

None of these assumptions make any difference to what we'd expect to see observationally: only a good enough theory can say that they're right or wrong. You may notice that this setup transforms the whole universe into a quantum suicide situation.

The question is, how do you go about maximising expected utility in this situation? I can think of a few different approaches:

  1. Gnaw on the bullet: take the quantum measure as a probability. This means that you now have a discount factor of a trillion every second. You have to rush out and get/do all the good stuff as fast as possible: a delay of a second costs you a factor-of-a-trillion reduction in utility. If you are a negative utilitarian, you also have to rush to minimise the bad stuff, but you can also take comfort in the fact that the potential for negative utility across the universe is going down fast.
  2. Use relative measures: care about the relative proportion of good worlds versus bad worlds, while assigning zero to those worlds where the vacuum has collapsed. This requires a natural zero to make sense, and can be seen as quite arbitrary: what would you do about entangled worlds, or about the non-zero probability that the vacuum-collapsed worlds may have worthwhile life in them? Would the relative measure user also assign zero value to worlds that were empty of life for reasons other than vacuum collapse? For instance, would they be in favour of programming an AI's friendliness using random quantum bits, if they could be reassured that if friendliness fails, the AI would kill everyone immediately?
  3. Deny the measure: construct a meta-ethical theory where only classical probabilities (or classical uncertainties) count as probabilities. Quantum measures do not: you care about the sum total of all branches of the universe. A universe in which the photon went through the top slit, one in which it went through the bottom slit, and one in which it was in an entangled state that went through both slits... to you, these are three completely separate universes, and you can assign totally unrelated utilities to each one. This seems quite arbitrary, though: how are you going to construct these preferences across the whole of the quantum universe, when you forged your current preferences on a single branch?
  4. Cheat: note that nothing in life is certain. Even if we have the strongest evidence imaginable about vacuum collapse, there's always a tiny chance that the evidence is wrong. After a few seconds, that probability will be dwarfed by the discount factor of the collapsing universe. So go about your business as usual, knowing that most of the measure/probability mass remains in the non-collapsing universe. This can get tricky if, for instance, the vacuum collapsed more slowly than by a factor of a trillion a second. Would you be in a situation where you should behave as if you believed vacuum collapse for another decade, say, and then switch to a behaviour that assumed non-collapse afterwards? Also, would you take seemingly stupid bets, like bets at a trillion trillion trillion to one that the next piece of evidence will show no collapse (if you lose, you're likely in the low measure universe anyway, so the loss is minute)? A rough numerical sketch of this trade-off follows the list.
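For concreteness, here is a minimal numerical sketch (in Python) of how options 1 and 4 interact. The trillion-per-second measure loss is this post's assumption; the prior probability EPSILON that the whole collapse theory is simply wrong is a number picked purely for illustration.

```python
# A rough sketch of options 1 and 4, not a definitive model.  The 1e12-per-second
# measure loss is the post's assumption; EPSILON is an illustrative, made-up prior
# that the collapse theory is simply wrong.

DECAY_PER_SECOND = 1e-12   # surviving branch's measure is multiplied by this each second
EPSILON = 1e-6             # assumed prior probability that the collapse theory is wrong

def surviving_measure(t_seconds: float) -> float:
    """Measure of the non-collapsed branch after t seconds (option 1 treats this as probability)."""
    return DECAY_PER_SECOND ** t_seconds

def weight_on_no_collapse_theory(t_seconds: float) -> float:
    """Option 4: fraction of the remaining mass sitting in the 'theory is wrong, no collapse' world."""
    remaining = (1 - EPSILON) * surviving_measure(t_seconds) + EPSILON
    return EPSILON / remaining

for t in [0, 1, 2, 5]:
    print(t, surviving_measure(t), weight_on_no_collapse_theory(t))
# After a second or two, almost all the remaining mass sits on "no collapse",
# which is why option 4 says to go about your business as usual.
```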


Dear anthropics: PLEASE STOP HURTING MY HEAD. Thank you.

Sadly, it turns out that ciphergoth is not present to observe any configuration where anthropics doesn't hurt ciphergoth's head. Sorry.

None of these assumptions make any difference to what we'd expect to see observationally:

Shouldn't I expect to live in a young universe? I would expect that scientists would soon uncover evidence that the universe is much younger than they previously believed, barely young enough so that observers such as myself had enough time to come into existence.

Shouldn't I expect to live in a young universe?

If you treat quantum measure as probability, yes. If not... no.

Suppose I told you: I've just pressed a button that possibly reduces your measure by a half. Do you conclude that the button is likely to have failed?

Reducing by just a half might not be enough. But for enough of a reduction, yes.

It seems like there are different kinds of measure involved here. Assuming that quantum measure determines which entity we find ourselves instantiated in (alternatively, "who we are born as") seems distinct from, and potentially less defensible than, assuming that quantum measure should determine how we assign future expectations.

Let me just note that you don't need anything exotic like false vacuum for a setup like that; your garden-variety radioactive decay is no different. Your problem is equivalent to being a Schrodinger's cat.

Radioactive decay could happen without killing you. False vacuum is all-or-nothing.

Not if

Your problem is equivalent to being a Schrodinger's cat.

Let's postulate an additional law of physics that says any branch of the wavefunction that tunnels into true vacuum is dropped and the rest is renormalized to measure 1. The complexity penalty of this additional law seems low enough that we'd expect to be in this kind of universe pretty quickly (if we had evidence indicating highly unstable false vacuum). This is sort of covered by #4, I guess, so I'll answer the questions given there.

This can get tricky if, for instance, the vacuum collapsed more slowly than by a factor of a trillion a second. Would you be in a situation where you should behave as if you believed vacuum collapse for another decade, say, and then switch to a behaviour that assumed non-collapse afterwards?

I don't see why that would happen, since the universe has already existed for billions of years. Wouldn't the transition either have happened long ago, or be so smooth that the probabilities are essentially constant within human timeframes?

Also, would you take seemingly stupid bets, like bets at a trillion trillion trillion to one that the next piece of evidence will show no collapse (if you lose, you're likely in the low measure universe anyway, so the loss is minute)?

I don't think the law of physics postulated above would provide any evidence that you can bet on.

I don't see why that would happen, since the universe has already existed for billions of years. Wouldn't the transition either have happened long ago, or be so smooth that the probabilities are essentially constant within human timeframes?

Yes, realistically. You'd have to have long-term horizons, or odd circumstances, to get that kind of behaviour in practice.

I don't think the law of physics postulated above would provide any evidence that you can bet on.

I'm not sure - see some of the suggestions by others in this thread. In any case, we can trivially imagine a situation where there is relevant evidence to be gathered, either of the observational or logical kind.

Perhaps a little reframing is in order. Let's go with shminux's suggestion. You're locked in a room with a cake and a vial of deadly gas. The vial is quantumly unstable, and has a 1/2 measure of breaking every minute. Do you eat the cake now, or later, if you know you'd enjoy it a little more if you ate it in 10 minutes?

"Now" corresponds to not thinking you have quantum immortality
"Later" corresponds to thinking you have quantum immortality

The reason I think a reframing like this is better is that it doesn't, by construction, have a negligible probability if you choose #1 - humans are bad at hypotheticals, particularly when the hypothetical is nearly impossible.

I still pick "later", despite not thinking I have quantum immortality.

As in, my behavior in this scenario won't differ depending on whether my fate hangs on quantum vials or classical coin flips.

Of course, when I say "me" I mean the utility function describing myself that I outlined in the other comment in this thread... In real life I can't really imagine eating cake at a moment like that. I'll try to come up with a more realistic dilemma later to see if my gut instinct matches my utility function model....

I still pick "later", despite not thinking I have quantum immortality

I dunno, your choice is walking in a duck-like manner. Of course, intuition pumps are tricky, and we're usually not consistent with respect to them. For example, if I asked you to lock yourself in that room for ten dollars per minute, you'd refuse. Perhaps some conceptual weirdness stems from the fact that in the thought experiment, there's no point at which you're "let out to find out if you're dead or not."

There may well be an inconsistency, but that particular example doesn't seem to exploit it yet...

My utility function is U = a*Present + b*FutureExpectations.

Agree to the deal, and die: U1 = a*Present + b*[Expected Future with me Dead]

Agree, and live: U2 = a*Present + b*[Expected Future, alive and $10 richer]

Refuse: U3 = a*Present + b*[Expected Future with me Alive]

Since U3 > 0.5*U2 + 0.5*U1, I do not take the deal.
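Read literally, that comparison looks something like this sketch (the values of a, b and the future terms are placeholders I've made up; only the structure comes from the lines above):

```python
# A sketch of the lock-in-the-room deal above.  a, b and the future values are
# made-up placeholders; only the structure of U1, U2 and U3 comes from the comment.
a, b = 1.0, 1.0
present = 1.0          # value of the present moment
future_dead = 0.0      # expected future with me dead
future_rich = 10.0     # expected future, alive and $10 richer
future_alive = 9.9     # expected future with me alive (no extra $10)

U1 = a * present + b * future_dead     # agree to the deal, the vial breaks, die
U2 = a * present + b * future_rich     # agree, survive the minute
U3 = a * present + b * future_alive    # refuse the deal

# Refuse whenever U3 beats the 50/50 mixture of the two "agree" outcomes.
print("refuse the deal:", U3 > 0.5 * U2 + 0.5 * U1)   # True with these numbers
```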

With the cake, however, ExpectedValueFromFutureCake = null in the case that I am dead, which renders the entire utility function irrelevant (within the system of the other comment):

Eat cake now (dead or alive): Ux = CakeRightNow + 0

Eat cake later: Uy = 0 + ExpectedValueFromFutureCake

Die without eating cake: Uz = 0 + null, therefore irrelevant

Ux < Uy, so do not eat the cake.

What I didn't mention before - as I've outlined it, this utility function won't ever get to eat the cake, since the expected future value is always greater. So there's that flaw. I'm not sure whether this signals that the utility function is silly, or that the cake is silly...maybe both.

However, my utility function is only silly in that you can't even eat the cake before near certain death - I'm guessing your model would have you eat the cake as soon as the probability of your death crossed a certain threshold. But if you were immortal and the cake was just sitting in front of you in all its ever-increasing utility, when would you eat it? The cake will generate a paradox - you always expect more from the cake in the future, yet you will never eat the cake (and once you realize this, your expectations from the cake should drop down to zero - which means you might as well eat it now, but if you wait just a bit longer...)

I think the cake breaks everything and we ought not to use it.


Dying without eating cake surely has a utility. I mean, suppose I know I'm going to die tomorrow. I still assign different utilities to different ways I could spend today, I don't say the utility of today is null in all cases.

Or are you saying that it's possible to have a silly utility function that doesn't assign any value to eating the cake before dying compared to not eating the cake and then dying at the same time? Sure, but that utility function is silly.

Okay, since I'm one year wiser now, here is a new and improved utility formalization:

1) Torture human, and then wipe their memory of the event.

2) Pleasure human, and then wipe their memory of the event.

3) Torture human, and then do not wipe their memory of the event.

4) Pleasure human, and do not wipe their memory of the event.

Rank these in terms of preference from best to worst. My rank is 4, 2, 1, 3. You must share my preference ranking for this to work.

You must also accept the following proposition: Death is roughly analogous to a memory wipe.

In January, I tried to escape anthropic panic over "death" by trying to design a utility function which simply ignored possibilities that met certain criteria, while acknowledging that problems arise when you do this.

Today, I'll say that a death / memory wipe reduces the extent to which the preceding actions matter, because said actions no longer have long-term repercussions.

So under Stuart_Armstrong's hypothetical, we still continue behaving more or less as normal because if we must all die soon, our actions now matter a great deal less than if we do not die soon. So the sliver of chance in which we do not die must influence our actions a great deal more...an arbitrarily large number more, than the high chance that we do die.

Under this utility function, we do not completely freak out and stop all long-term investments if we find out that we are in a false vacuum which could collapse at any moment and that we've just been lucky so far.

Increasing near-term preference when faced with certain Doom is now fair game, taking into account that Doom decreases the weight of all preferences...so if there is any chance you aren't Doomed, don't throw away your resources on near-term stuff.

So...what happens to the cake now? If the cake doubles in tastiness each minute, but your expectation of being alive to eat it halves each minute, the expected value of the cake remains constant. However, if the pleasure of eating the cake + the memory of the cake lasts longer than an arbitrarily short epsilon time, then if you eat the cake sooner, you'll expect to feel the pleasure for longer (as in, you are less likely to die before you even get a chance to finish the damn cake and feel satisfied about it)...so you ought to eat the cake immediately. If the cake's tastiness grows by less than a factor of two per minute, you don't even need to resort to fancy arguments before you eat the cake.
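In symbols (writing u_0 for the cake's tastiness right now, a label introduced just for this check), the doubling and the halving cancel exactly:

$$\mathbb{E}[u(t)] = 2^{t}\,u_0 \cdot \left(\tfrac{1}{2}\right)^{t} = u_0$$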

However, you can still be persuaded to wait forever if the cake increases in tastiness above a certain rate, overpowering both the chance of death and any residual post-cake satisfactions.

TL;DR: You're probably right. I was attempting to create a utility function that wouldn't freak out if the evidence said that we were doomed and anthropics threw the "we haven't died yet so we're probably safe" argument out the window, without resorting to anthropic immortality, while also behaving more or less normally in most scenarios. I doubt I succeeded at this. I've got a long-winded explanation below, but feel free not to read it.

It's been a year since I made this comment, and there's a lot of context to catch up on. Reading this over, here is what I think this is going on in this conversation.

In the other comment I tried to make the case that our behavior would not change even if we constantly had a 99% chance of total universe annihilation every day, regardless of whether it was quantum or classical. As in, we shouldn't all switch to near-term thinking just because physicists tell us that we face a high chance of total annihilation every day and there is nothing we can do about it.

Why? Because in the universe where everything is annihilated, it doesn't matter what we do now. What we do now only matters in the case that we survive. Thus, even if there is only a small chance of survival, we should behave under the assumption that we survive.

Now, the above isn't necessarily true for all utility functions, or even true for you. It's just how I would wish to behave if I was told that false vacuum collapse has a 99% chance of occurring every day, and the only reason we're still here is classical luck / quantum immortality. I wouldn't want to switch to near-mode thinking and rush around fulfilling short-term goals just because I heard that news. My intuition says that the probability of unavoidable doom should not alter my behavior, no matter how large that probability is, particularly in the case where the Doom is a chronic, continuing phenomenon. If you knew you had a high chance of dying, you'd tell your loved ones that you loved them, because they'd live on and those words would have an effect. But if everyone, including those loved ones, is probably doomed all the time...well, it seems best to just continue on as usual.

So now, I have to attempt to formalize my preferences in terms of a utility function. The essence that must be represented is that in the event that certain scenarios occur, certain decisions become irrelevant, and so for the purpose of deciding between certain choices you should just pretend those scenarios can't happen.

A real world example where there is a chance of something that can render a choice irrelevant: you need to decide whether to bet or fold your hand in a poker game. You do not need to consider the probability that the fire alarm will ring and disrupt the game, because in the event that this happens your choice is irrelevant anyway. (This is not identical to the cake/false vacuum scenario, because both folding and betting are future-oriented actions. It is only an example of where a choice becomes irrelevant.)

I attempted to do this by assigning "null" value to those events. By doing this, my intuition was aligned with my formalized utility function in the case of Stuart Armstrong's scenario. Then Manfred comes along and creates a new scenario. Like the previous scenario, it examines how the likelihood of one's death affects the choice between a short-term preference and a long-term preference. It is, in theory, identical to the first scenario. Therefore, the utility function formalized in the other comment should in theory behave the same way, right? That utility function would choose to eat the cake later.

I pointed this out to Manfred. He then claimed:

I dunno, your choice is walking in a duck-like manner. Of course, intuition pumps are tricky, and we're usually not consistent with respect to them. For example, if I asked you to lock yourself in that room for ten dollars per minute, you'd refuse. Perhaps some conceptual weirdness stems from the fact that in the thought experiment, there's no point at which you're "let out to find out if you're dead or not."

What Manfred means is that, if I always choose my actions under the assumption that I survive, then I must be willing to put myself in danger (since I attempted to consider the situations where I die / humanity ends as irrelevant to my decision making).

My comment you replied to points out that Manfred is wrong about this because the utility function as formalized still prefers a future where I do not end / humanity does not end to the alternative.

Your criticism

I still assign different utilities to different ways I could spend today

is completely valid. Actually, I acknowledged this in the comment you replied to:

my utility function is only silly in that you can't even eat the cake before near certain death

Meaning, yes, there are some major problems with this (which you pointed out), but Manfred's criticism that this utility function puts itself in dangerous situations is not one of them. Also, it's worth noting that in the absence of certain death, no one can ever eat the cake...so I'm not sure if the weirdness is due to my formalization... or if it's just inheriting weird behavior from the pathological properties of the cake.


So now you understand the problem: I was trying to create a utility function which would cause someone to not significantly change their behavior in Stuart_Armstrong's scenario without making appeals to spooky anthropic immortality.

But as you pointed out, it's flawed because real humans do increase the weight on short-term hedons when they think they are about to die. (Although you might argue this preference is based on the false belief that death = sensory deprivation.)


So, just to clarify, here is the "normal" utility function which Manfred implicitly assumes

Eat cake now: U(now) = CurrentCakeTastiness

Postpone eating cake for one minute: U(postpone) = 0.5*futureCakeTastiness + 0.5*(zero, because you are dead and can't eat it)

This utility function has a tendency to eat the cake right away - though if the difference between the future cake and current cake is drastic enough, it could be persuaded to wait forever.

This utility function also rushes around and fulfills short term goals if physicists tell us that our universe has a 99% chance of ending every day. This behavior is NOT my preferences.

Here is the utility function which I postulated

Eat cake now: U(now) = CurrentCakeTastiness

Postpone eating cake for one minute: U(postpone) = futureCakeTastiness + (null, because you'll be dead and it won't matter anyway. The probability of this option is discounted.)

This utility function has a tendency to not eat the cake right away. I do not know if these actions mirror my preferences - the situation is too alien for my intuition. However, this utility function also has the property of behaving normally if physicists tell us that our universe has a 99% chance of ending every day, and I consider this to mirror my preferences.
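A minimal side-by-side sketch of those two rules (Python; the tastiness numbers and the survival probability are illustrative assumptions, not anything from the thread):

```python
def normal_postpone(current: float, future: float, p_survive: float) -> bool:
    """The 'normal' rule: weight the future cake by the chance of surviving to eat it."""
    return p_survive * future > current

def null_postpone(current: float, future: float) -> bool:
    """The 'null' rule: the dead branch is dropped entirely, so survival odds never enter."""
    return future > current

current_tastiness = 1.0
future_tastiness = 2.0     # cake doubles in tastiness over the next minute
p_survive_minute = 0.5     # measure of the branch in which the vial hasn't broken

print(normal_postpone(current_tastiness, future_tastiness, p_survive_minute))  # False: 0.5*2 is not > 1
print(null_postpone(current_tastiness, future_tastiness))                      # True: always wait
```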


Let's make the cake less alien by replacing the cake with something I'd actually care about in this situation.

Eat Cake = save C people from torture. There are a finite number of humans. If you don't save them, they get tortured until they die.

Every minute of not eating cake, the number of people you can save from torture increases (linearly? Exponentially? Asymptotically approaching a number? Depending on the function you might be able to save everyone from torture.)

X% chance of everyone involved dying every minute. When humans die via some other cause, a new human is born to replace them (and be tortured, or not, depending on your choices.) The population remains constant.

Now, the cake example perfectly mirrors Stuart Armstrong's example, without forcing you to accept a cake which you can never eat into your utility function.

If X=0%, I think you'd want to wait until you could save everyone if it was possible to do so. Failing that (say, C never exceeds 1/2 the human population) you'd want to at least wait until C was pretty damn close to the maximum.

If X = 100%, my formalized utility function says you'd eat the cake right away. That seems intuitive enough.

If X was between 0% and 100%, how would you behave?

My formalized utility function says that you would behave identically to if X was 0%. Is this silly and inhuman? Maybe... probably. I'm not certain that it's silly because I haven't come up with an example where it is obviously silly, but this is probably due to not having thought about it sufficiently. (Related question: is the human increase in short-term preference under a high chance of death a real preference, or just an artifact of thinking of death as analogous to "going away"?)

Manfred's implicit utility function says your behavior would take some intermediate form, unless you believed in Quantum Immortality and thought X% was decided by quantum dice, in which case you would behave identically to if X was 0%. I think the quantum portion of this is silly - even under Many Worlds, current-you ought to multiply your enjoyment by the number of future-yous that are there to enjoy it. Is it still silly in the classical scenario, where you start shifting to short-term for all preferences which become unfulfilled after death? I don't know, but it leads to some conclusions I don't like.

It is, of course, possible that I'm actually engaging in biased thinking here - as in, perhaps the real reason I prefer to ignore the possibility that we live in a universe where there is a false vacuum that might collapse at any moment is that behaving as if this were true is stressful.

None of these assumptions make any difference to what we'd expect to see observationally

Seems wrong. Vacuum decay would depend on the state of the observable fields, and the condition of non-decay should affect the observed probabilities (rather than observing P(A), we are observing P(A | vacuum didn't decay)). For a very simple example, if the vacuum was pretty stable but "the creation of high-energy particles" triggered the decay, then we couldn't observe the interaction which triggers decay. Or some particles would be popping out of nowhere from the fluctuations, preventing decay.
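A toy illustration of that conditioning effect (the trigger model and all the numbers below are invented purely for illustration):

```python
# Toy survivor-bias calculation: if branches containing event A are much more
# likely to trigger vacuum decay, conditioning on "no decay" suppresses A.
# p_A and the decay probabilities below are invented numbers, not physics.
p_A = 0.5                   # unconditional chance of A (say, a high-energy event)
p_decay_given_A = 0.9       # decay is far more likely in branches where A happens
p_decay_given_not_A = 0.1

p_no_decay = p_A * (1 - p_decay_given_A) + (1 - p_A) * (1 - p_decay_given_not_A)
p_A_given_no_decay = p_A * (1 - p_decay_given_A) / p_no_decay

print(f"P(A) = {p_A}, but P(A | no decay) = {p_A_given_no_decay:.2f}")   # ≈ 0.10
```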


Ridiculous idea: maybe that's why we don't see any superpartners in particle accelerators.

Or more interestingly it may occur in very ordinary circumstances - the vacuum does not have to be even metastable. Think of an (idealized) pin standing on its tip, on a glass plane. Suppose that whole thing is put on a moving train - for any train trajectory, there is an initial position for the pin such that it will not fall. That pin would seem to behave quite mysteriously - leaning back just before the train starts braking, etc. - even though the equations of motion are very simple. Seems like a good way to specify apparently complicated behaviours compactly and elegantly.

(edit2: Rather than seeing it as worlds being destroyed, I'd see this as a mathematically elegant single-world universe, or a mathematically elegant way to link quantum amplitudes to probabilities (which are the probabilities that the one surviving world will have such and such observations).)

Yes, I'm starting to rethink that. But it still seems that we could have physics A, with vacuum decay, and physics B, without, such that internal observers made the same observations in either case.

Well, yes, but that will break the ceteris paribus for the anthropics.

I'd rather just see it as a different way of mathematically describing the same thing. Greatly simplifying, you can either have a law of X=Y, or you can have a plurality of solutions, including one with X=Y, plus an unstable condition where, when X!=Y, everyone's twins "die". In a sense those are merely two different ways of writing down the exact same thing. If it were easier to express gravitation as survivor bias, that would push us to use such a formalism, but otherwise the choice is arbitrary. Also, depending on how vacuum decay is triggered, one can obtain, effectively, an objective collapse theory.

With regards to probabilities, your continued existence constitutes incredibly strong evidence that the relevant 'probability' does not dramatically decrease over time.

The problem with the many worlds interpretation and in particular with the quantum suicide thought experiment is that all sorts of completely arbitrary ridiculous things can happen. There should be a me who spontaneously grows extra arms, or cat ears and a tail, or turns into a robot, or finds himself suddenly on Mars without a spacesuit, or .... and yet here I am, with nothing like that ever having happened to me. If my survival ever requires something extraordinarily unlikely to happen in order to keep me going, just like a bad bill in congress (all of them), it's inevitably going to be attached to something else which is not, shall we say, status quo preserving.

As it relates to this scenario, imagine an Earth about to be annihilated by the oncoming shockwave of the expanding bubble of vacuum decay. Whether it is destroyed a moment later or not isn't a binary decision. That shockwave is there, like it or not, and there won't be an alternate universe, splitting off from one which is later destroyed, which is not later destroyed. That bubble isn't going away in any universe; it's there, and it's coming. The only way the quantum suicide thought experiment can work in the way you describe is if we always find ourselves in a universe where the bubble nucleation event never happened in the first place, whether it happened a billion light years away a billion years ago, or in an alien laboratory on Alpha Centauri 4.3 years ago.

But if you apply this logic to ANY bad thing that happens, you can see that it clearly fails. One should expect to always find oneself in a universe where nothing bad ever happens to oneself. I was hit by a car and got brain damage, among other physical damage. No quantum kung fu thought experiment saved me from that. My identity CHANGED that moment. I have lost about 50 IQ points. When I changed, that other me died. If I were to die all the way, that would just be a change. That's all death ever is. A change. Simply a more drastic one than the usual bad things that happen to oneself. Where do you draw the line? Is there something special about death - is that the threshold?

This is why I say the quantum suicide thought experiment makes no sense to begin with. If that bubble of deeper vacuum is nucleated, our fate at that moment is SEALED, and no alternate universe splitting will save us. For your proposition to work, some force would have to stop bad things from ever BEGINNING to happen to us. And there is no such force at work. Believe me, I know from experience.

I guess I should say something though - there actually IS a tiny, tiny possibility for a massive nucleated bubble of deeper vacuum to just rearrange its energy distribution and all go away, since anything which can happen in the forward direction can be undone, with an extraordinary amount of luck. So if you're a many-worlds purist, I guess I haven't really disproved it. But again, I would still expect to live on a much stranger world than the one that's here if I always lived on one of the universe branches that contrived itself to survive somehow. Just like, if you were to find ice spontaneously forming in a pot of boiling water, it would more likely than not, not be an ice CUBE. And just like, if this were a Poincaré recurrence universe, there should be things observable in the universe which are not consistent with a universe that directly developed from a big bang - there should be weird things out of place.

Are you sure that this hypothesis makes no observable predictions?

For one of many possible predictions, I ask whether the tunneling is truly independent of the arrangement of matter and energy in the area. If there is some arrangement that makes it more probable, we should see effects from it. Exotic-matter physics experiments seem like a good candidate to create such arrangements. Perhaps high energy particle collisions, or Bose-Einstein condensates, or negative temperature quantum gases, Casimir effect setups, or something else.

If those experiments, when successful, decrease the measure of their resultant universe, we should expect to see them fail more often than normal.

So, a proposed test: set up an experimental apparatus that flips a quantum coin, and then either performs or doesn't perform the experiment in question. You expect to see the "not performed" result with p > 0.5 in your recorded data.

Of course, the effect may be very weak at that scale, if it "only" reduces the measure of the universe by a factor of 10^12 per second across the entire universe. You might have trouble getting enough of the measure from your experiment to detect something.

(Also, a minor nitpick: the per-microsecond discount rate should be about 0.0028%, as 1.000028^(1E6) ~= 1E12.)
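For what it's worth, a quick check of that rate conversion (taking the trillion-per-second factor from the post as an assumption):

```python
# Convert the post's assumed factor-of-1e12-per-second measure loss into a
# per-microsecond discount rate r such that (1 + r)**1_000_000 == 1e12.
per_second_factor = 1e12
microseconds_per_second = 1_000_000

r = per_second_factor ** (1 / microseconds_per_second) - 1
print(f"per-microsecond discount rate ≈ {r:.6%}")   # ≈ 0.002763%, i.e. about 0.0028%
print((1 + r) ** microseconds_per_second)           # ≈ 1e12 (sanity check)
```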

So, a proposed test:

That test only works if you take quantum measure as probability in the first place.

(Also, a minor nitpick: the per-microsecond discount rate should be about 0.0028%, as 1.000028^(1E6) ~= 1E12.)

Urg! Annoying mishap. I will correct it before you have time to read this response.

So, a proposed test:

That test only works if you take quantum measure as probability in the first place.

Are you certain of that? In what other way do you interpret measure that produces a different anticipated experience in this situation? Is there a good article that explains this topic?

Unless I'm missing something, it doesn't matter whether we take measure as probability or not: there will be an asymmetry in the measure between the experiment-performed and experiment-not-performed pathways, when there would not be in the normal quantum coin case. Or are you saying that while the quantum measure is different in the different pathways, we have no way to measure it? If so, then what do you actually mean by quantum measure, given that we can't measure it? (Or is there some other way to measure it, that somehow can't be turned into a similar experimental test?) And, if we can't measure it or any effects from it, why do we believe it to be "real"? What causal pathway could possibly connect it to our beliefs about it?

So, a proposed test:

That test only works if you take quantum measure as probability in the first place.

From the article:

None of these assumptions make any difference [...]

The article then goes on to offer biting the probability = measure bullet as one possible response. This indicated to me that the quoted statement was intended to be taken as independent of whether you bit that particular bullet.

A similar problem can be created with other scenarios. For instance, suppose you are planning on spending all day doing some unpleasant activity that will greatly benefit you in the future. Omega tells you that some mad scientist plans on making a very large number of independent lockstep-identical brain* emulators of you that will have the exact same experiences you will be having today, and then be painlessly stopped and deleted after the day is up (assume the unpleasant activity is solitary, in order to avoid complications about him having to simulate other people too for the copies to have truly identical experiences).

Should you do the unpleasant activity, or should you sacrifice your future to try to make your many-copied day a good one?

I'm honestly unsure about this and it's making me a little sick. I don't want to have to live a crappy life because of weird anthropic scenarios. I have really complicated, but hopefully not inconsistent, moral values about copies of me, especially lockstep-identical ones, but I'm not sure how to apply them here. Generally I think that lockstep-identical copies whose lifetime utility is positive don't add any value (I wouldn't pay to create them), but it seems wrong to apply this to lockstep-identical copies with negative lifetime utility (I might pay to avoid creating them). It seems obviously worse to create a hundred tortured lockstep copies than to create ten.

One fix that would allow me to act normally would be to add a stipulation to my values that in these kinds of weird anthropic scenarios where most of my lockstep copies will die soon (and this is beyond my control), I get utility from taking actions that allow whichever copies survive to live good lives. If I decide to undergo the unpleasant experience for my future benefit, even if I have no idea whether I'm going to be a surviving copy or not (but am reasonably certain there will be at least some surviving copies), I get utility that counterbalances the unpleasantness.

Obviously such a value would have to be tightly calibrated to avoid generating behavior as crazy as the problem I devised it to solve. It would have to only apply in weird lockstep anthropic scenarios and not inform the rest of my behavior at all. The utility would have to be high enough to counterbalance any disutility all of the mes would suffer, but low enough to avoid creating an incentive to create suffering, soon-to-die, lockstep-identical copies. It would also have to avoid creating an incentive for quantum suicide. I think it is possible to fit all these stipulations.

In fact, I'm not sure it's really a severe modification of my values at all. The idea of doomed mes valiantly struggling to make sure that at least some of them will have decent lives in the future has a certain grandeur to it, like I'm defying fate. It seems like there are far less noble ways to die.

If anyone has a less crazy method of avoiding these dilemmas though, please, please, please let me know. I like Wei Dai's idea, but am not sure I understand MWI enough to fully get it. Also, I don't know if it would apply to the artificially-created copy scenario in addition to the false vacuum one.

*By "lockstep" I mean that the copy will not just start out identical to me. It will have identical experiences to me for the duration of its lifetime. It may have a shorter lifespan than me, but for its duration the experiences will be the same (for instance, a copy of 18 year old me may be created and be deleted after a few days, but until it is deleted it will have the same experiences as 18 year old me did).

If anyone has a less crazy method of avoiding these dilemmas though, please, please, please let me know.

Ignore them?

Why do you need answers to these questions, so intensely that being unsure is "making [you] a little sick"? There is no Omega, and he/she/it is not going to show up to create these scenarios. What difference will an answer make to any practical decision in front of you, here and now?

There is no Omega, and he/she/it is not going to show up to create these scenarios. What difference will an answer make to any practical decision in front of you, here and now?

While Omega is not real, it seems possible that naturally occurring things like false vacuum states and Boltzmann brains might be. I think that the possibility those things exist might create similar dilemmas, am disturbed by this fact, and wish to know how to resolve them. I'm pretty much certain there's no Omega, but I'm not nearly as sure about false vacuums.


I'm reminded of this part of The Moral Void.

If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic—anywhere you care to put it—then what if you get a chance to read that stone tablet, and it turns out to say "Pain Is Good"? What then?

Maybe you should hope that morality isn't written into the structure of the universe. What if the structure of the universe says to do something horrible?

And if an external objective morality does say that the universe should occupy some horrifying state... let's not even ask what you're going to do about that. No, instead I ask: What would you have wished for the external objective morality to be instead? What's the best news you could have gotten, reading that stone tablet?

Go ahead. Indulge your fantasy. Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted? If you could write the stone tablet yourself, what would it say?

Maybe you should just do that?

I mean... if an external objective morality tells you to kill people, why should you even listen?

So ... how do you tell if this is actually true or not? Without that, it's entirely unclear to me what difference this can make to your knowledge.

I'll pick door #2, I think...

For instance, would they be in favour of programming an AI's friendliness using random quantum bits, if they could be reassured that if friendliness fails, the AI would kill everyone immediately?

If you already have an is_friendly() predicate that will kill everyone if the AI isn't friendly, why not make it just shut down the AI and try again? (If you don't have such a predicate, I don't know how you can guarantee the behavior of a random non-friendly AI)

Whatever weird physics arguments you make, it all adds up to reality. So I would look through the self-consistent theories and choose the one that didn't make me make decisions I disapprove of all the time.


So now assume that the false vacuum we're in is highly unstable - the measure of the branch in which our universe survives goes down by a factor of a trillion every second.

Or a factor of two every 25 milliseconds.


I quite like option 1. I've always been intuitively uncomfortable with the notion that I should e.g. save money and then use it altruistically just before I die rather than using money altruistically now.

I think Option 2 comes closest, even if you throw anthropics and many-worlds completely out the window (so let's just hypothesize that there is a 99% probability that all life will die during any given second)

Once you are dead, you will cease to care about your utility function. It won't matter how much fun you had in the moments before you died, nor will it matter what happens after, since everything that matters is gone. Your last words to your loved ones will not matter either, because they will be gone too. There will be nothing.

On the off chance that all life does not end, however, the future continues to matter. You'll care what happens to you in the future, or, if dead, you'll still care about what happens to the people who come after you.

By this logic, you also have to take the "stupid" low collapse bets (if you lose, it's all ending soon so who cares?)

Practically speaking, if you embody this logic, you will not emphasize short term pleasures in the moments before you die. (Stuff like the "last meal" on death row, looking at the sky one last time, drugs, etc). You will only care about long term stuff, like what happens to your loved ones after you die and what message you leave them with.

Actually, I think that's a pretty accurate description of the stuff I imagine myself caring about pre-death. I don't feel that the part of my utility function which only cares about the present moment suddenly gets more weight prior to impending death. So yeah, I'll definitely be doing a modified version of option 2, in which you carry on as normal.

If I know with 100% certainty that all life will die, however...well, then I'm actually stumped on how I'm supposed to maximize utility. U = a*(short-term utility) + b*(long-term utility) = a*(short-term utility) + b*null, unable to complete operation... but this is not as pathological as you might imagine, since one never reaches 100% certainty.

How could humanity have evolved a morality that cares about this?

It is just a kind of objective collapse theory, especially if vacuum decay is gravitational in nature (i.e. is triggered by massive objects). I've been thinking about that on and off since 2005, if not earlier - the many worlds look like the kind of stuff that could be utilized to make a theory smaller by permitting unstable solutions.

edit: to clarify: the decay would result in survivor bias, which would change the observed statistics. If a particle popping up out of nowhere prevents decay in that region, you'll see that particle popping up. Given that any valid theory with decay has to match the observations, it means that the survivor bias will now have to add up to what's empirically known. You can't just have this kind of vacuum decay on top of the laws of physics as we know them. You'd need different laws of physics which work together with the survivor bias to produce what we observe.

Very cogent comment. Why was it voted down?