# How Many LHC Failures Is Too Many?

Recently the Large Hadron Collider was damaged by a mechanical failure. This requires the collider to be warmed up, repaired, and then cooled down again, so we're looking at a two-month delay.

Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"

This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction. However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all. (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)

As you know, I don't spend much time worrying about the Large Hadron Collider when I've got much larger existential-risk-fish to fry. However, there's an exercise in probability theory (which I first picked up from E.T. Jaynes) along the lines of, "How many times does a coin have to come up heads before you believe the coin is fixed?" This tells you how low your prior probability is for the hypothesis. If a coin comes up heads only twice, that's definitely not a good reason to believe it's fixed, unless you already suspected from the beginning. But if it comes up heads 100 times, it's taking you too long to notice.

So - taking into account the previous cancellation of the Superconducting Supercollider (SSC) - how many times does the LHC have to fail before you'll start considering an anthropic explanation? 10? 20? 50?

After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?

## Comments (111)

Old"But if it comes up heads 100 times, it's taking you too long to notice"

Ros. Heads. (He puts it in his bag. The process is repeated.) Heads. (Again.) Heads. (Again.) Heads. (Again.)
Guil. (Flipping a coin) There is an art to the building of suspense.
Ros. Heads.
Guil. (Flipping another) Though it can be done by luck alone.
Ros. Heads.
Guil. If that's the word I'm after.
Ros. (Raises his head) 76! (Guil gets up but has nowhere to go. He spins the coin over his shoulder without looking at it.) Heads.
Guil. A weaker man might be moved to re-examine his faith, if in nothing else at least in the law of probability. (He flips a coin back over his shoulder.)
Ros. Heads.
Guil. (Musing) The law of probability, it has been asserted, is something to do with the proposition that if six monkeys - (He has surprised himself) if six monkeys were. . .
Ros. Game?
Guil. Were they?
Ros. Are you?

-- Rosencrantz & Guildenstern Are Dead, Tom Stoppard, Act I

Might want to reformat that, looks like markdown did you in.

Perhaps the question could also be asked this way: How many times does the LHC have to inexplicably fail before we take it as scientific confirmation that world-destroying black holes and/or strange particles are indeed produced by LHC-level collisions? Would we treat such a scenario as a successful experimental result for the LHC?

I wouldn't describe a result that eliminated the species conducting the experiment in the majority of world-branches as "successful", although I suppose the use of LHCs could be seen as an effective use of quantum suicide (two species which want the same resources meet, flip a coin, and the loser kills themselves; this might have problems with enforcement) if every species invariably experiments with them before leaving their home planet.

On the post as a whole: I was going to say that since humans in real life don't use the anthropic principle in decision theory, that seems to indicate that applying it isn't optimal (if your goal is to maximize the number of world-branches with good outcomes), but realized that humans are able to observe other humans and what sort of things tend to kill them, along with hearing about those things from other humans when we grow up, so we're almost never having close calls with death frequently enough to need to apply the anthropic principle. If a human were exploring an unknown environment with unknown dangers by themselves, and tried to consider the anthropic principle... that would be pretty terrifying.

John Cramer wrote a novel with an anthropic explanation for the cancellation of the SSC:

http://www.amazon.com/Einsteins-Bridge-John-Cramer/dp/0380788314

Just to make sure I'm getting this right... this is sort of along the same lines of reasoning as quantum suicide?

It depends on the type of "fail" - quenches are not uncommon. And also their timing - the LHC is so big, and it's the first time it's been operated. Expect malfunctions.

But if it were tested for a few months before, to make sure the mechanics were all engineered right, etc., I guess it would only take a few (less than 10) instances of the LHC failing shortly before it was about to go big for me to seriously consider an anthropic explanation. If it's mechanically sound and still miraculously failing every time the dials get turned up high, it's likely enough to consider.

"After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?"

Not sure what is meant by that.

Another thought. Suppose a functioning LHC does in fact produce world-destroying scenarios. Would we see: A) an LHC with mechanical failures? or B) an LHC where all collisions happen except world-destroying ones? If B, would the LHC be giving us biased experimental results?

I'm confused by your last comment - what use would the LHC be in a global economic crisis or nuclear war? I don't suppose you mean something like "rig the LHC to activate if the market does not recover by date X according to measure Y, and then we will only be able to *observe* the scenario in which the market does recover" or something like that, do you?

I think the idea is you only run it if you're already indifferent to the world being destroyed?

IMHO if anthropics worked that way and if the LHC really were a world-killer, you'd find yourself in a world where we had the propensity not to build the LHC, not one where we happened not to build one due to a string of improbable coincidences.

Sorry, make that "happened not to build one that worked".

Say our prior odds for the LHC being a destroyer of worlds are a billion to one against. Then this hypothesis is at negative ninety decibels. Conditioned on the hypothesis being true, the probability of observing failure is near unity, because in the modal worlds where the world really is destroyed, we don't get to make an observation--or we won't get to remember it very long. Say that conditioned on the hypothesis being false, the probability of observing failure is one-fifth--this is very delicate equipment, yes? So each observation of failure gives us 10log(1/0.2), or about seven decibels of evidence for the hypothesis. We need ninety decibels of evidence to bring us to even odds; ninety divided by seven is about 12.86. So under these assumptions it takes thirteen failures before we believe that the LHC is a planet-killer.
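The arithmetic in this comment is easy to check with a short script (the one-in-a-billion prior and the one-fifth failure probability are the commenter's illustrative numbers, not measured values):

```python
import math

# Illustrative numbers from the comment above.
prior_odds = 1e-9                         # a billion to one against "LHC destroys the world"
prior_db = 10 * math.log10(prior_odds)    # -90 decibels

# Likelihood ratio per observed failure:
# P(failure | destroyer) ~ 1, P(failure | safe) = 1/5.
evidence_db = 10 * math.log10(1 / 0.2)    # about 7 decibels per failure

# Failures needed to bring the hypothesis up to even odds (0 dB).
failures_needed = math.ceil(-prior_db / evidence_db)
print(prior_db, round(evidence_db, 2), failures_needed)  # -90.0 6.99 13
```

With these numbers the thirteenth consecutive failure is the one that pushes the destroyer hypothesis past even odds, matching the comment's back-of-the-envelope figure.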

First collisions aren't scheduled to have happened yet, are they? In which case, the failure can't be seen as anthropic evidence yet, since we might as well be in a world where it hasn't failed, since such a world wouldn't have been destroyed yet in any case.

But if I'm not mistaken, even old failures will become evidence retrospectively once first collisions are overdue, since (assuming the unlikely case of the LHC actually being dangerous) all observers still alive would be in a world where the LHC failed; when it failed being irrelevant.

As much as the AP fascinates me, it does my head in. :)

Eliezer, it's a good question and a good thought experiment, except for the last sentence, which assumes a conservation of us as subjective conscious entities that the anthropic principle doesn't seem to me to endorse.

You can also add into your anthropic principle mix the odds that increasing numbers of experts think we can solve biological aging within our lifetime; or perhaps that should be called the solipsistic principle, which may be more relevant for us as persisting observers.

At the risk of asking the obvious:

Does the fact that no one has yet succeeded in constructing transhuman AI imply that doing so would necessarily wipe out humanity?

No.

Originally I was going to say yes to the last question, but after thinking over why a failure of the LHC now (before it would destroy Earth) doesn't let me conclude anything by the anthropic principle, I'm going to say no.

Imagine a world in which CERN promises to fire the Large Hadron Collider one week after a major terrorist attack. Consider ten representative Everett branches. All those branches will be terrorist-free for the next few years except number 10, which is destined to suffer a major terrorist attack on January 1, 2009.

On December 31, 2008, Yvains 1 through 10 are perfectly happy, because they live in a world without terrorist attacks.

On January 2, 2009, Yvains 1 through 9 are perfectly happy, because they still live in worlds without terrorist attacks. Yvain 10 is terrified and distraught, both because he just barely escaped a terrorist attack the day before, and because he's going to die in a few days when they fire the LHC.

On January 8, 2009, CERN fires the LHC, killing everyone in Everett branch 10.

Yvains 1 through 9 aren't any better off than they would've been otherwise. Their universe was never destined to have a terrorist attack, and it still hasn't had a terrorist attack. Nothing has changed.

Yvain 10 is worse off than he would have been otherwise. If not for the LHC, he would be recovering from a terrorist attack, which is bad but not apocalyptically so. Now he's dead. There's no sense in which his spirit has been averaged out over Yvains 1 through 9. He's just plain dead. That can hardly be considered an improvement.

Since it doesn't help anyone and it does kill a large number of people, I'd advise CERN against using LHC-powered anthropic tricks to "prevent" terrorism.

Unless you just consider it a Mouse That Roared scenario in which no one dares commit a terrorist attack under threat of global annihilation.

(Just read the book, it's well worth it.)

Blowing up the world in response to terrorist attack is like shooting yourself in the head when someone steps on your foot, to make subjective probability of your feet being stepped on lower.

Just realized that several sentences in my previous post make no sense because they assume Everett branches were separate before they actually split, but think the general point still holds.

Some of the factors leading to a terrorist attack succeeding or failing would be past the level of quantum uncertainty before the actual attack happens, so unless the terrorists are using bombs set up on the same principle as the trigger in Schrödinger's cat, the branches would have split already before the attack happened.

"This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction. However, a few weeks(?) from now, the 'Anthropic!' hypothesis will start to make sense, assuming it can make sense at all. (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)"

I can only see this statement making any sense if you think we should behave as if nature first randomly picked a value of a global cross-world time parameter, then randomly picked an observer (in any world) alive at that time, and that observer is you. (Actually I can't see it making any sense even then.) But that's not thinking 4D! Choosing a random observer in all of spacetime makes much more sense.

"Inevitably, many commenters said, 'Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!'"

"This remark may be somewhat premature"

Uh, isn't it actually nonsense? The anthropic principle is supposed to explain how you got lucky enough to exist at all, not how you got lucky enough to keep existing.

The anthropic principle strikes me as being largely too clever for its own good; at least, the people who think you can sort a list in linear time by randomizing the list, checking if it's sorted, and if it's not, destroying the world.

Strictly speaking, how does one randomize a list in linear time?

Even picking a uniformly-randomized list from all possible sequences is out of reach for us under most scenarios with reasonably long lists.

A uniform randomization may not be possible, but you can get an arbitrarily well randomized list in linear time. That is all that is needed for the purposes of the sorting. (You would just end up destroying the world 1 + (1 / arbitrarily large) as many times as with a uniform distribution.)

Algorithms like a modified Fisher-Yates shuffle run in linear time if you're just measuring reads and writes, but O(lg(n!)) > O(n) bits are required to specify which permutation is being chosen, so unless generating random numbers is free, shuffling is always O(n log n).

In real life, we don't use PRNGs with sufficiently long cycle times, so we usually get linear-time shuffles by discarding the vast majority of the potential orderings.
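For reference, a minimal Fisher-Yates shuffle looks like the sketch below. It performs O(n) swaps; the O(n log n) cost discussed above hides in the roughly log2(n!) random bits the index draws consume:

```python
import random

def fisher_yates(xs):
    """Shuffle a list in place with O(n) swaps (Fisher-Yates).

    Each randint(0, i) call consumes about log2(i+1) random bits,
    so the total entropy drawn is log2(n!), which is where the
    O(n log n) bit cost lives.
    """
    for i in range(len(xs) - 1, 0, -1):
        j = random.randint(0, i)  # uniform index in [0, i]
        xs[i], xs[j] = xs[j], xs[i]
    return xs

shuffled = fisher_yates(list(range(10)))
print(sorted(shuffled))  # [0, 1, 2, ..., 9]: a permutation of the input
```

With a PRNG whose state is smaller than log2(n!) bits, most of the n! permutations are simply unreachable, which is the point the comment above makes.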

That seems to be a rational decision for people with certain value systems. Specifically, those that don't care about their quantum measure. (Yes, that value system is at least as insane as Clippy's.)

"Quantum Sour Grapes" seems like a suitable label for the strategy. ;)

It just occurred to me that you would want to be REALLY careful that there wasn't a bug in either your shuffling or list checking code.

If you started using quantum suicide for all your problems eventually you'd make a mistake. :)

If I'm following the reasoning (if "reasoning" is in fact the right word, which I'm unconvinced of), you wouldn't make any world-destroying mistakes that it's possible for you not to make, since only the version of you that (by chance) made no such mistakes would survive.

And, obviously, there's no point in even trying to avoid world-destroying mistakes that it's not possible for you not to make.

"The anthropic principle strikes me as being largely too clever for its own good, at least, the people who think you can sort a list in linear time by randomizing the list, checking if it's sorted, and if it's not, destroying the world."

Maybe it's stupid and evil, but what stops it from actually working?

I think your LHC question is closer to, "How many times does a coin have to come up heads before you believe a tails would destroy the world?" Which, in my opinion, makes no sense.

I bet the terrorists would target the LHC itself, so after the terrorist attack there's nothing left to turn on.

Oh God I need to read Eliezer's posts more carefully, since my last comment was totally redundant.

As others have noted, it seems straightforward to use Bayes' rule to decide when to believe how much that LHC malfunctions were selection effects - the key question is the prior. As to the last question, even if I was confident I lived in an infinite universe and so there was always some version of me that lived somewhere, I still wouldn't want to kill off most versions of me. So all else equal I'd never want to fire the LHC if I believed doing so killed that version of me.

Brilliant post.

I almost want it to fail a few more times so that the press latch on to this idea. Imagine journalists trying to (a) understand and (b) articulate the anthropic principle across many worlds. Would be hilarious.

Actually, failures of the LHC should never have any effect at all on our estimate of the probability that if it did not fail it would destroy Earth.

This is because the ex ante probability of failure of the LHC is independent of whether or not if it turned on it would destroy Earth. A simple application of Bayes' rule.

Now, the reason you come to a wrong conclusion is not because you wrongly applied the anthropic principle, but because you failed to apply it (or applied it selectively). You realized that the probability of failure given survival is higher under the hypothesis that the LHC would destroy the Earth if it did not fail, but you didn't take into account the fact that the probability of survival is itself lower under that hypothesis (i.e. the anthropic principle).

To clarify, I mean failures should not lead to a change of probability away from the prior probability; of course they do result in a different probability estimate than if the LHC succeeded and we survived.

If: (The probability that the LHC's design is flawed and because of this flaw the LHC will never work) is much, much greater than (the probability that the LHC would destroy us if it were to function properly), then regardless of how many times the LHC failed it would never be the case that we should give any significant weight to the anthropic explanation.

Similarly, if the probability that someone is deliberately sabotaging the LHC is relatively high then we should also ignore the anthropic explanation.

My prior probability for the existence of a secret and powerful crackpot group willing to sabotage the LHC to prevent it from "destroying the world" is larger than my prior probabilty for the LHC-actually-destroying-the-world scenarios being true, so after many mechanical failures I would rather believe the first hypothesis than the second one.

Simon:

"the ex ante probability of failure of the LHC is independent of whether or not if it turned on it would destroy Earth."

But if the LHC were Earth-fatal, the probability of *observing* a world in which the LHC was brought fully online would be zero.

(Applying anthropic reasoning here probably makes more sense if you assume MWI, though I suspect there are other big-world cosmologies where the logic could also work.)

Allan, I am of course aware of that (actually, it would probably take time, but even if the annihilation were instantaneous the argument would not be affected).

There are 4 possibilities:

1. The LHC would destroy Earth, but it fails to operate
2. The LHC destroys Earth
3. The LHC would not destroy Earth, but it fails anyway
4. The LHC works and does not destroy Earth

The fact that conditional on survival possibility 2 must not have happened has no effect on the relative probabilities of possibility 1 and possibility 3.

But the destruction of the Earth in the case of the creation of a black hole or a strangelet will not be instantaneous, like in a YouTube movie.

A black hole will grow slowly, but exponentially. By some assumptions it could take 27 years to eat the Earth. So we will have time to understand our mistake and to suffer from it. The main harmful effect of the black hole will be its energy release. And if the black hole is in the centre of the Earth, this energy will go out as violent volcanic eruptions.

Because of the exponential growth of the black hole, the biggest part of the energy will be released in the last years of its existence.

It means that in the first years we might not even notice that a black hole had been created.

It also means that a black hole may already have been created in the previous collider, RHIC, but we still do not see its manifestations.

So, this anthropic principle would work only in the case of vacuum transition.

But we should not be afraid of it, because in the Multiverse we will always survive in some worlds.

And continued failures of the LHC would prove that this way of immortality is valid and we should not worry about existential risks at all.

"After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?"After observing 100 failures in a row I would expect that a failure would occur after the next attempt to switch it on too. So it doesn't seem as a reliable means to prevent terrorism or economic crash even if anthropic multi-world "ideology" were true.

On the other hand, if somebody were able to show that the amplitude of LHC's unexpected failure for technical reasons was significantly lower than the amplitude of terrorist-free future...

Incorrect reasoning; every branching compatible with sentient organisms contains sentient organisms monitoring its conditions.

The organisms that are in branchings in which LHC facilities were built perceive themselves to be in such a world, no matter how improbable it is. It doesn't matter if it's quite unlikely for you to win a lottery -- if you do win a lottery, you'll eventually accumulate enough data to conclude that's precisely what's happened.

"BH will grow slowly, but exponentially. By some assumptions it could take 27 years to eat the Earth. So we will have time to understand our mistake and to suffer from it."

I am curious about these assumptions. A BH with the mass of the whole Earth has a Schwarzschild radius of about 1 cm. At the start the BH should be much lighter, so it's not clear to me how this BH, sitting in the centre of the Earth, could eat anything.

simon,

Actually, I think it might (though I'm obviously open to correction) if you take the anthropic principle as a given (which I do not).

One thing you're missing is that there are two events here, call them A and B:

A. LHC would destroy earth
B. LHC works

So the events, which are NOT independent, should look more like:

1. The LHC would destroy earth, and it fails to operate
2. The LHC would destroy earth, and it works
3. The LHC would not destroy Earth, and it fails to operate
4. The LHC would not destroy Earth, and it works

Outcome 2 is "closer" to outcome 1. More precisely, evidence that 2 occured would increase our probability of both A and B, which would therefore decrease the probability of event 3 relative to event 1.

The fact that 2 is invisible means that we can't tell when it has happened. But there is a chance that it is happening that would increase with each subsequent failure, as Eliezer noted.

This is far from formal but I hope I'm getting the gist across.

Robinson, I could try to nitpick all the things wrong with your post, but it's probably better to try to guess at what is leading your intuition (and the intuition of others) astray.

Here's what I think you think:

1. Either the laws of physics are such that the LHC would destroy the world, or not.
2. Given our survival, it is guaranteed that the LHC failed if the universe is such that it would destroy the world, whereas if the universe is not like that, failure of the LHC is not any more likely than one would expect normally.
3. Thus, failure of the LHC is evidence for the laws of physics being such that the LHC would destroy the world.

This line of argument fails because when you condition on survival, you need to take into account the different probabilities of survival given the different possibilities for the laws of the universe. As an analogy, imagine a quantum suicide apparatus. The apparatus has a 1/2 chance of killing you each time you run it and you run it 1000 times. But, while the apparatus is very reliable, it has a one in a googol chance of being broken in such a way that every time it will be guaranteed not to kill you, but appear to have operated successfully and by chance not killed you. Then, if you survive running it 1000 times, the chance of it being broken in that way is over a googol squared times more likely than the chance of it having operated successfully.
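The "googol squared" claim in this analogy checks out; with exact rational arithmetic, using the comment's own numbers:

```python
from fractions import Fraction

p_broken = Fraction(1, 10**100)              # one-in-a-googol prior that the apparatus is broken
p_survive_if_broken = Fraction(1)            # a broken apparatus never kills you
p_survive_if_working = Fraction(1, 2**1000)  # 1/2 chance of death per run, 1000 runs

# Posterior odds (broken : working) after surviving all 1000 runs.
posterior_odds = (p_broken * p_survive_if_broken) / (
    (1 - p_broken) * p_survive_if_working
)

googol_squared = 10**200
print(posterior_odds > googol_squared)  # True: the odds are about 1.07e201
```

Surviving 1000 coin flips is so improbable on the "working apparatus" hypothesis that even a one-in-a-googol alternative overwhelmingly wins, which is the comment's point about conditioning on survival correctly.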

Here's what that means for improving intuition: one should feel surprised at surviving a quantum suicide experiment, instead of thinking "well, of course I would experience survival".

Finally, a note about the anthropic principle: it is simply the application of normal probability theory to situations where there are observer selection effects, not a special separate rule.

I'm with Brian Jaress, who said, 'I think your LHC question is closer to, "How many times does a coin have to come up heads before you believe a tails would destroy the world?"' OTOH, I have a very poor head for probabilities, Bayesian or otherwise, and in fact the Monty Hall thing still makes my brain hurt. So really, I make a lousy "me too" here.

That said: Could someone explain why repeated mechanical failures of the LHC should in any way imply the likelihood of it destroying the world, thus invoking the anthropic principle? Given the crowd, I'm assuming there's more to it than "OMG technology is scary and it doesn't even work right!", but I'm not seeing it.

Okay, it scares me when I realize that I've been getting probability theory wrong, even though I seemed to be on perfectly firm ground. But I'm finding that it's even more scary that even our hosts and most commenters here seem to be getting it backwards -- at least Robin; given that the last question in the post seems so obviously wrong for the reasons pointed out already, I'm starting to wonder whether the post is meant as a test of reasoning about probabilities, leading up to a post about how Nature Does Not Grade You On A Curve (grumble :)). Thanks to simon for pointing out the flaw -- I didn't see it myself.

Since simon's explanation is apparently failing to convince most other people here, let me try my own:

As Robinson points out, there are two underlying events. (A): The laws of physics either mean that a working LHC would destroy the world, or that it wouldn't; let p_destroyer denote our subjective prior probability that it would destroy the world. (B): Either something random happens that prevents the LHC from working, or it doesn't. There is an objective Born probability here that a randomly chosen Everett branch of future Earth at date X will have had a string of failures that kept the LHC from working. We should really consider a subjective probability distribution over these objective probabilities, but let us just consider the resulting subjective probability that a randomly chosen Everett branch will not have had a string of failures preventing LHC from working -- call it p_works.

Now, at date X, in a randomly chosen Everett branch, there are four possibilities:

1. The LHC would destroy Earth, and it fails to operate; p = p_destroyer * (1 - p_works).
2. The LHC would destroy Earth, and it works; p = p_destroyer * p_works.
3. The LHC would not destroy Earth, and it fails to operate; p = (1 - p_destroyer) * (1 - p_works).
4. The LHC would not destroy Earth, and it works; p = (1 - p_destroyer) * p_works.

Now, we cannot directly observe whether the LHC *would* destroy Earth if turned on; what we actually can "observe" in a randomly chosen Everett branch at date X is which of the following three events is true:

i. The LHC is turned on and working fine. (Aka "case 4")
ii. The LHC is not turned on, because there has been a string of random failures. (Aka "case 1 OR case 3")
iii. Earth is gone. (Aka "case 2")

Of course, in case iii aka 2, we are not actually around to observe -- thus the scare quotes around "observe."

simon's argument is that if we observe case ii aka "1 OR 3" aka "a string of random failures has prevented the LHC from working up to date X", then our posterior probability of "The LHC would destroy Earth if turned on" is equal to our prior probability of that proposition (i.e., to p_destroyer):

p(case 1 OR case 3) = p(case 1) + p(case 3)
= p_destroyer * (1 - p_works) + (1 - p_destroyer) * (1 - p_works)
= 1 - p_works

p(case 1 | the LHC would destroy Earth)
= p(the LHC would destroy Earth AND it fails to operate | the LHC would destroy Earth)
= 1 - p_works

p(case 3 | the LHC would destroy Earth)
= p(the LHC would NOT destroy Earth AND it fails to operate | the LHC WOULD destroy Earth)
= 0

p(case 1 OR case 3 | the LHC would destroy Earth)
= p(case 1 | the LHC would destroy Earth) + p(case 3 | the LHC would destroy Earth)
= (1 - p_works) + 0
= 1 - p_works

p(the LHC would destroy Earth | case 1 OR case 3)
= p(case 1 OR case 3 | the LHC would destroy Earth) * p(the LHC would destroy Earth) / p(case 1 OR case 3)
= (1 - p_works) * p_destroyer / (1 - p_works)
= p_destroyer
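A quick numeric check of the algebra above, with arbitrary made-up values for p_destroyer and p_works:

```python
p_destroyer = 1e-9   # arbitrary prior that the LHC would destroy Earth
p_works = 0.8        # arbitrary chance a given branch has no blocking failures

# Probability of observing "a string of failures" (case 1 OR case 3):
p_fail = p_destroyer * (1 - p_works) + (1 - p_destroyer) * (1 - p_works)

# Likelihood of that observation given the LHC is a destroyer (case 1):
p_fail_given_destroyer = 1 - p_works

# Bayes' rule: the (1 - p_works) factors cancel, so the posterior
# equals the prior, as the derivation concludes.
posterior = p_fail_given_destroyer * p_destroyer / p_fail
print(abs(posterior - p_destroyer) < 1e-20)  # True
```

Whatever values you plug in, the failure observation washes out of the posterior; that is the whole argument in one line of arithmetic.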

- Benja

The intuition behind the math: If the LHC *would* destroy the world, then on date X, a very small number of Everett branches of Earth have the LHC non-working due to a string of random failures, and most Everett branches have Earth munched up into a black hole. If the LHC would *not* destroy the world, a very small number of Everett branches of Earth have the LHC non-working due to a string of random failures, and most Everett branches have the LHC happily chugging ahead. The very small number of Everett branches that have the LHC non-working due to a string of random failures is the same in both cases.

Thus, if all you know is that *you* are in an Everett branch in which the LHC is non-working due to a string of random failures, you have no information about whether the *other* Everett branches have the LHC happily chugging ahead, or dead.

I'm going to try another explanation that I hope isn't too redundant with Benja's.

Consider the events

W = the LHC would destroy Earth
F = the LHC fails to operate
S = we survive (= F OR not W)

We want to know P(W|F) or P(W|F,S), so let's apply Bayes.

First thing to note is that since F => S, we have P(W|F) = P(W|F,S), so we can just work out P(W|F)

Bayes:

P(W|F) = P(F|W)P(W)/P(F)

Note that none of these probabilities are conditional on survival. So unless in the absence of any selection effects the probability of failure still depends on whether the LHC would destroy Earth, P(F|W) = P(F), and thus P(W|F) = P(W).

(I suppose one could argue that a failure could be caused by a new law of physics that would also lead the LHC to destroy the Earth, but that isn't what is being argued here - at least so I think; my apologies to anyone who is arguing that)

In effect what Eliezer and many commenters are doing is substituting P(F|W,S) for P(F|W). These probabilities are not the same and so this substitution is illegitimate.

Benja, I also think of it that way intuitively. I would like to add though that it doesn't really matter whether you have branches or just a single nondeterministic world - Bayes' theorem applies the same either way.

Benja: Good explanation! Intuitively, it seems to me that your argument holds if there are Tegmark IV branches with different physical laws, but not if whether the LHC would destroy Earth is fixed across the entire multiverse. (Only in the latter case, if it would destroy the Earth, the objective frequency of observations of failure - among observations, period - would be 1.)

Benja, I'm not really smart enough to parse the maths, but I can comment on the intuition:

"The very small number of Everett branches that have the LHC non-working due to a string of random failures is the same in both cases [of LHC dangerous vs. LHC safe]"

I see that, but if the LHC is dangerous then you can only find yourself in a world where lots of failures have occurred, whereas if the LHC is safe, it's extremely unlikely that you'll find yourself in such a world.

"Thus, if all you know is that you are in an Everett branch in which the LHC is non-working due to a string of random failures, you have no information about whether the other Everett branches have the LHC happily chugging ahead, or dead."

The intuition on my side is that, if you consider yourself a random observer, it's amazing that you should find yourself in one of the extremely few worlds where the LHC keeps failing, unless the LHC is dangerous, in which case all observers are in such a world.

(I would like to stress for posterity that I don't believe the LHC is dangerous.)

Simon's last comment is well said, and I agree with everything in it. Good job, Simon and Benja.

Although the trickiest question was answered by Simon and Benja, Eliezer asked a couple of other questions, and Yvain gave a correct and very clear answer to the final question.

Or so it seems to me.

You can (and should) be surprised that the device failed. You should not be surprised that you survived -- it's the only way you can feel anything at all.

You always survive.

Simon: As I say above, I'm out of my league when it comes to actual probabilities and maths, but:

> P(W|F) = P(F|W)P(W)/P(F)
>
> Note that none of these probabilities are conditional on survival.

Is that correct? If the LHC is dangerous and MWI is true, then the probability of observing failure is 1, since that's the only thing that gets observed.

An analogy I would give is:

You're created by God, who tells you that he has just created 10 people who are each in a red room, and depending on a coin flip God made, either 0 or 10,000,000 people who are each in a blue room. You are one of these people. You turn the lights on and see that you're one of the 10 people in a red room. Don't you immediately conclude that there are almost certainly only 10 people, with nobody in a blue room?

The red rooms represent Everett worlds where the LHC miraculously and repeatedly fails. The blue rooms represent Everett worlds where the LHC works. God's coin flip is whether or not the LHC is dangerous.

i.e. You conclude that there are no people in worlds where the LHC works (blue rooms), because they're all dead. The reasoning still works even if the coin is biased, as long as it's not too biased.
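Allan's red-room/blue-room analogy is a direct application of Bayes' rule, and can be checked numerically. The sketch below treats you as a uniform random sample of all created people — which is exactly the self-indication-style assumption under dispute in this thread — and uses the numbers from the analogy:

```python
from fractions import Fraction

# God flips a fair coin:
#   heads -> creates only the 10 people in red rooms
#   tails -> creates the 10 red-room people plus 10,000,000 in blue rooms
# You observe that you are in a red room.
p_heads = Fraction(1, 2)

# Probability of the observation "I am in a red room" under each hypothesis,
# treating yourself as a random sample of all created people:
p_red_given_heads = Fraction(10, 10)               # everyone is in a red room
p_red_given_tails = Fraction(10, 10 + 10_000_000)  # almost nobody is

posterior_heads = (p_red_given_heads * p_heads) / (
    p_red_given_heads * p_heads + p_red_given_tails * (1 - p_heads)
)
print(float(posterior_heads))  # very close to 1: almost certainly no blue rooms
```

Under these assumptions the posterior that there are no blue-room people is overwhelming, matching Allan's conclusion; and as he notes, a moderately biased coin changes the numbers but not the verdict.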

If you're conducting an experiment to test a hypothesis, the first thing you have to do is set up the apparatus. If you don't set up the apparatus so it produces data, you haven't tested anything. Just like if you try to take a urine sample, and the subject can't pee. The experiment has failed to produce data, not the same as the data failing to prove the hypothesis.

With respect for your diligent effort and argument, nonetheless: Fail.

F => S -!-> P(X|F) = P(X|F,S)

(Had your argument above been correct, the probabilities *would* have been the same.)

Conditioning on survival, or more precisely, the (continued?) existence of "observers", is just what anthropic reasoning is all about. Hence the controversy about anthropic reasoning.

To understand the final question in the post, suppose that you hooked yourself up to a machine that would instantly and painlessly kill you if a quantum coin came up tails. After one hundred heads, wouldn't you start to believe in the Quantum Theory of Immortality? But if so, wouldn't you be tempted to use it to win the lottery? ...that's where the question comes from, anyway - never mind the question of what exactly is believed.

See also: Outcome Pump

I retract my endorsement of Simon's last comment. Simon writes that S == (F or not W). False: S ==> (F or not W), but the converse does not hold (because even if F or not W, we could all be killed by, e.g., a giant comet). Moreover, Simon writes that F ==> S. False (for the same reason). Finally, Simon writes, "Note that none of these probabilities are conditional on survival," and concludes from that that there are no selection effects. But the fact that a true equation does not contain any explicit reference to S does not mean that any of the propositions mentioned in the equation are independent or conditionally independent of S. In other words, we have established neither P(W|F) == P(W|F,S) nor P(F|W) == P(F|W,S) nor P(W) == P(W|S) nor P(F) == P(F|S), which makes me wonder how we can conclude the absence of an observational selection effect.

simon, that's right, of course. The reason I'm dragging branches into it is that for the (strong) anthropic principle to apply, we would need some kind of branching -- but in this case, the principle doesn't apply [unless you and I are both wrong], and the math works the same with or without branching.

Eliezer, huh? Surely if F => S, then F is the same event as (F /\ S). So P(X | F) = P(X | F, S). Unless P(X | F, S) means something different from P(X | F and S)?

Allan, you are right that if the LHC would destroy the world, and you're a surviving observer, you will find yourself in a branch where the LHC has failed, and that if the LHC would not destroy the world and you're a surviving observer, this is much less likely. But contrary to mostly everybody's naive intuition, it *doesn't* follow that if you're a surviving observer, the LHC has probably failed.

Suppose that out of 1000 women who participate in routine screening, 10 have breast cancer. Suppose that out of 10 women who have breast cancer, 9 have positive mammographies. Suppose that out of 990 women who do not have breast cancer, 81 have a positive mammography.

If you *do* have breast cancer, getting a positive mammography isn't very surprising (90% probability). If you do *not* have breast cancer, getting a positive mammography is *quite* surprising (less than 10% probability).

But suppose that all you know is that you've got a positive mammography. Should you assume that you have breast cancer? Well, out of 90 women who get a positive mammography, 9 have breast cancer (10%). 81 do not have breast cancer (90%). So after getting a positive mammography, the probability that you have breast cancer is 10%...

...which is the same as before taking the test.

While I'm happy to have had the confidence of Richard, I thought my last comment could use a little improvement.

What we want to know is P(W|F,S)

As I pointed out F=> S so P(W|F,S) = P(W|F)

We can legitimately calculate P(W|F,S) in at least two ways:

1. P(W|F,S) = P(W|F) = P(F|W)P(W)/P(F) <- the easy way

2. P(W|F,S) = P(F|W,S)P(W|S)/P(F|S) <- harder, but still works

there are also ways you can get it wrong, such as:

3. P(W|F,S) != P(F|W,S)P(W)/P(F) <- what I said other people were doing last post

4. P(W|F,S) != P(F|W,S)P(W)/P(F|S) <- what other people are probably actually doing

In my first comment in this thread, I said it was a simple application of Bayes' rule (method 1) but then said that Eliezer's failure was not to apply the anthropic principle enough (ie I told him to update from method 4 to method 2). Sorry if anyone was confused by that or by subsequent posts where I did not make that clear.
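Simon's four methods can be sanity-checked on a toy joint distribution built from his simplification S == (F or not W). The prior w and failure probability z below are illustrative assumptions, not values from the thread; the point is only that methods 1 and 2 agree, while method 4 does not even yield a valid probability:

```python
from fractions import Fraction

# W = "the LHC would destroy the world", F = "the LHC fails", S = "we survive".
# Simplification from the thread: S holds iff (F or not W).
w = Fraction(1, 2)    # illustrative prior P(W)
z = Fraction(1, 100)  # illustrative P(F), independent of W

# Joint probabilities over (W, F).
joint = {
    (True, True):   w * z,                # W, F: survive
    (True, False):  w * (1 - z),          # W, not F: world destroyed
    (False, True):  (1 - w) * z,          # not W, F: survive
    (False, False): (1 - w) * (1 - z),    # not W, not F: survive
}

def P(pred):
    return sum(v for k, v in joint.items() if pred(*k))

S = lambda W, F: F or not W
pS = P(lambda W, F: S(W, F))

# Method 1: P(W|F,S) = P(W|F), since F implies S.
m1 = P(lambda W, F: W and F) / P(lambda W, F: F)

# Method 2: P(W|F,S) = P(F|W,S) P(W|S) / P(F|S).
p_F_given_WS = P(lambda W, F: W and F) / P(lambda W, F: W and S(W, F))
m2 = p_F_given_WS * (P(lambda W, F: W and S(W, F)) / pS) \
     / (P(lambda W, F: F and S(W, F)) / pS)

# Method 4 (the error): P(F|W,S) P(W) / P(F|S) -- mixes terms conditioned
# on S with terms that are not.
m4 = p_F_given_WS * w / (P(lambda W, F: F and S(W, F)) / pS)

print(m1, m2, m4)  # m1 == m2 == P(W); m4 comes out greater than 1
```

With these numbers, methods 1 and 2 both give P(W|F,S) = P(W) = 1/2 (failure carries no information about W, as Simon argues), while the method-4 expression evaluates to more than 1 — it is not a probability at all.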

Allan: your intuition is wrong here too. Notice that if Zeus were to have independently created a zillion people in a green room, it would change your estimate of the probability, despite being completely unrelated.

Eliezer:

> F => S -!-> P(X|F) = P(X|F,S)

All right, give me an example.

And yeah, anthropic reasoning is all about conditioning on survival, but you have to do it consistently. Conditioning on survival in some terms but not others = fail.

Richard: your first criticism has too low an effect on the probability to be significant. I was of course aware that humanity could be wiped out in other ways but incorrectly assumed that commenters here would be smart enough to understand that it was a justifiable simplification. The second is wrong: the probabilities without conditioning on S are "God's eye view" probabilities, and really are independent of selection effects.

Allan, oh ****, the elementary math in my previous comment is completely wrong. (In the scenario I gave, the probability that you have breast cancer is 1%, not 10%, before taking the test.) My argument doesn't even approximately work as given: if having breast cancer makes it more likely that you get a positive mammography, then *indeed* getting a positive mammography must make it more likely that you have breast cancer. Sorry!

(I'm still convinced that my argument re the LHC is correct, but I realize that I'm just looking stupid right now, so I'll just shut up for now :-))
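For posterity, Benja's correction can be checked directly from the numbers in the mammography example (10 of 1000 women have cancer; 9 of those 10, and 81 of the 990 healthy women, test positive):

```python
from fractions import Fraction

total = 1000
with_cancer = 10        # so the prior is 10/1000 = 1%
pos_with_cancer = 9     # true positives
pos_healthy = 81        # false positives among the 990 healthy women

prior = Fraction(with_cancer, total)
# Of the 90 positive results, 9 are from women with cancer.
posterior = Fraction(pos_with_cancer, pos_with_cancer + pos_healthy)

print(prior, posterior)  # 1/100 before the test, 1/10 after a positive result
```

So the posterior is indeed ten times the prior: a positive mammography *does* make breast cancer more likely, which is why the original "same as before taking the test" step had to be retracted.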

Sorry Richard, well of course they aren't necessarily independent. I wasn't quite sure what you were criticising. But I pointed out already that, for example, a new physical law might in principle both cause the LHC to fail and cause it to destroy the world if it did not fail. But I pointed out that this was not what people were arguing, and assuming that such a relation is not the case then the failure of the LHC provides no information about the chance that a success would destroy the world. (And a small relation would lead to a small amount of information, etc.)

Oops, I fail! I thought F >= S meant "F is larger than S". But looking at the definitions of terms, Fail >= Survival must mean "Fail subset_of Survival". (I do protest that this is an odd symbol to use.)

Okay, looking back at the original argument, and going back to definitions...

If you've got two sets of universes side-by-side, one where the LHC destroys the world, and one where it doesn't, then indeed observing a long string of failures doesn't help tell you which universe you're in. However, after a while, nearly all the observers will be concentrated into the non-dangerous universe. In other words, if you're going to start running the LHC, then, conditioning on your own survival, you are nearly certain to be in the non-dangerous universe. Then further conditioning on the long string of failures, you are equally likely to be in either universe. If you start out by conditioning on the long string of failures, then conditioning on your own survival indeed doesn't tell you anything more.

But under anthropic reasoning, the argument doesn't play out like this; the way anthropic reasoning works, particularly under the Quantum Suicide or Quantum Immortality versions, is something along the lines of, "You are never surprised by your own survival".

From the above, we can see that we need something like:

Initial probability of Danger: 50%

Initial probability of subjective Survival: 100%

Probability of Failure given Danger and Survival: 100%

Probability of Failure given ~Danger and Survival: 1%

Probability of Danger given Survival and Failure: ~1%

So to comment through Simon's logic vs. anthropic logic step by step:

still holds technically true

Still technically true; but once you condition on survival, as anthropics does in effect require, then P(Fail|Danger) is very high.

Here we depart from anthropic reasoning. As you might expect, quantum suicide says that P(Fail|Danger) != P(Fail). That's the whole point of raising the possibility of, "*given* that the LHC might destroy the world, *how unusual* that it seems to have failed 50 times in a row"...

...but as stated originally, conditioning on the existence of "observers" is what anthropics is all about. It's not that we're substituting, but just that all our calculations were conditioned on survival in the first place.

Eliezer, I used "=>" (intending logical implication), not ">=".

I would suggest you read my post above on this second page, and see if that changes your mind.

Also, in a previous post in this thread I argued that one should be surprised by externally improbable survival, at least in the sense that it should make one increase the probability assigned to alternative explanations of the world that do not make survival so unlikely.

Zis would seem to explain it.

(I use -> to indicate logical implication and => to indicate a step in a proof, or otherwise implication outside the formal system - I do understand this to be conventional.)

Not particularly. I use 4 but with P(W|S) = P(W) which renders it valid. (We're not talking about two side-by-side universes, but about prior probabilities on physical law plus a presumption of survival.)

This could only reflect uncertainty that anthropic reasoning was valid. If you were certain anthropic reasoning were valid (I'm sure not!) then you would make no such update. In practice, after surviving a few hundred rounds of quantum suicide, would further survivals really seem to call for alternative explanations?

After surviving a few hundred rounds of quantum suicide the next round will probably kill you.

Are you familiar with the story of the man who got the winning horse race picks in the mail the day before the race was run? Six times in a row his mysterious benefactor was right, even correctly calling a victory for a horse with forty-to-one odds. Now he gets an envelope in the mail from the same mysterious benefactor asking for $1,000 in exchange for the next week's picks. Are you saying he should take the deal and clean up?

> Not particularly. I use 4 but with P(W|S) = P(W) which renders it valid. (We're not talking about two side-by-side universes, but about prior probabilities on physical law plus a presumption of survival.)

You mean you use method 2. Except you don't, or you would come to the same conclusion that I do. Are you claiming that P(W|S) = P(W)? Ok, I suspect you may be applying Nick Bostrom's version of observer selection: hold the probability of each possible version of the universe fixed independent of the number of observers, then divide that probability equally amongst the observers. Well, that approach is BS whenever the number of observers differs between possible universes, since if you imagine aliens in the universe but causally separate, the probabilities would depend on their existence.

Also, does it really make sense to you, intuitively, that you should get a different result given two actually existing universes compared to two possible universes?

> This could only reflect uncertainty that anthropic reasoning was valid. If you were certain anthropic reasoning were valid (I'm sure not!) then you would make no such update. In practice, after surviving a few hundred rounds of quantum suicide, would further survivals really seem to call for alternative explanations?

As I pointed out earlier, if there was even a tiny chance of the machine being broken in such a way as to appear to be working, that probability would dominate sooner or later.

One last thing: if you really believe that annihilational events are irrelevant, please do not produce any GAIs until you come to your senses.

Whoops, I didn't notice that you did specifically claim that P(W|S)=P(W).

Do you arrive at this incorrect claim via Bostrom's approach, or another one?

This is a subject I've long been meaning to give some thought too, but at the moment I'm pretty swamped - hope to get back to it when I have more time.

Simon, pretty much Bostrom's approach. Self-Sampling without Self-Indication. I know it's wrong but I don't have any better approach to take.

Why do you reject self-indication? As far as I can recall the only argument Bostrom gave against it was that he found it unintuitive that universes with many observers should be more likely, with absolutely no justification as to why one would expect that intuition to reflect reality. That's a very poor argument considering the severe problems you get without it.

I suppose you might be worried about universes with many unmangled worlds being made more likely, but I don't see what makes that bullet so hard to bite either.

Wasn't one of the conclusions we arrived at in the quantum mechanics sequence that "observer" was a nonsense, mystical word?

I might add, for the benefit of others, that self-sampling forbids playing favourites among which observers to believe that you are in a single universe (beyond what is actually justified by the evidence available), and self-indication forbids the same across possible universes.

Nominull: It's a bad habit of some people to say that reality depends on, or is relative to observers in some way. But even though observers are not a special part of reality, we are observers and the data about the universe that we have is the experience of observers, not an outside view of the universe. So long as each universe has no more than one observer with your experience, you can take your experience as objective evidence that you live in a universe with one such observer instead of zero (and with this evidence to work with, you don't need to talk about observers). But it's difficult to avoid talking about observers when a universe might have multiple observers with the same subjective experience.

Simon, I think that the previous comment you refer to was the smartest thing anyone has said in this comment section. Instead of continuing to point out the things you got right, I hope you do not mind if I point out something you got wrong, namely, your claim that ignoring other existential risks "was a justifiable simplification."

It is not a justifiable simplification. A satisfactory answer to the question you were trying to answer should remain satisfactory even if other existential risks (e.g., a giant comet) are high. If other existential risks were high, would you just throw up your hands and say that the question you were trying to answer is unanswerable?

Again, I think your contributions to this comment thread were better than anyone else's. I hope you continue to contribute here.

> Allan: your intuition is wrong here too. Notice that if Zeus were to have independently created a zillion people in a green room, it would change your estimate of the probability, despite being completely unrelated.

I don't see how, unless you're told you could also be one of those people.

Benja:

> Allan, you are right that if the LHC would destroy the world, and you're a surviving observer, you will find yourself in a branch where LHC has failed, and that if the LHC would not destroy the world and you're a surviving observer, this is much less likely. But contrary to mostly everybody's naive intuition, it doesn't follow that if you're a surviving observer, LHC has probably failed.

I don't believe that's what I've been saying; the question is whether the LHC failing is evidence for the LHC being dangerous, not whether surviving is evidence for the LHC having failed.

Richard, obviously if F does not imply S due to other dangers, then one must use method 2:

P(W|F,S) = P(F|W,S)P(W|S)/P(F|S)

Let's do the math.

A comet is going to annihilate us with a probability of (1-x) (outside view) if the LHC would not destroy the Earth, but if the LHC would destroy the Earth, the probability is (1-y) (I put this change in so that it would actually have an effect on the final probability)

The LHC has an outside-view probability of failure of z, whether or not W is true

The universe has a prior probability w of being such that the LHC, if it does not fail, will annihilate us.

Then:

P(F|W,S) = 1

P(F|S) = (ywz + x(1-w)z)/(ywz + x(1-w)z + x(1-w)(1-z))

P(W|S) = ywz/(ywz + x(1-w)z + x(1-w)(1-z))

so, P(W|F,S) = ywz/(ywz + x(1-w)z) = yw/(yw + x(1-w))

I leave it as an exercise to the reader to show that there is no change in P(W|F,S) if the chance of the comet hitting depends on whether or not the LHC fails (only the relative probability of outcomes given failure matters).
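Simon's closed form can also be checked numerically by enumerating his 8 elementary outcomes over (W, F, comet). The parameter values below are illustrative assumptions, chosen only so that every branch has nonzero probability:

```python
from fractions import Fraction
from itertools import product

# Illustrative values for Simon's symbols (assumptions, not from the thread):
w = Fraction(1, 1000)  # prior P(W): the LHC would destroy the world
x = Fraction(9, 10)    # P(comet misses | not W)
y = Fraction(8, 10)    # P(comet misses | W)
z = Fraction(1, 20)    # outside-view P(F): the LHC fails, independent of W

# Enumerate the 8 elementary outcomes (W, F, comet misses).
p_WFS = p_FS = Fraction(0)
for W, F, miss in product([True, False], repeat=3):
    pr = (w if W else 1 - w) * (z if F else 1 - z)
    pr *= (y if W else x) if miss else 1 - (y if W else x)
    # We survive iff the comet misses and the LHC did not destroy the world.
    if miss and (F or not W) and F:
        p_FS += pr
        if W:
            p_WFS += pr

direct = p_WFS / p_FS                   # P(W|F,S) by enumeration
closed = y * w / (y * w + x * (1 - w))  # Simon's closed form
print(direct == closed, float(direct))  # True, and a tiny probability
```

With exact rational arithmetic the enumeration agrees with yw/(yw + x(1-w)) term for term, so the survival-conditioned update really does reduce to the comet-weighted prior, as claimed.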

Really though Richard, you should not have assumed in the first place that I was not capable of doing the math. In the future, don't expect me to bother with a demonstration.

Allan: you're right, I should have thought that through more carefully. It doesn't make your interpretation correct though...

I have really already spent much more time here today than I should have...

Err... I actually did the math a silly way, by writing out a table of elementary outcomes... not that that's silly itself, but it's silly to get input from the table to apply to Bayes' theorem instead of just reading off the answer. Not that it's incorrect of course.

And by elementary I mean the 8 different ways W, F, and the comet hit/non hit can turn out.

Allan:

> I don't believe that's what I've been saying; the question is whether the LHC failing is evidence for the LHC being dangerous, not whether surviving is evidence for the LHC having failed.

I was trying to restate in different terms the following argument for failure to be considered evidence:

For "observer" I substituted "surviving observer," because when doing the math I find it more helpful to consider all potential observers and then say that some of them are dead and thus can't observe anything. So my "surviving observer" is the same as your "observer," right?

So I read your argument as:

> *If* the LHC is benign, and you're a random (surviving) observer, then it's amazing if (i.e., there is a low probability that) you find yourself in one of the few worlds where the LHC keeps failing. *If* the LHC is dangerous, and you're a random observer, then it's non-amazing (i.e., there is a high probability that) you find yourself in a world where the LHC keeps failing. Therefore, if you're a random observer, and you find yourself in a world where the LHC keeps failing, then the LHC is probably dangerous (because then, we don't need to assume something amazing going on).

Am I misunderstanding something?

If I understand you right, what I'm saying is that both the if's are clearly correct, but I believe that the 'therefore' doesn't follow.

To me, the problem is essentially the same as the following: You are one of 10,000 people who have been taken to a prison. Nobody has explained why. Every morning, the guards randomly select 9/10 of the remaining prisoners and take them away, without explanation. Among the prisoners, there are two theories: one faction thinks that the people taken away are set free. The other faction thinks that they are getting executed.

It is the fourth morning. You're still in prison. The nine other people who remained have just been taken away. Now, if the other people have been executed, then you are the only remaining observer, so if you're a random observer, it's not surprising that you should find yourself in prison. But if the other people have been set free, then they're still alive, so if you're a random observer, there is only a 1/10,000 chance that you are still in prison. Both of these statements are *correct* if you are a random (surviving) observer. But it doesn't follow that you should conclude that the other people are getting shot, does it? (Clearly you learned nothing about that, because whether or not they get shot does not affect anything you're able to observe.)

Now, I get that you probably think something makes this line of reasoning not apply when we consider the anthropic principle (although I do think that you're wrong then :)). But my point is that, unless I'm missing something, the probabilistic reasoning is the same as in my restatement of your argument, so if the laws of probability don't make the conclusion follow in this scenario, they don't make the conclusion follow in your argument, either.

I should say that I don't reject "the" anthropic principle. I wholeheartedly embrace the version of it that I can derive from the kind of reasoning as above. For example: If our theory of evolution seems to suggest that there is one very improbable step in the evolution of intelligent life -- so improbable that it's not likely to have happened even a single time in the history of the universe -- should we then take that as a reason to conclude that something is wrong with our theory? If we are pretty sure that there is only a single universe, yes. If we have independent evidence that all possible Everett branches exist, no. (If something like mangled worlds is true, maybe -- but let's not get into that now...)

Why should we reject our theory in a single universe, but not if all Everett branches exist? Consider again the prison analogy. You observed how the guards chose the prisoners to take away, and it sure looked random. But now you are the only surviving prisoner. Should you conclude that the guards' selection process wasn't really random? There's no reason to: If the guards used a random process, one prisoner had to remain on the fourth day, and this may just as well have been you -- nothing surprising going on. This corresponds to the scenario where all possible Everett branches exist.

But suppose that you were the only prisoner to begin with (and you know this), and every morning the guards threw a ten-sided die which is marked "keep in prison" on one side and "take away" on the nine others -- and it came up "keep in prison" every morning. In this case, it seems to me that you *do* have a reason to start suspecting that the die is fixed (i.e., that your original theory, that the "keep in prison" outcome had only a 10% chance of happening, was wrong). This corresponds to the scenario where there is only a single universe.

This is how I always understood the anthropic principle when reading about it, and *this* version of it I embrace. The *other* version I'm pretty sure is wrong.

That said, if you have the energy to do so, please do keep arguing with me! :-) I don't really understand this "other anthropic principle," and I'm rejecting it simply because it disagrees with my calculations and I'm really pretty sure that I'm applying my probability theory right here. If I'm wrong, that will be humbling, but I would still rather know than not know, please :-)
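The fixed-die case is an ordinary Bayesian update, and the numbers are easy to sketch. The prior on "the die is fixed" below is an illustrative assumption:

```python
from fractions import Fraction

def p_fixed_after(k, prior=Fraction(1, 1000)):
    """Posterior that the die is fixed, after k 'keep in prison' mornings.

    A fair ten-sided die shows 'keep in prison' with probability 1/10 each
    morning; a fixed die shows it always. The 1-in-1000 prior is an
    illustrative assumption, not a value from the thread.
    """
    like_fixed = Fraction(1, 1)          # fixed die always says "keep"
    like_fair = Fraction(1, 10) ** k     # fair die says "keep" k times running
    return like_fixed * prior / (like_fixed * prior + like_fair * (1 - prior))

for k in [0, 1, 3, 5]:
    print(k, float(p_fixed_after(k)))
```

Even from a 0.1% prior, three "keep in prison" mornings push the posterior to about 50%, and five push it past 99% — which is the sense in which the lone prisoner, unlike the surviving one-of-10,000, really does learn something.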

> My prior probability for the existence of a secret and powerful crackpot group willing to sabotage the LHC to prevent it from "destroying the world" is larger than my prior probability for the LHC-actually-destroying-the-world scenarios being true

Alejandro has a good point.

Benja:

> But it doesn't follow that you should conclude that the other people are getting shot, does it?

I'm honestly not sure. It's not obvious to me that you shouldn't draw this conclusion if you already believe in MWI.

> (Clearly you learned nothing about that, because whether or not they get shot does not affect anything you're able to observe.)

It seems like it does. If people are getting shot then you're not able to observe any decision by the guards that results in you getting taken away. (Or at least, you don't get to observe it for long - I don't think the slight time lag matters much to the argument.)

I did a calculation here:

http://tinyurl.com/3rgjrl

and concluded that I would start to believe there was something to the universe-destroying scenario after about 30 clear, uncorrelated mishaps (even when taking a certain probability of foul play into account).

...Allan, sorry for the delay in replying. Hopefully tomorrow. (In my defense, I've spent the whole day seriously thinking about the problem ;-))

OK, I've finally had a little time to go over these comments and I am now persuaded to take the position of simon and Benja Fallenstein. I'd already decided to be a Presumptuous Philosopher and accept self-indication, and this just supports that further.

An excellently clear way of putting it!

*bites bullet*

I suspect that anthropics is easy to solve if you think in terms of cognitive decision theory.

Okay, after reading several of Nick Bostrom's papers and mulling about the problem for a while, I think I may have sorted out my position enough to say something interesting about it. But now I'm finding myself suffering from a case of writer's block in explaining it, so I'll try to pull a small-scale Eliezer and say it in a couple of hiccups, rather than one fell swoop :-)

I *have* been significantly wrong at least twice in this thread, the first time when I thought everybody was reasoning from the same definitions as me, but getting their math wrong, and the second time when I said I held my view because I was "pretty sure I [was] applying my probability theory right". I had an intuition and a formal argument, but then I found that the two disagree in some edge cases, and I decided to retain the intuition, so my formal argument was *not* the solid rock I thought it was. All of which is a long-winded way of saying, it's about time that I concede that I may *still* be wrong about this, and if so, please do help me figure it out...

We all seem to agree that the issue depends on whether we accept self-indication, and that self-indication is equivalent to being a thirder in the Sleeping Beauty problem. When I first learned about this problem from Robin's post, I was *very* convinced that the halfer view was right -- to the tune of having been willing to bet money on it -- for about fifteen minutes. Then I thought about something like the following variation of it:

I cannot conceive of a reason not to assign the probability 1/4 to each of these propositions, and in my opinion, when Beauty sees the light flash red, she must update her subjective probability in the obvious way (or the notion of subjective probability no longer makes much sense to me). Then, of course, after seeing the light flash blue, Beauty's probability that the coin fell heads is 1/3.

Short of assigning special ontological status to being consciously awake, I don't see a way to distinguish between the original Sleeping Beauty and my variation after the light flashes blue, so I'm a thirder now. My new view is that observing the random variable (color=blue) can change my probability in non-mysterious ways, so observing the random variable (awake=yes) can, too.
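The thirder arithmetic can be illustrated by simulation: over many repetitions of the experiment, count what fraction of awakenings follow heads. (Whether this long-run frequency is the right *credence* for Beauty is of course exactly what halfers dispute; the simulation only confirms the frequency itself.)

```python
import random

def heads_fraction_of_awakenings(trials=100_000, seed=0):
    """Fraction of awakenings that occur in heads-runs of the experiment.

    Heads -> Beauty is awakened once (Monday only);
    tails -> she is awakened twice (Monday and Tuesday).
    """
    rng = random.Random(seed)
    heads_awakenings = tails_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # fair coin: heads
            heads_awakenings += 1
        else:                    # tails
            tails_awakenings += 2
    return heads_awakenings / (heads_awakenings + tails_awakenings)

print(heads_fraction_of_awakenings())  # close to 1/3
```

Since tails-runs produce twice as many awakenings as heads-runs, the fraction converges to 1/3 — the thirder answer that self-indication delivers.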

In his paper on the problem, Nick argues for a "solution" that would apply to my version, too. He would reject my view of how Beauty must update her probabilities if she sees a blue light. His argument goes something like this:

What I really need to consider is all of Beauty's observer-moments in all possible worlds; Beauty has a prior over these moments, considers the evidence she has for which moment she is in, and does a Bayesian update. The moment when Beauty wakes up is different from the moment when the light flashes, so she needs to consider at least *eight* possible moments: (h1-) heads, Monday, she wakes up; (h1+) heads, Monday, the light flashes; and so on. Nothing in the axioms of probability theory requires the probability of (h1+) to be related in any way to the probability of (h1-)! In fact, Nick would argue, we should simply assign probabilities like this:

p(xx- | h1- \/ h2- \/ t1- \/ t2-) = 1/4 (for xx in {h1,h2,t1,t2})

p(h1+ | h1+ \/ t1+ \/ t2+) = 1/2

p(xx+ | h1+ \/ t1+ \/ t2+) = 1/4 (for xx in {t1,t2})

I agree that this is formally consistent with the axioms of probability, but in order for Beauty to be rational, in my opinion she must still update her probability estimate in the "normal" way when the light flashes blue. Nick's approach strikes me as saying, "I'm a completely new observer-moment now, why should I care about my probability estimates a minute ago?" If our formalism allows us to do that, I think our formalism isn't strong enough. In this case, I'd require that

p(xx- | h1- \/ h2- \/ t1- \/ t2-)

= p(xx+ | h1+ \/ h2+ \/ t1+ \/ t2+)

-- i.e., *before* conditioning on the actual colors she sees, Beauty's probability estimates when the light flashes must be the same as when she wakes up. I don't know how well this generalizes, but if we accept it in this case, it blocks Nick's proposal.

Anybody here who finds Nick's solution intuitively right?

It may be silly to continue this here, since I'm not sure anybody's still reading, but at least I'm writing it down at all this way, so... here's "Nick's Sleeping Beauty can be Dutch Booked" (by Nick's own rules)

In his Sleeping Beauty paper, Nick considers the ordinary version of the problem: Beauty is awakened on Monday. An hour later, she is told that it is Monday. Then she is given an amnesia drug and put to sleep. A coin is flipped. If the coin comes up tails, she is awakened again on Tuesday (and can't tell the difference to Monday). Otherwise, she sleeps through to Wednesday.

Nick distinguishes five possible observer-moments: Beauty wakes up on Monday (h1 and t1, depending on heads/tails); Beauty is told that it's Monday (h1m and t1m); Beauty wakes up on Tuesday (t2). Let P-(x) := P(x | h1 \/ t1 \/ t2), and P+(x) := P(x | h1m \/ t1m).

There are two possible worlds, heads-world (h1,h1m) and tails-world (t1,t1m,t2). Within each of the groups (h1,t1,t2) and (h1m,t1m), Nick assigns equal probabilities to each observer-moment in a given possible world. This gives:

P-(h1) = 1/2; P-(t1) = 1/4; P-(t2) = 1/4

P+(h1m) = 1/2; P+(t1m) = 1/2

In his paper, Nick considers the following Dutch book, suggested by a referee (I'm quoting from the paper):

Nick dismisses this argument because if the coin falls tails, Beauty will accept the *first* bet *twice*, once on Monday and once on Tuesday. Now, on Tuesday no money changes hands, so what's the difference? Well, Nick thinks it's very interesting that it could make a difference, but clearly it does, you see, because otherwise Sleeping Beauty could be Dutch booked if she accepts his probability assignments!

Instead of trying to argue that it makes no difference, let me just exhibit a variation where Beauty only accepts every bet at most once in every possible world.

Before Beauty is put to sleep, we throw a second fair coin, labelled A and B. If it comes up A, then on Monday, we tell Beauty, "It's day A!" And if we wake her up on Tuesday, we tell her, "It's day B!" If the coin comes up B, Monday is B, and Tuesday is A.

We now have doubled the number of worlds and observer-moments. The worlds are HA, HB, TA, and TB, each with probability 1/4; the observer-moments are ha1, ha1m; hb1, hb1m; ta1, ta1m, ta2; tb1, tb1m, tb2. P- and P+ are defined analogously to before, and again, we assign equal probability to each of the awakenings in every possible world (and make them sum to the probability of that world). This gives:

P-(ha1) = P-(hb1) = 1/4

P-(ta1) = P-(ta2) = P-(tb1) = P-(tb2) = 1/8

P+(ha1m) = P+(hb1m) = P+(ta1m) = P+(tb1m) = 1/4

The sets of observer-moments that Beauty cannot distinguish are: {ha1,ta1,tb2}; {hb1,tb1,ta2}; {ha1m,ta1m}; {hb1m,tb1m}. (E.g., on {ha1,ta1,tb2}, Beauty just knows that she's been awakened and that it's "Day A." In world B, Tuesday is Day A, thus tb2 is in this set.)
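Both the probability assignments and the structure of these sets can be verified by enumeration; here is a small sketch of mine (same splitting rule as before, now with four equiprobable worlds):

```python
# Sketch (mine): four equiprobable worlds; each world's 1/4 prior is
# split equally among its observer-moments in the conditioning class.
worlds = {
    "HA": ["ha1", "ha1m"],
    "HB": ["hb1", "hb1m"],
    "TA": ["ta1", "ta1m", "ta2"],
    "TB": ["tb1", "tb1m", "tb2"],
}

def conditional(cls):
    probs = {}
    for moments in worlds.values():
        in_cls = [m for m in moments if m in cls]
        for m in in_cls:
            probs[m] = 0.25 / len(in_cls)
    total = sum(probs.values())
    return {m: p / total for m, p in probs.items()}

p_minus = conditional({"ha1", "hb1", "ta1", "ta2", "tb1", "tb2"})
p_plus = conditional({"ha1m", "hb1m", "ta1m", "tb1m"})
print(p_minus["ha1"], p_minus["ta1"], p_plus["ha1m"])  # 0.25 0.125 0.25

# Each indistinguishable set contains at most one moment per world.
world_of = {m: w for w, ms in worlds.items() for m in ms}
for s in [{"ha1", "ta1", "tb2"}, {"hb1", "tb1", "ta2"},
          {"ha1m", "ta1m"}, {"hb1m", "tb1m"}]:
    assert len({world_of[m] for m in s}) == len(s)
```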

Note well that none of these sets contains more than one observer-moment from the same possible world. I exhibit the following variation of the above Dutch Book.

Beauty now loses $5 if the day-label coin comes up A, and breaks even if it comes up B. Every bet is accepted exactly once in every possible world in which it is offered at all. We could add symmetrical additional bets to make sure that Beauty also loses money in B worlds, but I think I've made my point. Nick can create his priors over observer-moments without violating the axioms of probability, but if it worries him that Beauty can be Dutch-booked in the way he discusses in his paper, I do believe he needs to be worried...

So if I think that (something like) the Self-Indication Assumption is correct, what about Nick's standard thought experiment in which the silly philosopher thinks she can derive the size of the cosmos from the fact she's alive?

Well, the experiment does worry me, but I'd like to note that self-sampling without self-indication produces, in fact, a very similar result (if the reference class is all conscious observers, which Nick's version of the experiment seems to assume). I give you The Presumptuous Philosopher and the Case of the Twin Stars:

If you accept this thought experiment (which requires only self-sampling) but reject a variation where T1 is ruled out because it predicts that cosmological death rays will make life impossible in all galaxies but one in a trillion (which requires self-sampling), then I think you've allowed yourself to be suckered into implicitly assuming that conscious observation is something ontologically fundamental. Though I accept that you may not be convinced of this yet :-)

(Side note: Lest you be biased against the philosopher just because she dares to apply probability theory, do also consider the case where T1 predicts that Mars had a chance of 4/5 per year of flying out of the solar system since it came into existence -- and beat those odds by random chance every single time. Of course, in that case, the physicists would already be convinced that her reasoning is sound, so much so that they would already have applied it themselves.)

In my previous comment, I mentioned my worry that accepting observer self-sampling without self-indication means that you've been suckered into taking conscious observation as an ontological primitive. (Also, I've been careful not to use examples that involve the size of the cosmos.) I would like to suggest that instead of a prior over observer-moments in possible worlds, we start with a prior over space-time-Everett locations in possible worlds. If all possible worlds we consider have the same set of space-time-Everett locations, and we have a prior P0 over possible worlds, then I suggest that we adopt the prior over (world, location) pairs that splits each world's P0-probability uniformly among its locations.

(Actually, that's not necessarily quite right: If the "amplitude as degree of reality" interpretation is true, Everett branches should of course be weighted in the obvious way.)
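Concretely, assuming (as stated) that every possible world shares the same location set L, the uniform-splitting prior in question can be written as follows (my formalization, not a quotation):

```latex
% Uniform-splitting prior over (world, location) pairs, assuming every
% possible world w has the same set L of space-time-Everett locations:
\[
  P(w, \ell) \;=\; \frac{P_0(w)}{\lvert L \rvert},
  \qquad \ell \in L .
\]
% Summing over the locations of a world recovers P(w) = P_0(w);
% conditioning on our evidence about our actual location then yields
% the "subjective probability" distribution.
```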

As with observer-moments, we then condition on all the evidence we have about our actual space-time-Everett location in our actual possible world, and call the result our "subjective probability" distribution.

Isn't anthropic reasoning about taking into account the observer selection effects related to the fact that we are conscious observers? Sure, but it seems to me that any non-mysterious anthropic reasoning is taken care of just fine by the conditioning step. Any possible worlds, Everett branches and cosmic regions that don't support intelligent life will automatically be ruled out, for example.

The above definition trivially implies the following weak principle of self-indication:

This principle is enough to support being a thirder in the Sleeping Beauty problem, for example (which was what originally suggested it to me, when I was wondering what prior Beauty should update when she observes herself to be awake).

So what if we are uncertain about the size of the universe (so that its size depends on which possible world we are in)? Then we are faced with the same question as before: Should we treat finding ourselves in bigger universes as more probable a priori, or not?

Formally, the question we face is: if we have a prior P0 over possible worlds, what should our prior over (possible world, space-time-Everett location) pairs be?

There are two natural candidates: with self-indication, give each (world, location) pair a prior proportional to the P0-probability of its world, so that worlds with more locations receive more total prior probability; without self-indication, split each world's P0-probability equally among its locations. (As before, we may want to weigh Everett branches in the obvious way.) Both of these definitions give us the weak principle of self-indication (defined in the previous comment), since they agree with the previous comment's definition when all possible worlds contain the same number of locations. So they both support thirding in Sleeping Beauty.

But which of the definitions should we adopt? Note that sampling without self-indication has the property that P(w) = P0(w), i.e., before we condition on any evidence (including the fact that we are conscious observers), the probability of finding ourselves in world w is exactly the probability of that world, according to P0. On the face of it, this sounds exactly like what we mean by having a prior P0 over the possible worlds.

I think we may mean different things by P0 depending on how we arrive at it, though. But for the moment, let me note that while the principle of weak self-indication forces me to accept the presumptuous philosopher's position in both the Case of the Twin Stars and the Case of the Death Rays, I may still have a good reason to reject the conclusion that the cosmos is infinite with probability one.

Unfortunately, physical self-sampling without self-indication has odd consequences of its own. Consider the following thought experiment:

She calculates as follows. P0(T1) = P0(T2) = 1/2. According to T2, the universe contains a trillion more space-time locations than according to T1. But according to both theories, the universe contains only one location consistent with our evidence. According to the definition given in the previous comment, this makes T2 much less likely than T1.

Intuitively, the argument is, "According to T2, there are a trillion more places we could have found ourselves at (at most of which we would not have been conscious observers, but taking that into account would be supernatural wonder tissue). So having found ourselves at this particular place is much more surprising according to T2."
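To make the trillion-to-one claim concrete, here is a toy computation (the numbers and names are mine, purely illustrative): split each theory's prior 1/2 uniformly over its locations, and keep the single location per theory that is consistent with the evidence.

```python
from fractions import Fraction

# Toy numbers (mine, for illustration only): T2's universe has a
# trillion times as many space-time locations as T1's.
N1 = 10**6
N2 = N1 * 10**12

prior = Fraction(1, 2)  # P0(T1) = P0(T2) = 1/2

# Self-sampling without self-indication: each theory's prior is spread
# uniformly over its locations, and exactly one location per theory is
# consistent with our evidence.
weight_T1 = prior / N1
weight_T2 = prior / N2

print(weight_T1 / weight_T2)  # 1000000000000: T2 is a trillion times less likely
```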

But this argument doesn't sound very convincing to me. From where do we get this supposed lottery over space-time locations? At least, the argument sounds much less intuitively convincing than the following: "Our uncertainty is mathematical, and our observations would be exactly the same according to each theory -- we can't conclude anything about the mathematical result from the fact that one would destroy the universe, while the other would only leave it barren."

In the next comment, I'll develop that intuition into a more formal argument supporting self-indication.

Usually learning new true information increases a person's fitness, but learning about the many-worlds interpretation seems to decrease the fitness of many who learn it.

OK, my previous comment was too rude. I won't do it again, OK?

Rather than answer your question about fitness, let me take back what I said and start over. I think you and I have different terminal values.

I am going to assume -- and please correct me if I am wrong -- that you assign an Everett branch in which you painlessly wink out of existence a value of zero (neither desirable nor undesirable) and that consequently, under certain circumstances (e.g., at least one alternative Everett branch remains in which you survive) you would prefer painlessly winking out of existence to enduring pain.

My objection to this talk of destroying the universe in response to a terrorism incident, etc., is that the people whose terminal values are served by that outcome (such as, I am assuming, you) share the universe with people whose terminal values assign a negative value to that outcome (such as me). By using this method of increasing your utility you impose severe negative utility on me.

Note that if you engage in ordinary quantum suicide then my circumstances remain materially the same in both Everett branches, and the objection I just described does not apply.

Richard,

I am going to assume ... that you assign an Everett branch in which you painlessly wink out of existence a value of zero (neither desirable nor undesirable)

I'd rather say that people who find quantum suicide desirable have a utility function that does not decompose into a linear combination of individual utility functions for their individual Everett branches. Surely everybody here would find an outcome undesirable where all of their future Everett branches wink out of existence -- even if they had to deal with a terrorist attack on all of these branches, say. So if somebody prefers one Everett branch winking out and one continuing to exist to both continuing to exist, you can only describe their utility function by looking at all the branches, not by looking at the different branches individually. (Did that make sense?)

Yes, and I can see why you would rather say it that way.

My theory is that most of those who believe quantum suicide is effective assign negative utility to suffering and also assign a negative utility to death, but knowing that they will continue to live in one Everett branch removes the sting of knowing (and consequently the negative utility of the fact) that they will die in a different Everett branch. I am hoping Cameron Taylor or another commentator who thinks quantum suicide might be effective will let me know whether I have described his utility function.

If it fails 100 times in a row, I'll sue the researchers for killing me a hundred times in all those other realities.

Oh the humanity-ity-ity-ty-ty-y-y-y-y!

Of course, the future repeated failures of the LHC have got to seem non-miraculous, though, since the likelihood of each experiment failing becomes lower the more experiments you plan on running.

Perhaps some sort of funding problem after a collapse of the world financial system, but that's not likely, is it?

It's like applying the idea of quantum immortality and the anthropic principle to my own experience. Wouldn't it make sense for me to observe my apparent immortality in a world where immortality wasn't miraculous, such as when technology had advanced to a point where it was 'normal'?

A bit of a contradiction there: technology advances to the point where destruction of humanity is easy, but immortality is possible as well.

The Anti-LHC Conspiracy strikes again! LHC might not go online until 2010.

Right, that's it, I'm gonna start cooking up some nitroglycerin and book my Eurostar ticket tonight. Who's with me?

I dread to think of the proportion of my selves that have already suffered horrible gravitational death.

Holger Nielsen sides with this idea.

Playing with quantum suicide?

"Dr. Nielsen and Dr. Ninomiya have proposed a kind of test: that CERN engage in a game of chance, a “card-drawing” exercise using perhaps a random-number generator, in order to discern bad luck from the future. If the outcome was sufficiently unlikely, say drawing the one spade in a deck with 100 million hearts, the machine would either not run at all, or only at low energies unlikely to find the Higgs."

Am I misunderstanding, missing a joke, or did the overwhelming majority here consider the probability that the LHC could destroy the world non-negligible? After reading this article, I wound up looking up articles on collider safety just to make sure I wasn't crazy. My understanding of physics told me that all the talk of LHC-related doomsday scenarios was just some sort of science fiction meme. I was under the impression that artificial black holes would take levels of energy comparable to the big bang, and a micro black hole would be pretty low risk even then. (Reading the Wikipedia article further, I see that FHI was involved in the raising of concerns over the LHC, which is the closest thing to an explanation for this discussion I've found so far.)

I'm actually kinda concerned about this, since if the discussion on this page is taking LHC risk seriously, then either I or LW has serious problems modeling reality. This wouldn't be in the category of "weird local culture"; cryonics involves a lot of unknowns and most LWers notice this, and UFAI actually makes much more sense as existential risk, since an unfriendly transhuman intelligence would actually be dangerous... but there were plenty of knowns that could be used to predict the LHC's risk, and they all pointed toward the risk being infinitesimal.

If, on the other hand, this was some bit of humor playing on pop-sci memes, used to play with the anthropic principle and quantum suicide, then oops.

The question "how many LHC failures is too many?" is the question "how negligible was your prior on the LHC being dangerous, really?" Is it low enough to ignore 10 failures? 100? 1000? Do you have enough confidence in your understanding of physics to defy the data that many times?

Ok. Somehow it came across as taking the idea of LHC risk more seriously than is rational. I'm not sure why it didn't feel hypothetical enough. (I should have been tipped off when Eliezer didn't mention the obvious part where the LHC would lose funding if the failures became too numerous. I'd take 1000 LHC failures as evidence that my model of how scientists get funding is broken before concluding that the LHC is actually a doomsday weapon.)

Not both?

The idea is that the risk is infinitesimal but you want to put an approximate number on that using a method of imaginary updates - how much imaginary evidence would it take to change your mind?
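One way to put a number on it (a sketch of mine; the per-failure probability f and the tiny prior are made-up inputs): under the mundane hypothesis each independent failure has probability f, while under the anthropic hypothesis every surviving observer sees a failure, so each observed failure multiplies the odds in the anthropic hypothesis's favor by 1/f.

```python
import math

def failures_needed(prior, f):
    """Smallest n of consecutive failures after which the anthropic
    hypothesis (likelihood ~1 per failure) overtakes the mundane one
    (likelihood f per independent failure), starting from `prior`."""
    # prior * 1**n > (1 - prior) * f**n  <=>  n > log(prior/(1-prior)) / log(f)
    return math.ceil(math.log(prior / (1 - prior)) / math.log(f))

# Made-up inputs: a one-in-a-million prior on "the LHC is dangerous",
# and a 10% chance of any single mundane failure.
print(failures_needed(1e-6, 0.1))   # 6
print(failures_needed(1e-20, 0.5))  # 67
```

The point of the exercise is that the answer is driven almost entirely by the prior: every factor-of-1/f failure chips away at it, so "how many failures is too many" reads off how many orders of magnitude of prior improbability you actually assigned.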

That makes sense. I made a similar misinterpretation on a different post around the same time I read this one, so putting the two together makes me pretty confident I was not thinking at my best yesterday. (Either that, or my best is worse than I usually believe.)