"But if it comes up heads 100 times, it's taking you too long to notice"
Ros. Heads. (He puts it in his bag. The process is repeated.) Heads. (Again.) Heads. (Again.) Heads. (Again.)
Guil. (Flipping a coin) There is an art to the building of suspense.
Ros. Heads.
Guil. (Flipping another) Though it can be done by luck alone.
Ros. Heads.
Guil. If that's the word I'm after.
Ros. (Raises his head) 76! (Guil gets up but has nowhere to go. He spins the coin over his shoulder without looking at it.) Heads.
Guil. A weaker man might be moved to re-examine his faith, if in nothing else at least in the law of probability. (He flips a coin back over his shoulder.)
Ros. Heads.
Guil. (Musing) The law of probability, it has been asserted, is something to do with the proposition that if six monkeys - (He has surprised himself) if six monkeys were. . .
Ros. Game?
Guil. Were they?
Ros. Are you?
-- Rosencrantz & Guildenstern Are Dead, Tom Stoppard, Act I
Perhaps the question could also be asked this way: How many times does the LHC have to inexplicably fail before we take it as scientific confirmation that world-destroying black holes and/or strange particles are indeed produced by LHC-level collisions? Would we treat such a scenario as a successful experimental result for the LHC?
John Cramer wrote a novel with an anthropic explanation for the cancellation of the SSC:
http://www.amazon.com/Einsteins-Bridge-John-Cramer/dp/0380788314
Just to make sure I'm getting this right... this is sort of along the same lines of reasoning as quantum suicide?
It depends on the type of "fail" - quenches are not uncommon. And also their timing - the LHC is so big, and it's the first time it's been operated. Expect malfunctions.
But if it were tested for a few months before, to make sure the mechanics were all engineered right, etc., I guess it would only take a few (fewer than 10) instances of the LHC failing shortly before it was about to go big for me to seriously consider an anthropic explanation...
Another thought. Suppose a functioning LHC does in fact produce world-destroying scenarios. Would we see: A) an LHC with mechanical failures? or B) an LHC where all collisions happen except world-destroying ones? If B, would the LHC be giving us biased experimental results?
I'm confused by your last comment - what use would the LHC be in a global economic crisis or nuclear war? I don't suppose you mean something like "rig the LHC to activate if the market does not recover by date X according to measure Y, and then we will only be able to observe the scenario in which the market does recover" or something like that, do you?
IMHO if anthropics worked that way and if the LHC really were a world-killer, you'd find yourself in a world where we had the propensity not to build the LHC, not one where we happened not to build one due to a string of improbable coincidences.
Say our prior odds for the LHC being a destroyer of worlds are a billion to one against. Then this hypothesis is at negative ninety decibels. Conditioned on the hypothesis being true, the probability of observing failure is near unity, because in the modal worlds where the world really is destroyed, we don't get to make an observation--or we won't get to remember it very long. Say that conditioned on the hypothesis being false, the probability of observing failure is one-fifth--this is very delicate equipment, yes? So each observation of failure gives us 10log(1/0.2), or about seven decibels of evidence for the hypothesis. We need ninety decibels of evidence to bring us to even odds; ninety divided by seven is about 12.86. So under these assumptions it takes thirteen failures before we believe that the LHC is a planet-killer.
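For anyone who wants to check the arithmetic, here is the same calculation as a short script (the billion-to-one prior and the one-in-five failure probability are taken from the comment above; they are illustrative, not measured):

```python
import math

prior_odds = 1e-9                       # a billion to one against the LHC being a world-destroyer
prior_db = 10 * math.log10(prior_odds)  # -90 decibels

p_fail_given_true = 1.0   # surviving observers only ever see failures
p_fail_given_false = 0.2  # "very delicate equipment"

# Evidence per observed failure, in decibels
evidence_db = 10 * math.log10(p_fail_given_true / p_fail_given_false)

# Failures needed to bring the hypothesis up to even odds (0 dB)
failures_needed = math.ceil(-prior_db / evidence_db)
print(f"{evidence_db:.2f} dB per failure; {failures_needed} failures to reach even odds")
# -> 6.99 dB per failure; 13 failures to reach even odds
```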
First collisions aren't scheduled to have happened yet, are they? In which case, the failure can't be seen as anthropic evidence yet, since we might as well be in a world where it hasn't failed, since such a world wouldn't have been destroyed yet in any case.
But if I'm not mistaken, even old failures will become evidence retrospectively once first collisions are overdue, since (assuming the unlikely case of the LHC actually being dangerous) all observers still alive would be in a world where the LHC failed; when it failed being irrelevant.
As much as the AP fascinates me, it does my head in. :)
Eliezer, it's a good question and a good thought experiment except for the last sentence, which assumes a conservation of us as subjective conscious entities that the anthropic principle doesn't seem to me to endorse.
You can also add into your anthropic principle mix the odds that increasing numbers of experts think we can solve biological aging within our lifetime, or perhaps that should be called the solipsistic principle, which may be more relevant for us as persisting observers.
At the risk of asking the obvious:
Does the fact that no one has yet succeeded in constructing transhuman AI imply that doing so would necessarily wipe out humanity?
Originally I was going to say yes to the last question, but after thinking over why a failure of the LHC now (before it would destroy Earth) doesn't let me conclude anything by the anthropic principle, I'm going to say no.
Imagine a world in which CERN promises to fire the Large Hadron Collider one week after a major terrorist attack. Consider ten representative Everett branches. All those branches will be terrorist-free for the next few years except number 10, which is destined to suffer a major terrorist attack on January 1, 2009.
On December 31, 2008, Yvains 1 through 10 are perfectly happy, because they live in a world without terrorist attacks.
On January 2, 2009, Yvains 1 through 9 are perfectly happy, because they still live in worlds without terrorist attacks. Yvain 10 is terrified and distraught, both because he just barely escaped a terrorist attack the day before, and because he's going to die in a few days when they fire the LHC.
On January 8, 2009, CERN fires the LHC, killing everyone in Everett branch 10.
Yvains 1 through 9 aren't any better off than they would've been otherwise. Their universe was never destined to have a terrorist attack, and it still hasn't had a terrorist attack...
Unless you just consider it a Mouse That Roared scenario in which no one dares commit a terrorist attack under threat of global annihilation.
(just read the book, it's well worth it)
Blowing up the world in response to terrorist attack is like shooting yourself in the head when someone steps on your foot, to make subjective probability of your feet being stepped on lower.
Just realized that several sentences in my previous post make no sense because they assume Everett branches were separate before they actually split, but I think the general point still holds.
This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction. However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all. (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)
I can only see this statement making any sense if you think we should behave as if nature first randomly picked a value of a global cross-world time parameter...
Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"
This remark may be somewhat premature
Uh, isn't it actually nonsense? The anthropic principle is supposed to explain how you got lucky enough to exist at all, not how you got lucky enough to keep existing.
The anthropic principle strikes me as being largely too clever for its own good, at least, the people who think you can sort a list in linear time by randomizing the list, checking if it's sorted, and if it's not, destroying the world.
The anthropic principle strikes me as being largely too clever for its own good, at least, the people who think you can sort a list in linear time by randomizing the list, checking if it's sorted, and if it's not, destroying the world.
Maybe it's stupid and evil, but what stops it from actually working?
"How many times does a coin have to come up heads before you believe the coin is fixed?"
I think your LHC question is closer to, "How many times does a coin have to come up heads before you believe a tails would destroy the world?" Which, in my opinion, makes no sense.
I bet the terrorists would target the LHC itself, so after the terrorist attack there's nothing left to turn on.
Oh God I need to read Eliezer's posts more carefully, since my last comment was totally redundant.
As others have noted, it seems straightforward to use Bayes' rule to decide how much to believe that LHC malfunctions were selection effects - the key question is the prior. As to the last question, even if I were confident I lived in an infinite universe, so that there was always some version of me that lived somewhere, I still wouldn't want to kill off most versions of me. So all else equal I'd never want to fire the LHC if I believed doing so killed that version of me.
Brilliant post.
I almost want it to fail a few more times so that the press latch on to this idea. Imagine journalists trying to (a) understand and (b) articulate the anthropic principle across many worlds. It would be hilarious.
Actually, failures of the LHC should never have any effect at all on our estimate of the probability that if it did not fail it would destroy Earth.
This is because the ex ante probability of failure of the LHC is independent of whether or not if it turned on it would destroy Earth. A simple application of Bayes' rule.
Now, the reason you come to a wrong conclusion is not because you wrongly applied the anthropic principle, but because you failed to apply it (or applied it selectively). You realized that the probability of failure given survival is higher un...
To clarify, I mean failures should not lead to a change of probability away from the prior probability; of course they do result in a different probability estimate than if the LHC succeeded and we survived.
If: (The probability that the LHC's design is flawed and because of this flaw the LHC will never work) is much, much greater than (the probability that the LHC would destroy us if it were to function properly), then regardless of how many times the LHC failed it would never be the case that we should give any significant weight to the anthropic explanation.
Similarly, if the probability that someone is deliberately sabotaging the LHC is relatively high then we should also ignore the anthropic explanation.
My prior probability for the existence of a secret and powerful crackpot group willing to sabotage the LHC to prevent it from "destroying the world" is larger than my prior probability for the LHC-actually-destroying-the-world scenarios being true, so after many mechanical failures I would rather believe the first hypothesis than the second one.
Simon: the ex ante probability of failure of the LHC is independent of whether or not if it turned on it would destroy Earth.
But - if the LHC was Earth-fatal - the probability of observing a world in which the LHC was brought fully online would be zero.
(Applying anthropic reasoning here probably makes more sense if you assume MWI, though I suspect there are other big-world cosmologies where the logic could also work.)
Allan, I am of course aware of that (actually, it would probably take time, but even if the annihilation were instantaneous the argument would not be affected).
There are 4 possibilities:
1. The LHC would destroy Earth, and it fails to operate.
2. The LHC would destroy Earth, and it works (destroying Earth).
3. The LHC would not destroy Earth, and it fails to operate.
4. The LHC would not destroy Earth, and it works.
The fact that conditional on survival possibility 2 must not have happened has no effect on the relative probabilities of possibility 1 and possibility 3.
But the destruction of the Earth by a black hole or a strangelet would not be instantaneous, as in a YouTube movie.
The black hole would grow slowly, but exponentially. By some assumptions it could take 27 years to eat the Earth, so we would have time to understand our mistake and to suffer from it. The main harm from the black hole would be its energy release: if the black hole sits at the centre of the Earth, this energy would escape as violent volcanic eruptions.
Because of the black hole's exponential growth, the biggest part of the energy would be released in the last years of its exist...
"After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?"
After observing 100 failures in a row I would expect a failure to occur after the next attempt to switch it on too. So it doesn't seem a reliable means of preventing terrorism or an economic crash even if the anthropic multi-world "ideology" were true.
On the other hand, if somebody were able to show that the amplitude of LHC's unexpected failure for technical reasons was significantly lower than the amplitude of terrorist-free future...
IMHO if anthropics worked that way and if the LHC really were a world-killer, you'd find yourself in a world where we had the propensity not to build the LHC, not one where we happened not to build one due to a string of improbable coincidences.

Incorrect reasoning; every branching compatible with sentient organisms contains sentient organisms monitoring its conditions.
The organisms that are in branchings in which LHC facilities were built perceive themselves to be in such a world, no matter how improbable it is. It doesn't matter if it's quite unlikely for you to win a lottery -- if you do win a lottery, you'll eventually accumulate enough data to conclude that's precisely what's happened.
The black hole would grow slowly, but exponentially. By some assumptions it could take 27 years to eat the Earth, so we would have time to understand our mistake and to suffer from it.
I am curious about these assumptions. A black hole with the mass of the whole Earth has a Schwarzschild radius of about 1 cm. At the start the black hole would be much lighter, so it's not clear to me how this black hole, sitting in the centre of the Earth, could eat anything.
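For reference, the Schwarzschild radius r_s = 2GM/c^2 of an Earth-mass black hole is easy to check with standard constants (a back-of-the-envelope sketch, not a statement about how such a hole would actually accrete):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # mass of the Earth, kg

r_s = 2 * G * M_earth / c**2
print(f"Schwarzschild radius of an Earth-mass black hole: {r_s * 100:.2f} cm")
# -> about 0.89 cm, consistent with the "about 1 cm" figure above
```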
simon,
Actually, I think it might (though I'm obviously open to correction) if you take the anthropic principle as a given (which I do not).
One thing you're missing is that there are two events here, call them A and B:
A. LHC would destroy Earth
B. LHC works
So the events, which are NOT independent, should look more like:
Outcome 2 is "closer" to outcome...
Robinson, I could try to nitpick all the things wrong with your post, but it's probably better to try to guess at what is leading your intuition (and the intuition of others) astray.
Here's what I think you think:
I'm with Brian Jaress, who said, 'I think your LHC question is closer to, "How many times does a coin have to come up heads before you believe a tails would destroy the world?"' OTOH, I have a very poor head for probabilities, Bayesian or otherwise, and in fact the Monty Hall thing still makes my brain hurt. So really, I make a lousy "me too" here.
That said: Could someone explain why repeated mechanical failures of the LHC should in any way imply the likelihood of it destroying the world, thus invoking the anthropic principle? Given the crowd, I'm assuming there's more to it than "OMG technology is scary and it doesn't even work right!", but I'm not seeing it.
Okay, it scares me when I realize that I've been getting probability theory wrong, even though I seemed to be on perfectly firm ground. But I'm finding that it's even more scary that even our hosts and most commenters here seem to be getting it backwards -- at least Robin; given that the last question in the post seems so obviously wrong for the reasons pointed out already, I'm starting to wonder whether the post is meant as a test of reasoning about probabilities, leading up to a post about how Nature Does Not Grade You On A Curve (grumble :)). Thanks to ...
The intuition behind the math: If the LHC would not destroy the world, then on date X, a very small number of Everett branches of Earth have the LHC non-working due to a string of random failures, and most Everett branches have the LHC happily chugging ahead. If the LHC would destroy the world, a very small number of Everett branches of Earth have the LHC non-working due to a string of random failures -- and most Everett branches have Earth munched up into a black hole.
The very small number of Everett branches that have the LHC non-working due to a string ...
I'm going to try another explanation that I hope isn't too redundant with Benja's.
Consider the events
W = the LHC would destroy Earth
F = the LHC fails to operate
S = we survive (= F OR not W)
We want to know P(W|F) or P(W|F,S), so let's apply Bayes.
First thing to note is that since F => S, we have P(W|F) = P(W|F,S), so we can just work out P(W|F)
Bayes:
P(W|F) = P(F|W)P(W)/P(F)
Note that none of these probabilities are conditional on survival. So unless in the absence of any selection effects the probability of failure still depends on whether the LHC would...
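Simon's independence claim is easy to check numerically. A Monte Carlo sketch of his model (the numbers w and z below are made up for illustration; the key assumption is his, that failure is independent of W ex ante):

```python
import random

random.seed(0)
w = 0.01  # illustrative prior P(W): the LHC would destroy Earth
z = 0.2   # illustrative P(F): failure probability, independent of W by assumption

fails, fails_and_W = 0, 0
for _ in range(10**6):
    W = random.random() < w
    F = random.random() < z
    if F:  # condition on observing a failure (F implies S in this model)
        fails += 1
        fails_and_W += W

print(f"P(W|F) = {fails_and_W / fails:.4f} vs. prior P(W) = {w}")
# -> roughly 0.01 either way: under these assumptions, a failure is no evidence about W
```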
Benja: Good explanation! Intuitively, it seems to me that your argument holds if there are Tegmark IV branches with different physical laws, but not if whether the LHC would destroy Earth is fixed across the entire multiverse. (Only in the latter case, if it would destroy the Earth, the objective frequency of observations of failure - among observations, period - would be 1.)
Benja, I'm not really smart enough to parse the maths, but I can comment on the intuition:
The very small number of Everett branches that have the LHC non-working due to a string of random failures is the same in both cases [of LHC dangerous vs. LHC safe]
I see that, but if the LHC is dangerous then you can only find yourself in the world where lots of failures have occurred, but if the LHC is safe, it's extremely unlikely that you'll find yourself in such a world.
Thus, if all you know is that you are in an Everett branch in which the LHC is non-working due ...
Simon's last comment is well said, and I agree with everything in it. Good job, Simon and Benja.
Although the trickiest question was answered by Simon and Benja, Eliezer asked a couple of other questions, and Yvain gave a correct and very clear answer to the final question.
Or so it seems to me.
Here's what that means for improving intuition: one should feel surprised at surviving a quantum suicide experiment, instead of thinking "well, of course I would experience survival".

You can (and should) be surprised that the device failed. You should not be surprised that you survived -- it's the only way you can feel anything at all.
You always survive.
Simon: As I say above, I'm out of my league when it comes to actual probabilities and maths, but:
P(W|F) = P(F|W)P(W)/P(F)
Note that none of these probabilities are conditional on survival.
Is that correct? If the LHC is dangerous and MWI is true, then the probability of observing failure is 1, since that's the only thing that gets observed.
An analogy I would give is:
You're created by God, who tells you that he has just created 10 people who are each in a red room, and depending on a coin flip God made, either 0 or 10,000,000 people who are each in a blue room...
If you're conducting an experiment to test a hypothesis, the first thing you have to do is set up the apparatus. If you don't set up the apparatus so it produces data, you haven't tested anything. Just like if you try to take a urine sample, and the subject can't pee. The experiment has failed to produce data, not the same as the data failing to prove the hypothesis.
First thing to note is that since F => S, we have P(W|F) = P(W|F,S), so we can just work out P(W|F)
With respect for your diligent effort and argument, nonetheless: Fail.
F => S -!-> P(X|F) = P(X|F,S)
In effect what Eliezer and many commenters are doing is substituting P(F|W,S) for P(F|W). These probabilities are not the same and so this substitution is illegitimate.
(Had your argument above been correct, the probabilities would have been the same.)
Conditioning on survival, or more precisely, the (continued?) existence of "observers"...
I retract my endorsement of Simon's last comment. Simon writes that S == (F or not W). False: S ==> (F or not W), but the converse does not hold (because even if F or not W, we could all be killed by, e.g., a giant comet). Moreover, Simon writes that F ==> S. False (for the same reason). Finally, Simon writes, "Note that none of these probabilities are conditional on survival," and concludes from that that there are no selection effects. But the fact that a true equation does not contain any explicit reference to S does not mean that ...
simon, that's right, of course. The reason I'm dragging branches into it is that for the (strong) anthropic principle to apply, we would need some kind of branching -- but in this case, the principle doesn't apply [unless you and I are both wrong], and the math works the same with or without branching.
Eliezer, huh? Surely if F => S, then F is the same event as (F /\ S). So P(X | F) = P(X | F, S). Unless P(X | F, S) means something different from P(X | F and S)?
Allan, you are right that if the LHC would destroy the world, and you're a surviving observer,...
While I'm happy to have had the confidence of Richard, I thought my last comment could use a little improvement.
What we want to know is P(W|F,S)
As I pointed out F=> S so P(W|F,S) = P(W|F)
We can legitimately calculate P(W|F,S) in at least two ways:
1. P(W|F,S) = P(W|F) = P(F|W)P(W)/P(F) <- the easy way
2. P(W|F,S) = P(F|W,S)P(W|S)/P(F|S) <- harder, but still works
there are also ways you can get it wrong, such as:
3. P(W|F,S) != P(F|W,S)P(W)/P(F) <- what I said other people were doing last post
4. P(W|F,S) != P(F|W,S)P(W)/P(F|S) <...
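For completeness, the equivalence of methods 1 and 2 is a two-line manipulation once you grant F => S (so that the event F AND S is just F):

```latex
P(W \mid F, S)
  = \frac{P(F \mid W, S)\,P(W \mid S)}{P(F \mid S)}
  = \frac{P(F, W, S)}{P(W, S)} \cdot \frac{P(W, S)}{P(S)} \cdot \frac{P(S)}{P(F, S)}
  = \frac{P(F, W)}{P(F)}
  = P(W \mid F)
```

The invalid forms 3 and 4 mix conditioned and unconditioned factors, which is exactly the substitution of P(F|W,S) for P(F|W) that simon objected to earlier.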
Allan, oh **, the elementary math in my previous comment is completely wrong. (In the scenario I gave, the probability that you have breast cancer is 1%, not 10%, before taking the test.) My argument doesn't even approximately work as given: if having breast cancer makes it more likely that you get a positive mammography, then indeed getting a positive mammography must make it more likely that you have breast cancer. Sorry!
(I'm still convinced that my argument re the LHC is correct, but I realize that I'm just looking stupid right now, so I'll just shut up for now :-))
Sorry Richard, well of course they aren't necessarily independent. I wasn't quite sure what you were criticising. But I pointed out already that, for example, a new physical law might in principle both cause the LHC to fail and cause it to destroy the world if it did not fail. But I pointed out that this was not what people were arguing, and assuming that such a relation is not the case then the failure of the LHC provides no information about the chance that a success would destroy the world. (And a small relation would lead to a small amount of information, etc.)
Oops, I fail! I thought F >= S meant "F is larger than S". But looking at the definitions of terms, Fail >= Survival must mean "Fail subset_of Survival". (I do protest that this is an odd symbol to use.)
Okay, looking back at the original argument, and going back to definitions...
If you've got two sets of universes side-by-side, one where the LHC destroys the world, and one where it doesn't, then indeed observing a long string of failures doesn't help tell you which universe you're in. However, after a while, nearly all the obs...
Eliezer, I used "=>" (intending logical implication), not ">=".
I would suggest you read my post above on this second page, and see if that changes your mind.
Also, in a previous post in this thread I argued that one should be surprised by externally improbable survival, at least in the sense that it should make one increase the probability assigned to alternative explanations of the world that do not make survival so unlikely.
Eliezer, I used "=>" (intending logical implication), not ">=".
Zis would seem to explain it.
(I use -> to indicate logical implication and => to indicate a step in a proof, or otherwise implication outside the formal system - I do understand this to be conventional.)
I would suggest you read my post above on this second page, and see if that changes your mind.
Not particularly. I use 4 but with P(W|S) = P(W) which renders it valid. (We're not talking about two side-by-side universes, but about prior probabilities on physical law plus a presumption of survival.)
After surviving a few hundred rounds of quantum suicide the next round will probably kill you.
Are you familiar with the story of the man who got the winning horse race picks in the mail the day before the race was run? Six times in a row his mysterious benefactor was right, even correctly calling a victory for a horse with forty-to-one odds. Now he gets an envelope in the mail from the same mysterious benefactor asking for $1,000 in exchange for the next week's picks. Are you saying he should take the deal and clean up?
Not particularly. I use 4 but with P(W|S) = P(W) which renders it valid. (We're not talking about two side-by-side universes, but about prior probabilities on physical law plus a presumption of survival.)
You mean you use method 2. Except you don't, or you would come to the same conclusion that I do. Are you claiming that P(W|S)= P(W)? Ok, I suspect you may be applying Nick Bostrom's version of observer selection: hold the probability of each possible version of the universe fixed independent of the number of observers, then divide that probability equally ...
Whoops, I didn't notice that you did specifically claim that P(W|S)=P(W).
Do you arrive at this incorrect claim via Bostrom's approach, or another one?
This is a subject I've long been meaning to give some thought too, but at the moment I'm pretty swamped - hope to get back to it when I have more time.
Simon, pretty much Bostrom's approach. Self-Sampling without Self-Indication. I know it's wrong but I don't have any better approach to take.
Why do you reject self-indication? As far as I can recall the only argument Bostrom gave against it was that he found it unintuitive that universes with many observers should be more likely, with absolutely no justification as to why one would expect that intuition to reflect reality. That's a very poor argument considering the severe problems you get without it.
I suppose you might be worried about universes with many unmangled worlds being made more likely, but I don't see what makes that bullet so hard to bite either.
Wasn't one of the conclusions we arrived at in the quantum mechanics sequence that "observer" was a nonsense, mystical word?
I might add, for the benefit of others, that self-sampling forbids playing favourites among which observers to believe that you are in a single universe (beyond what is actually justified by the evidence available), and self-indication forbids the same across possible universes.
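One way to make the two positions in this exchange concrete (the numbers w and z are illustrative, and this is my formalization of the two rules as described in the thread, not anyone's endorsed model): count surviving observers per world, then condition either without self-indication, as Eliezer says he does, or with it, as simon advocates.

```python
w = 0.01  # illustrative prior: P(the LHC would destroy Earth)
z = 0.2   # fraction of branches in which the LHC fails, independent of W

# Observers who survive and see a failure, per unit of prior probability:
# in a W-world only the failing branches still contain observers;
# in a not-W world every branch does.
obs_fail_W, obs_total_W = z, z
obs_fail_notW, obs_total_notW = z, 1.0

# Self-sampling without self-indication (as Eliezer describes his approach):
# keep P(W) fixed, then treat yourself as a random observer within each world.
ssa = (w * obs_fail_W / obs_total_W) / (
    w * obs_fail_W / obs_total_W + (1 - w) * obs_fail_notW / obs_total_notW)

# With self-indication (simon's approach): weight worlds by observer count.
sia = (w * obs_fail_W) / (w * obs_fail_W + (1 - w) * obs_fail_notW)

print(f"without self-indication: P(W | observe failure) = {ssa:.4f}")  # ~0.048
print(f"with self-indication:    P(W | observe failure) = {sia:.4f}")  # 0.0100
```

On the first rule, observed failures are anthropic evidence for W; on the second, the update cancels exactly, which is simon's result.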
Nominull: It's a bad habit of some people to say that reality depends on, or is relative to observers in some way. But even though observers are not a special part of reality, we are observers and the data about the universe that we have is the experience of observers, not an outside...
in a previous [comment] in this thread I argued that one should be surprised by externally improbable survival, at least in the sense that it should make one increase the probability assigned to alternative explanations of the world that do not make survival so unlikely.
Simon, I think that the previous comment you refer to was the smartest thing anyone has said in this comment section. Instead of continuing to point out the things you got right, I hope you do not mind if I point out something you got wrong, namely,
Richard: your first criticism has too...
Allan: your intuition is wrong here too. Notice that if Zeus were to have independently created a zillion people in a green room, it would change your estimate of the probability, despite being completely unrelated.
I don't see how, unless you're told you could also be one of those people.
Benja: Allan, you are right that if the LHC would destroy the world, and you're a surviving observer, you will find yourself in a branch where LHC has failed, and that if the LHC would not destroy the world and you're a surviving observer, this is much less likely. But contrary to mostly everybody's naive intuition, it doesn't follow that if you're a surviving observer, LHC has probably failed.
I don't believe that's what I've been saying; the question is whether the LHC failing is evidence for the LHC being dangerous, not whether surviving is evidence for the LHC having failed.
Richard, obviously if F does not imply S due to other dangers, then one must use method 2:
P(W|F,S) = P(F|W,S)P(W|S)/P(F|S)
Let's do the math.
A comet is going to annihilate us with a probability of (1-x) (outside view) if the LHC would not destroy the Earth, but if the LHC would destroy the Earth, the probability is (1-y) (I put this change in so that it would actually have an effect on the final probability)
The LHC has an outside-view probability of failure of z, whether or not W is true
The universe has a prior probability w of being such that the LHC if it...
Err... I actually did the math a silly way, by writing out a table of elementary outcomes... not that that's silly itself, but it's silly to get input from the table to apply to Bayes' theorem instead of just reading off the answer. Not that it's incorrect of course.
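For what it's worth, simon's comet variant can be worked through by enumerating the elementary outcomes directly (the symbols x, y, z, w are his; the numeric values are placeholders):

```python
x, y = 0.9, 0.5  # placeholder comet-survival probabilities in not-W and W worlds
z = 0.2          # outside-view probability that the LHC fails, independent of W
w = 0.01         # prior probability that the LHC would destroy Earth (W)

# Given F, the LHC never fired, so surviving only requires escaping the comet.
p_W_F_S = w * z * y           # P(W, F, S)
p_notW_F_S = (1 - w) * z * x  # P(not-W, F, S)

p_W_given_F_S = p_W_F_S / (p_W_F_S + p_notW_F_S)
print(f"P(W|F,S) = {p_W_given_F_S:.4f}")
print(f"algebra:   w*y / (w*y + (1-w)*x) = {w * y / (w * y + (1 - w) * x):.4f}")
# z cancels out: extra failures add nothing; only the comet asymmetry (y != x)
# moves the posterior away from the prior, the effect simon built in on purpose.
```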
Allan: I don't believe that's what I've been saying; the question is whether the LHC failing is evidence for the LHC being dangerous, not whether surviving is evidence for the LHC having failed.
I was trying to restate in different terms the following argument for failure to be considered evidence:
The intuition on my side is that, if you consider yourself a random observer, it's amazing that you should find yourself in one of the extremely few worlds where the LHC keeps failing, unless the LHC is dangerous, in which case all observers are in such a world....
My prior probability for the existence of a secret and powerful crackpot group willing to sabotage the LHC to prevent it from "destroying the world" is larger than my prior probability for the LHC-actually-destroying-the-world scenarios being true
Alejandro has a good point.
Benja: But it doesn't follow that you should conclude that the other people are getting shot, does it?
I'm honestly not sure. It's not obvious to me that you shouldn't draw this conclusion if you already believe in MWI.
(Clearly you learned nothing about that, because whether or not they get shot does not affect anything you're able to observe.)
It seems like it does. If people are getting shot then you're not able to observe any decision by the guards that results in you getting taken away. (Or at least, you don't get to observe it for long - I don't think the slight time lag matters much to the argument.)
I did a calculation here:
http://tinyurl.com/3rgjrl
and concluded that I would start to believe there was something to the universe-destroying scenario after about 30 clear, uncorrelated mishaps (even when taking a certain probability of foul play into account).
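The linked calculation isn't reproduced here, but a sketch of that kind of model (all priors below are hypothetical, and the anthropic hypothesis is taken at face value as predicting failure with probability 1 for surviving observers) shows why foul play raises the threshold so much: a saboteur predicts failures just as well and starts with a far larger prior.

```python
# Hypothetical priors -- not the linked calculation's actual numbers
p_kill = 1e-9  # the LHC is a world-destroyer (anthropic explanation)
p_sab = 1e-6   # a saboteur is quietly breaking the LHC
p_normal = 1 - p_kill - p_sab

p_fail = {"kill": 1.0, "sab": 1.0, "normal": 0.2}  # per-attempt failure probability

def posteriors(n_failures):
    """Posterior over the three hypotheses after n observed failures (Bayes' rule)."""
    joint = {h: prior * p_fail[h] ** n_failures
             for h, prior in [("kill", p_kill), ("sab", p_sab), ("normal", p_normal)]}
    total = sum(joint.values())
    return {h: v / total for h, v in joint.items()}

for n in (10, 20, 30):
    print(n, {h: f"{p:.3g}" for h, p in posteriors(n).items()})
# By n = 30 the "normal" hypothesis is dead, but sabotage still beats the
# world-destroyer a thousand to one: under these priors the anthropic
# explanation never overtakes foul play, no matter how many failures pile up.
```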
...Allan, sorry for the delay in replying. Hopefully tomorrow. (In my defense, I've spent the whole day seriously thinking about the problem ;-))
OK, I've finally had a little time to go over these comments and I am now persuaded to take the position of simon and Benja Fallenstein. I'd already decided to be a Presumptuous Philosopher and accept self-indication, and this just supports that further.
To me, the problem is essentially the same as the following: You are one of 10,000 people who have been taken to a prison. Nobody has explained why. Every morning, the guards randomly select 9/10 of the remaining prisoners and take them away, without explanation. Among the prisoners, there are two theories: one faction thinks that the people taken away are set free. The other faction thinks that they are getting executed.

It is the fourth morning. You're still in prison. The nine other people who remained have just been taken away. Now, if the other people...
Okay, after reading several of Nick Bostrom's papers and mulling about the problem for a while, I think I may have sorted out my position enough to say something interesting about it. But now I'm finding myself suffering from a case of writer's block in explaining it, so I'll try to pull a small-scale Eliezer and say it in a couple of hiccups, rather than one fell swoop :-)
I have been significantly wrong at least twice in this thread, the first time when I thought everybody was reasoning from the same definitions as me, but getting their math wrong, and th...
It may be silly to continue this here, since I'm not sure anybody's still reading, but at least I'm writing it down at all this way, so... here's "Nick's Sleeping Beauty can be Dutch Booked" (by Nick's own rules)
In his Sleeping Beauty paper, Nick considers the ordinary version of the problem: Beauty is awakened on Monday. An hour later, she is told that it is Monday. Then she is given an amnesia drug and put to sleep. A coin is flipped. If the coin comes up tails, she is awakened again on Tuesday (and can't tell the difference from Monday). Otherwise...
So if I think that (something like) the Self-Indication Assumption is correct, what about Nick's standard thought experiment in which the silly philosopher thinks she can derive the size of the cosmos from the fact she's alive?
Well, the experiment does worry me, but I'd like to note that self-sampling without self-indication produces, in fact, a very similar result (if the reference class is all conscious observers, which Nick's version of the experiment seem to assume). I give you The Presumptuous Philosopher and the Case of the Twin Stars:
Physicists ha...
In my previous comment, I mentioned my worry that accepting observer self-sampling without self-indication means that you've been suckered into taking conscious observation as an ontological primitive. (Also, I've been careful not to use examples that involve the size of the cosmos.) I would like to suggest that instead of a prior over observer-moments in possible worlds, we start with a prior over space-time-Everett locations in possible worlds. If all possible worlds we consider have the same set of space-time-Everett locations, and we have a prior P0 over...
So what if we are uncertain about the size of the universe (so that its size depends on which possible world we are in)? Then we are faced with the same question as before: Should we treat finding ourselves in bigger universes as more probable a priori, or not?
Formally, the question we face is, if we have a prior P0 over possible worlds, what should our prior over (possible world, space-time-Everett location) pairs be?
Physical self-sampling without self-indication: P((w,x)) = P0(w) / number of possible locations in world w... Physical self-sampling with p...
Unfortunately, physical self-sampling without self-indication has odd consequences of its own. Consider the following thought experiment:
Physicists have conclusively figured out what the theory of everything is. We know roughly how the cosmos will behave until a trillion years into the future. However, it's still unclear what will happen at this point: either (T1) the universe will end, or (T2) the universe will continue for another trillion trillion years, but be unable to support intelligent life. A hard mathematical calculation can show which of these...
As you know, I don't spend much time worrying about the Large Hadron Collider when I've got much larger existential-risk-fish to fry.
----
After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?
----
The real question, Eliezer, is how many times the LHC would have to fail before you decide to fundamentally change the direction of your research? At some point the most profitable avenue ...
Pardon me, my question skipped far too many inferential steps for me to be comfortable that my meaning is clear. Allow me to query for the underlying premises more clearly:
* Is quantum-destroying-the-entire-universe suicide different to plain quantum-I-killed-myself-in-a-box suicide?
That is to say, does Eliezer consider it rational to optimise for the absolute tally of Everett branches or the percentage of them? In "The Bottom Line" Eliezer gives an example definition of my effectiveness as a rationalist as how well my decision optimizes the per...
At some point the most profitable avenue of research in the pursuit of friendly AI would become the logistics of combining a mechanism for quantum suicide with a random number generator.
Usually learning new true information increases a person's fitness, but learning about the many-worlds interpretation seems to decrease the fitness of many who learn it.
Am I to assume then, Richard, that you consider destroying a branch entirely, using whatever mechanism the LHC was supposedly going to use to destroy the fabric of reality, to be exactly equivalent to a more mundane death in a box? Or did you simply use your cached thought regarding quantum suicide and see a chance to be rude? I've got a hunch that it's the latter, since the implication doesn't logically follow.
Dull, I was hoping something more useful to tell me. The implications of whatever the LHC could supposedly do and in particular why ever someone would ...
OK, my previous comment was too rude. I won't do it again, OK?
Rather than answer your question about fitness, let me take back what I said and start over. I think you and I have different terminal values.
I am going to assume -- and please correct me if I am wrong -- that you assign an Everett branch in which you painlessly wink out of existence a value of zero (neither desirable nor undesirable) and that consequently, under certain circumstances (e.g., at least one alternative Everett branch remains in which you survive) you would prefer painlessly winking...
Richard, I am going to assume ... that you assign an Everett branch in which you painlessly wink out of existence a value of zero (neither desirable nor undesirable)
I'd rather say that people who find quantum suicide desirable have a utility function that does not decompose into a linear combination of individual utility functions for their individual Everett branches-- even if they had to deal with a terrorist attack on all of these branches, say. Surely everybody here would find an outcome undesirable where all of their future Everett branches wink out of existence. So if somebody prefers one Everett branch winking out and one continuing to exist to both continuing to exist, you can only describe their utility funct...
Gawk! "even if they had to deal with a terrorist attack on all of these branches, say" was supposed to come after "Surely everybody here would find an outcome undesirable where all of their future Everett branches wink out of existence." (The bane of computers. On a typewriter, this would not have happened.)
Did that make sense?
Yes, and I can see why you would rather say it that way.
My theory is that most of those who believe quantum suicide is effective assign negative utility to suffering and also assign a negative utility to death, but knowing that they will continue to live in one Everett branch removes the sting of knowing (and consequently the negative utility of the fact) that they will die in a different Everett branch. I am hoping Cameron Taylor or another commentator who thinks quantum suicide might be effective will let me know whether I have described his utility function.
Richard, Cameron Taylor has still not advocated quantum suicide. That straw man is already dead.
I assign quantum suicide a utility of "(utility(death) + utility(alternative))/2 - time wasted - risk of accidentally killing yourself while making the death machine". That is to say, I think it is bloody stupid.
What I do assert is that anyone answering 'yes' to Eliezer's proposal to destroy the universe with an LHC to avert terrorism would also be expected to use the same mechanism to achieve any other goal for which the utility is lower than the cost ...
If it fails 100 times in a row, I'll sue the researchers for killing me a hundred times in all those other realities.
Oh the humanity-ity-ity-ty-ty-y-y-y-y!
Of course, the future repeated failures of the LHC have got to seem non-miraculous, though, since the likelihood of each experiment failing becomes lower the more experiments you plan on running.
Perhaps some sort of funding problem after a collapse of the world financial system, but that's not likely, is it?
It's like applying the idea of quantum immortality and the anthropic principle to my own experience. Wouldn't it make sense for me to observe my apparent immortality in a world where immortality wasn't miraculous, such as when technology had advanced...
Benja: Wrong analogy. You left out a bit. All people who actually HAVE CANCER AND would get a POSITIVE RESULT are killed during the mammography, never to receive the result. Your task is then to condition on first receiving a result and then that result being positive, and alter your estimate of how likely you are to have cancer.
(Depending on how you meant the analogy it may be the negative result + positive actual cancer who are killed. Point is, your analogy completely misses the point. Not every person who takes the test gets a result but you do. That is important.)
Pardon me, ignore that or delete it. I clicked "How Many LHC Failures Is Too Many?" rather than "James" on the recent posts link. Death.
Right, that's it, I'm gonna start cooking up some nitroglycerin and book my Eurostar ticket tonight. Who's with me?
I dread to think of the proportion of my selves that have already suffered horrible gravitational death.
Holger Nielsen sides with this idea.
Playing with quantum suicide?
"Dr. Nielsen and Dr. Ninomiya have proposed a kind of test: that CERN engage in a game of chance, a “card-drawing” exercise using perhaps a random-number generator, in order to discern bad luck from the future. If the outcome was sufficiently unlikely, say drawing the one spade in a deck with 100 million hearts, the machine would either not run at all, or only at low energies unlikely to find the Higgs."
Am I misunderstanding, missing a joke, or did the overwhelming majority here consider the probability that the LHC could destroy the world non-negligible? After reading this article, I wound up looking up articles on collider safety just to make sure I wasn't crazy. My understanding of physics told me that all the talk of LHC-related doomsday scenarios was just some sort of science fiction meme. I was under the impression that artificial black holes would take levels of energy comparable to the big bang, and a micro black hole would be pretty low risk even...
Recently the Large Hadron Collider was damaged by a mechanical failure. This requires the collider to be warmed up, repaired, and then cooled down again, so we're looking at a two-month delay.
Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"
This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction. However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all. (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)
As you know, I don't spend much time worrying about the Large Hadron Collider when I've got much larger existential-risk-fish to fry. However, there's an exercise in probability theory (which I first picked up from E.T. Jaynes) along the lines of, "How many times does a coin have to come up heads before you believe the coin is fixed?" This tells you how low your prior probability is for the hypothesis. If a coin comes up heads only twice, that's definitely not a good reason to believe it's fixed, unless you already suspected from the beginning. But if it comes up heads 100 times, it's taking you too long to notice.
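In the decibel language used elsewhere in this thread, each head is worth 10 log10(2), about 3 decibels, against a fair coin, so the number of heads you tolerate before crying foul is a direct readout of your prior:

```latex
n \times \underbrace{10 \log_{10} \frac{P(\text{heads} \mid \text{fixed})}{P(\text{heads} \mid \text{fair})}}_{\approx\, 3~\text{dB per head}}
\;\geq\; \underbrace{10 \log_{10} \frac{P(\text{fair})}{P(\text{fixed})}}_{\text{prior odds, in dB}}
\qquad\Longrightarrow\qquad
n \;\gtrsim\; \frac{\text{prior odds (dB)}}{3}
```

Two heads buy about 6 decibels, nowhere near enough for any sane prior; a hundred heads buy about 301 decibels, enough to overwhelm prior odds of 10^30 to one against.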
So - taking into account the previous cancellation of the Superconducting Supercollider (SSC) - how many times does the LHC have to fail before you'll start considering an anthropic explanation? 10? 20? 50?
After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?