I don't understand this at all. It's the most complicated scientific experiment humans have ever built; of course it's going to have some problems. If it fails 100 times, that is probably because it was too ambitious a project and the engineers failed, not because of the anthropic principle. Smart physicists don't think it will pose any sort of risk to humanity, but EY seems to mistrust physicists. Why is this?
Somewhat related: why did EY bet that there would be no Higgs in the first place? Of course in hindsight it's clear that there was a Higgs (pretty clear at least) and that the LHC won't end the world, so I'll rephrase the question. What evidence did EY have that led him to believe that there would be no Higgs?
(Sorry if this post is silly, I'm very confused about this matter)
If I understand his remarks (about the Higgs boson bet) correctly, he did not trust the ability of physicists to make that sort of prediction from theory. It is possible that he could have updated beforehand by reading up on the history of the neutrino, positron, charm quark, and neutrino mass discoveries.
I'm still confused: Is the lack of existence of a working Finely Tuned Wobbliebit Machine (FTW Machine) evidence that the FTW Machine destroys the universe? How many times does my FTW Machine need to break down before I should update my estimate of the odds that it will destroy the universe?
Is the lack of existence of a working Finely Tuned Wobbliebit Machine (FTW Machine) evidence that the FTW Machine destroys the universe?
Assuming that you accept certain versions of the anthropic principle, and that you would have expected someone to have figured out how to build the Finely Tuned Wobbliebit Machine quite some time ago, and that mysterious accidents kept happening to people on the verge of figuring out how to build the FTW machine, then yes.
EDIT: It probably also helps the plausibility if there are people who quite strongly believe that the FTW machine would destroy the world, though only if you think such people are likely to have arrived at those beliefs correctly, which wasn't the situation at the time.
The third one isn't strictly required, but you would probably see it if this version of the anthropic principle were at work, and so if you didn't see this, it would be strong counter-evidence.
The second assumption is required because when you're doing anthropic reasoning, you are essentially performing a Bayesian update in response to the evidence that you are alive (and performing a Bayesian update), asking whether that evidence is more likely given one hypothesis than another. And P(I am alive | the FTW machine destroys the universe) ~= P(I am alive | the FTW machine does not destroy the universe), until the point at which the FTW machine would be built. (This was briefly alluded to in the post, with the line "Does this mean we can foresee executing a future probability update, but can't go ahead and update now?")
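To make that condition concrete, here is a minimal sketch with made-up numbers (plain Bayes, setting aside the anthropic subtleties): while the two likelihoods are equal, no amount of observing that you are alive moves the posterior.

```python
# A toy Bayes update on the evidence "I am alive", with made-up numbers.
# The point: if that evidence is equally likely under both hypotheses,
# the posterior equals the prior and no update happens.

def posterior_destroys(prior_destroys, p_alive_given_destroys, p_alive_given_safe):
    """P(machine destroys universe | I am alive), by Bayes' theorem."""
    prior_safe = 1.0 - prior_destroys
    numerator = p_alive_given_destroys * prior_destroys
    denominator = numerator + p_alive_given_safe * prior_safe
    return numerator / denominator

prior = 1e-6  # arbitrary prior that the FTW machine is world-destroying

# Before the machine could have been built: being alive is equally likely
# either way, so observing that you are alive tells you nothing.
print(posterior_destroys(prior, 1.0, 1.0))   # 1e-06 -- unchanged

# Only once the likelihoods differ does the evidence move the posterior at all
# (downward, under this plain conditioning).
print(posterior_destroys(prior, 0.5, 1.0))   # ~5e-07
```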
The FTW Machine wasn't built last week, because I had a cold. It wasn't built the week before, because... &etc.
This means that if I continue to procrastinate for more and more unlikely reasons, then I have evidence that cleaning off my desk destroys the universe?
This means that if I continue to procrastinate for more and more unlikely reasons, then I have evidence that cleaning off my desk destroys the universe?
Well, yes but.
Because really, you continuing to procrastinate for more and more unlikely (stated) reasons is also evidence that superintelligent dolphins living on Venus are employing mind control to prevent you from cleaning off your desk, in order to further their nefarious schemes against the wild fleets of caribou on Mercury. This hypothesis does increase in probability, but there's a hypothesis with a much higher prior probability (you're just lazy) which will always dominate it. It counts as evidence, but on its own, it's never going to become a hypothesis likely enough to warrant serious consideration.
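To put rough numbers on that (made up for illustration): posterior odds are prior odds times the likelihood ratio, and the prior gap swamps any plausible likelihood ratio.

```python
# Made-up numbers: posterior odds = prior odds * likelihood ratio.
# Even if the observed procrastination were 10x more likely under the
# "dolphin mind control" hypothesis than under "you're just lazy",
# the gap in priors dominates completely.

prior_lazy = 0.5
prior_dolphins = 1e-15   # arbitrary tiny prior for the exotic hypothesis

likelihood_ratio = 10.0  # assumed evidence favoring dolphins over laziness

prior_odds = prior_dolphins / prior_lazy
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)    # 2e-14 -- still utterly negligible
```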
Going back to the original point: Why do breakdowns of the LHC provide evidence that the LHC destroys the universe? The 'destroys the universe' hypothesis gains no ground on either the 'sabotage' hypothesis or the 'engineers building something for the first time ever make errors' hypothesis.
The only thing it does is falsify the 'Everything goes off without a hitch' hypothesis, and makes the set of 'everything goes off with n or fewer hitches' hypotheses less likely.
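To make this concrete, here is a rough sketch with made-up numbers: an observation that is roughly equally likely under the 'sabotage'/'engineering errors' hypotheses and the 'destroys the universe' hypothesis leaves their odds ratio untouched, even though it eliminates 'everything goes off without a hitch'.

```python
# Toy numbers: the observation "the LHC broke down" rules out the
# "no hitches" hypothesis but is assumed equally likely under the
# remaining two, so their odds ratio does not move.

priors = {
    "no hitches": 0.5,
    "ordinary engineering trouble / sabotage": 0.499999,
    "successful run destroys the world": 1e-6,
}
p_breakdown = {
    "no hitches": 0.0,   # a breakdown flatly contradicts this hypothesis
    "ordinary engineering trouble / sabotage": 0.8,
    "successful run destroys the world": 0.8,
}

unnormalized = {h: priors[h] * p_breakdown[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: v / total for h, v in unnormalized.items()}

before = priors["successful run destroys the world"] / priors["ordinary engineering trouble / sabotage"]
after = posteriors["successful run destroys the world"] / posteriors["ordinary engineering trouble / sabotage"]
print(before, after)  # identical ratios: the breakdown gains no ground for "destroys the world"
```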
Unless the breakdowns are such that they would be unlikely under those hypotheses, they don't gain ground. I do seem to recall hearing something once about a sandwich getting dropped into a piece of the apparatus, which screwed things up for a while. That may not have actually happened, but that's the sort of event that would (under certain versions of the anthropic principle) provide evidence that the LHC destroys the world.
ADDITION: It would also be strong evidence if the LHC suffered these sorts of breakdowns when we attempted certain types of collisions, but not others.
Regarding your edit: I certainly agree that assertions that are quite strongly believed by people who are likely to have arrived at those beliefs correctly tend to be more plausible than assertions that aren't. But that has nothing to do with whether any given observed fact is evidence for such a belief.
You're correct. I was thinking in terms of "what evidence would be required for me to conclude that the FTW machine destroys the universe?", and was visualizing other people coming to that conclusion for theoretical reasons as assisting in locating that hypothesis.
I nominate this for one of the weakest posts ever, and not because the LHC has been operating normally for some time now (if not at full luminosity). It's weak because it privileges a hypothesis: specifically, it privileges Everettian reasons over many more likely ones for a sequence of failures that machinery this complex (and this easy to sabotage) might have suffered.
First of all, this has nothing to do with the Everett interpretation, and failures of the LHC are evidence that its successful start would cause the end of the world in the same sense that a coin toss resulting in "heads" is evidence that "tails" would kill you. (If you toss a coin a million times while thoroughly investigating and preventing any cause of significant bias, and it always comes up "heads", this starts looking like a compelling argument to stop tossing the coin; maybe "tails" triggers a gun.)
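As a back-of-the-envelope version of that analogy (my own toy numbers): if you condition on being around to observe an outcome at all, an unbroken run of heads carries an enormous Bayes factor toward the lethal-tails hypothesis. Whether that conditioning is legitimate is exactly what gets disputed further down the thread.

```python
# Toy version of the coin analogy. Conditioning on "I survived to observe
# an outcome at all": every surviving observer in the lethal-tails world
# has seen only heads, while in the safe world an all-heads run of length
# n has probability 0.5**n.

n = 50  # 50 tosses keeps the numbers readable; a million heads would be astronomically stronger

p_all_heads_given_lethal = 1.0      # survivors can only have seen heads
p_all_heads_given_safe = 0.5 ** n   # ~8.9e-16

bayes_factor = p_all_heads_given_lethal / p_all_heads_given_safe
print(bayes_factor)  # ~1.1e15 in favour of "tails triggers a gun"
```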
Privileging of a hypothesis means assigning it a probability that is too high. The post was actually responding to people who were privileging that hypothesis after just one failure, and it considered the quantitative nature of such probability judgments: is there a number of failures such that it constitutes sufficient evidence for this hypothesis to become plausible? How many failures are too many? At the point where you do have enough evidence, the hypothesis is no longer unfairly privileged, it's pushed up by the strength of evidence, distinguished from alternative explanations.
The anthropic effect can be distinguished from too-complicated-machinery or sabotage explanations once people have worked long enough on resolving the technical and security difficulties. Suppose people had been trying to make the LHC and similar machines work for 1000 years and never succeeded, all the while having a very clear theoretical understanding of how it works, and maybe succeeding in running certain experiments, but with the machinery always failing whenever they decided to run certain other kinds of experiments. This would be the kind of miracle where "complex machinery" no longer works as a feasible explanation, while the anthropic principle seems to fit.
We didn't observe an impossible number of LHC failures, so the hypothesis didn't become more probable, but the general idea (which has nothing to do with LHC) is interesting.
First of all, this has nothing to do with the Everett interpretation
If you toss a coin a million times while thoroughly investigating and preventing any cause of significant bias, and it always comes up "heads", this starts looking like a compelling argument to stop tossing the coin; maybe "tails" triggers a gun.
First, how do you reconcile your second statement with the first one? I must be missing something. Second, if anthropics save us from ourselves via quantum immortality, that's a good reason to be less careful, not stop tossing the coin.
Suppose people had been trying to make the LHC and similar machines work for 1000 years and never succeeded
I'm sure there are plenty of examples of problems which ended up being much harder than they appeared but were eventually solved (Fermat's Last Theorem? Human flight?) or will eventually be solved (fusion energy? Machine vision? You name some). All are arguments for the anthropic principle... until they no longer are. They all have specific reasons for their failures, unrelated to anthropics. I would keep looking for those reasons and ignore anthropics altogether as an unproductive hypothesis.
how do you reconcile your second statement with the first one?
What the Everett interpretation gives you is some sense of "actuality" of the hypotheticals, but when thinking about possible futures you don't need all (or any!) of the hypothetical possibilities to be "actual". Not knowing which possibility obtains results in the same line of argument as assuming that they all obtain.
Assuming you are killed if a coin comes up "tails", only the hypotheticals where all coins fall "heads" will contain you observing the coin tosses (so you reason before starting the experiment); so if you do observe that, the hypothesis that "tails" is lethal stands. If, on the other hand, it's not lethal, then your observing the coins is possible with other tossing outcomes, so your observing something else falsifies the lethal-tails hypothesis.
if anthropics save us from ourselves via quantum immortality, that's a good reason to be less careful, not stop tossing the coin.
There is no saving; the probability of survival is being reduced. It would be very unlikely to survive a million coin tosses if "tails" is lethal (so you reason before tossing the first coin), but even if you do happen to survive that long, you would still risk being killed with each subsequent coin toss, so you shouldn't keep doing that even if a million "heads" happens to be your observation (you decide this in advance).
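A quick arithmetic sketch of that last point (toy numbers): if "tails" is lethal, the chance of surviving the next k tosses is 0.5**k no matter how many tosses you have already survived.

```python
# If "tails" is lethal, each further toss is an independent 50% risk of death,
# regardless of how long the streak of survival has been so far.

def p_survive_next(k, p_heads=0.5):
    """Probability of surviving the next k tosses."""
    return p_heads ** k

print(p_survive_next(1))   # 0.5
print(p_survive_next(20))  # ~9.5e-07 -- continuing is still a very bad idea
```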
I'm not sure why you expect to see LHC failures in the past instead of, e.g., either a failure to attain a sufficient level of technological development to build the LHC, or a vacuum fluctuation preventing the destruction of the world. If you wish, a fluctuation which looks just like the Higgs.
It'd be trivial to reformulate the laws of physics so that anyone who doesn't observe some interaction dies of vacuum decay.
edit: Also, if you adjust the probability of theories based on the improbability of your existence given a theory, using Bayes' theorem, this anthropic consideration cancels out.
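A sketch of the cancellation being claimed here, using the same toy coin setup as above: under plain conditioning on the outcome "I observe n heads", both hypotheses assign it the same probability, so the likelihood ratio is 1 and nothing updates.

```python
# Plain (non-anthropic) conditioning on the event "I observe n heads":
# under the lethal-tails hypothesis that is the only branch in which I
# observe anything, and it has probability 0.5**n; under the safe
# hypothesis the same outcome also has probability 0.5**n.

n = 50
p_observe_all_heads_given_lethal = 0.5 ** n
p_observe_all_heads_given_safe = 0.5 ** n

print(p_observe_all_heads_given_lethal / p_observe_all_heads_given_safe)  # 1.0 -- no update
```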
Today's post, How Many LHC Failures Is Too Many? was originally published on 20 September 2008. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Say It Loud, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.