From an actual physicist:
Chang Kee Jung, a neutrino physicist at Stony Brook University in New York, says he'd wager that the result is the product of a systematic error. "I wouldn't bet my wife and kids because they'd get mad," he says. "But I'd bet my house."
I'll take bets at 99-to-1 odds against any information propagating faster than c. Note that this is not a bet for the results being methodologically flawed in any particular way, though I would indeed guess some simple flaw. It is just a bet that when the dust settles, it will not be possible to send signals at a superluminal velocity using whatever is going on - that there will be no propagation of any cause-and-effect relation at faster than lightspeed.
My real probability is lower, but I think that anyone who'd bet against me at 999-to-1 will probably also bet at 99-to-1, so 99-to-1 is all I'm offering.
I will not accept more than $20,000 total of such bets.
I'll take that bet, for a single pound on my part against 99 from Eliezer.
(explanation: I have a 98-2 bet with my father against the superluminal information propagation being true, so this sets up a nice little arbitrage).
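The arbitrage can be checked in a few lines. One assumption, since the comment doesn't spell it out: the 98-2 bet with the father has the commenter staking 98 to win 2 on "no superluminal information propagation".

```python
# Sketch of the claimed arbitrage. Assumption (not stated in the comment):
# the 98-2 bet with his father has him staking 98 to win 2 on "no FTL".
def net_payoff(ftl_is_real: bool) -> int:
    father = -98 if ftl_is_real else 2    # against FTL: 98 staked to win 2
    eliezer = 99 if ftl_is_real else -1   # for FTL: 1 staked to win 99
    return father + eliezer

# However the physics turns out, the two bets together net +1:
print(net_payoff(True), net_payoff(False))  # 1 1
```

Under that reading it really is a riskless pound either way, which is presumably the joke.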
Actually, what is the worst that could happen? It's not [the structure of the universe is destabilized by the breakdown of causality], because that would have already happened if it were going to.
The obvious one would be [Eliezer loses $20,000], except that would only occur in the event that it were possible to violate causality, in which case he would presumably arrange to prevent his past self from making the bet in the first place, yeah? So really, it's a win-win.
Unless one of the people betting against him is doing so because ve received a mysterious parchment on which was written, in ver own hand, "MESS WITH TIME."
It's not about transmitting information into the past - it's about the locality of causality. Consider Judea Pearl's classic graph with SEASONS at the top, SEASONS affecting RAIN and SPRINKLER, and RAIN and SPRINKLER both affecting the WETness of the sidewalk, which can then become SLIPPERY. The fundamental idea and definition of "causality" is that once you know RAIN and SPRINKLER, you can evaluate the probability that the sidewalk is WET without knowing anything about SEASONS - the universe of causal ancestors of WET is entirely screened off by knowing the immediate parents of WET, namely RAIN and SPRINKLER.
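The screening-off claim is easy to see in a toy simulation of Pearl's graph. The probabilities below are made up for illustration; the point is only that once RAIN and SPRINKLER are fixed, SEASON tells you nothing further about WET.

```python
import random

# Toy structural model of Pearl's graph: SEASON -> RAIN, SEASON -> SPRINKLER,
# RAIN -> WET <- SPRINKLER, WET -> SLIPPERY. All probabilities are invented.
def sample(season):
    rain = random.random() < (0.6 if season == "winter" else 0.1)
    sprinkler = random.random() < (0.1 if season == "winter" else 0.5)
    wet = rain or sprinkler          # WET depends only on its two parents
    slippery = wet and random.random() < 0.8
    return rain, sprinkler, wet, slippery

def p_wet_given(season, rain, sprinkler, n=100_000):
    """Estimate P(WET | SEASON, RAIN, SPRINKLER) by rejection sampling."""
    wet_count = match_count = 0
    for _ in range(n):
        r, s, w, _ = sample(season)
        if r == rain and s == sprinkler:
            match_count += 1
            wet_count += w
    return wet_count / match_count

# Conditioned on RAIN and SPRINKLER, the season is irrelevant: WET's
# parents block every path from SEASON.
print(p_wet_given("winter", True, False))  # 1.0
print(p_wet_given("summer", True, False))  # 1.0
```

In this toy model WET is a deterministic function of its parents, so the conditional is exactly the same in winter and summer; with noisy parents the two estimates would still converge to the same value.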
Right now, we have a physics where (if you don't believe in magical collapses) the amplitude at any point in quantum configuration space is causally determined by its immediate neighborhood of parental points, both spatially and in the quantum configuration space.
In other words, so long as I know the exact (quantum) state of the universe for 300 meters around a point, I can predict the exact (quantum) future of that point 1 microsecond into the future without knowing anything whatsoever about the rest of the universe. If I know the exact state for 3 meters around,...
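The 300 metres / 1 microsecond pairing is just light-speed locality; a quick check:

```python
# Light-travel-time check of the locality radius in the comment.
c = 299_792_458          # speed of light in vacuum, m/s
radius = 300.0           # metres of known quantum state around the point

# Nothing outside a 300 m sphere can influence the point within
# radius / c seconds: about one microsecond.
print(radius / c)        # ~1.0007e-06 s

# Scaling down: knowing 3 m around buys only ~10 nanoseconds of prediction.
print(3.0 / c)           # ~1.0007e-08 s
```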
This is starting to remind me of Kant. Specifically, his attempt to provide an a priori justification for the then-known laws of physics. That made him look incredibly silly once relativity and quantum mechanics came along.
And Einstein was better at the same sort of philosophy and used it to predict new physical laws that he thought should have the right sort of style (though I'm not trying to do that, just read off the style of the existing model). But anyway, I'd pay $20,000 to find out I'm that wrong - what I want to eliminate is the possibility of paying $20,000 to find out I'm right.
People in this thread with physics backgrounds should say so, so that I can update in your direction.
When I looked at the paper, my impression was that this is a persistent result in the experiment, which would explain publication: the experiment's results will be public, and someone, eventually, will notice this in the data. Better that CERN officially notice it than Random High Energy Physicist. People relying on CERN's decision to publish as evidence may want to update to account for this.
Let's say you're a physicist maximizing utility. It's pretty embarrassing to publish results with mistakes in them, and the more important the results, the more embarrassing it would be to announce results later shown to be the product of some kind of incompetence. So one can usually expect published results of serious import to have been checked over and over for errors.
But the calculus changes when we introduce the incentive of discovering something before anyone else. This is particularly the case when the discovery is likely to lead to a Nobel prize. In this case a physicist might be less diligent about checking the work in order to make sure she is the first out with the new results.
Now in this case CERN-OPERA is pretty much the only game in town. No one else can measure this many neutrinos with this kind of accuracy. So it would seem like they could take all the time they needed to check all the possible sources of error. But if Hyena is right that OPERA's data was shortly going to be public, then they risked someone outside CERN-OPERA noticing the deviation from the expected delay and publishing the results first. By itself that is pretty embarrassing, and it introduces some controversy ...
Relevant: The Beauty of Settled Science
I'm waiting for another experiment before I get too worked up about this result.
That MINOS saw something like this before is pretty interesting. Another thing to consider is SN 1987A: at the speed the CERN neutrinos were traveling, we should have detected the neutrinos from SN 1987A about four years before the supernova was visible.
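A back-of-the-envelope version of that SN 1987A figure, using OPERA's reported fractional speed excess of roughly 2.5e-5 and a distance to the supernova of about 168,000 light-years:

```python
# If SN 1987A's neutrinos had the same fractional speed excess OPERA
# reports, their head start over the light works out in years directly.
excess = 2.5e-5            # (v - c)/c, roughly as reported by OPERA
distance_ly = 168_000      # approximate distance to SN 1987A, light-years

lead_years = distance_ly * excess   # arrival lead of the neutrinos
print(round(lead_years, 1))         # 4.2
```

That is where the "four years early" figure comes from; in fact the SN 1987A neutrinos arrived only hours before the light, consistent with the photons' being delayed inside the star, not with any speed excess.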
The fact that this was made public like this suggests they are very confident they haven't made any obvious errors.
This paper discusses the possibility of neutrino time travel.
There is a press conference at 10 AM EST.
I'll say 0.9 non-trivial experimental set-up error (no new physics but nothing silly either), 0.005 something incompetent or fraudulent. The remainder is new physics: "something I don't know about," "neutrinos sometimes travel backwards in time," and "special relativity is wrong," at 8000:800:1.
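One way to read those numbers (an interpretation; the comment is terse): the 0.095 remainder is split 8000:800:1 among the three new-physics options.

```python
# Normalizing the 8000:800:1 odds over the probability mass left after
# 0.9 (setup error) and 0.005 (incompetence/fraud). The three-way split
# being over the remainder is an assumption about the comment's meaning.
odds = {
    "new physics I don't know about": 8000,
    "neutrinos sometimes travel backwards in time": 800,
    "special relativity is wrong": 1,
}
remainder = 1 - 0.9 - 0.005            # ~0.095
total = sum(odds.values())
probs = {k: remainder * v / total for k, v in odds.items()}
for k, p in probs.items():
    print(f"{k}: {p:.2e}")
print(sum(probs.values()) + 0.9 + 0.005)  # sums back to ~1.0
```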
Perhaps the end of the era of the light cone and beginning of the era of the neutrino cone?
Does that work? Once you beat light don't you just win the speed race? The in-principle upper bound on what can be influenced just disappears. The rest is just engineering. Trivial little details of how to manufacture a device that emits a finely controlled output of neutrinos purely by shooting other neutrinos at something.
I strongly suspect that this is due to human error (say 95%). A few people in this thread are batting around much higher probabilities, but given that these aren't a bunch of crackpots but researchers at CERN, that seems like overconfidence. (1-10^-8 is really, really confident.) The strongest evidence that this is an error is that the neutrinos aren't arriving much faster than the speed of light, but only a tiny bit over.
I'm going to now proceed to list some of the 5%. I don't know enough to discuss their likelihood in detail.
1) Neutrinos oscillating into a ...
Ok. I think there's one thing that should be stated explicitly in this thread that may not have been getting enough attention (and about which, in my own comments, I probably should have been more explicit).
The options are not "CERN screwed up" and "neutrinos can move faster than c." I'm not sure about the actual probabilities but P(neutrinos can move faster than c|CERN didn't screw up) is probably a lot less than P(Weird new physics that doesn't require faster than light particles|CERN didn't screw up).
My probability distribution of explanations:
Having read the preprint, about the only observation I have is that I think you’re overestimating the fraud hypothesis.
There’s almost a whole page of authors, the preprint describes only the measurement, and finishes with something like (paraphrasing) “we’re pretty sure of seeing the effect, but given the consequences of this being new physics we think more checking is needed, and since we’re stumped trying to find other sources of error, we publish this to give others a try too; we deliberately don’t discuss any possible theoretical implications.”
At the very least, this is the work of the aggregate group trying very hard to “do it right”; I guess there could still be one rogue data manipulator, but I would give much less than 1 in 20 that nobody else in the group noticed anything funny.
The comparison to parapsychology is a really poor one in this case, for what should be pretty obvious reasons. For example, we know there is no file drawer effect. What we know about neutrino speed so far comes from a) supernova measurements, which contradict these results but measured much lower energy neutrinos, and b) direct measurements that didn't have the sample size or the timing accuracy to reveal the anomaly OPERA discovered.
But more importantly this was a six sigma deviation from theoretical prediction. As far as I know, that is unheard of in parapsychology.
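For reference, the rarity of a six-sigma deviation under a Gaussian error model can be computed with nothing but the standard library:

```python
# Two-sided tail probability of an n-sigma deviation for a Gaussian.
from math import erfc, sqrt

def p_value(sigma):
    return erfc(sigma / sqrt(2))

print(p_value(6))   # ~2e-9: roughly 1 chance in 500 million
```

Of course a six-sigma *statistical* significance says nothing about systematic errors, which is exactly where most commenters here expect the problem to be.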
We cannot treat physics the way we treat psychology.
Relevant updates:
John Costella has a fairly simple statistical analysis which strongly suggests that the OPERA data is statistically significant (pdf). This of course doesn't rule out systematic problems with the experiment, which still seem to be the most likely explanation.
Costella has also proposed possible explanations of the data. See 1 and 2. These proposals focus on the idea of a short-lived tachyon. This sort of explanation helps explain the SN 1987a data. Costella points out that if the muon-neutrino pair is becoming tachyonic through the initial hadron ba...
More relevant papers:
"Neutrinos Must Be Tachyons" (1997)
Abstract: The negative mass squared problem of the recent neutrino experiments from the five major institutions prompts us to speculate that, after all, neutrinos may be tachyons. There are a number of reasons to believe that this could be the case. Stationary neutrinos have not been detected. There is no evidence of right handed neutrinos, which are most likely to be observed if neutrinos can be stationary. They have the unusual property of the mass oscillation between flavors which has not be...
The neutrinos are not going faster than light. P = 1-10^-8
Error caused by some novel physical effect: P = 0.15
Human error accounts for the effect (i.e. no new physics): P = 0.85
This isn't even worth talking about unless you know a serious amount about the precise details of the experiment.
EDIT: Serious updating on the papers Jack links to downthread. I hadn't realised that neutrinos have never been observed going slower than light. P = no clue whatsoever.
There's now a theoretical paper up on the arXiv discussing a lot of these issues. The authors appear to be respected physicists. I have neither the time nor the expertise to evaluate it, but they seem to be claiming a resolution between the OPERA data and the SN 1987A data.
The best short-form critique of this announcement I have seen is the post by theoretical physicist Matthew Buckley on MetaFilter:
After I read that comment I clicked through to his personal website, where I found a nifty layman's explanation of the necessity of dark matter in current cosmological theory:
Matt's web essay on dark matter.
If you don't have time to read his comment, what he says is that the results are not obviously bogus but they are so far-fetched that almost no physicists will find their daily work affected by the provisional...
Sean Carroll has made a second blog post on the topic, to explain why faster-than-light neutrinos do not necessarily imply time travel.
...The usual argument that faster than light implies the ability to travel on a closed loop assumes Lorentz invariance; but if we discover a true FTL particle, your first guess should be that Lorentz invariance is broken. (Not your only possible guess, but a reasonable one.) Consider, for example, the existence of a heretofore unobserved fluid pervading the universe with a well-defined rest frame, that neutrinos interact wit
To quote one of my professors, from the AP release:
...Drew Baden, chairman of the physics department at the University of Maryland, said it is far more likely that there are measurement errors or some kind of fluke. Tracking neutrinos is very difficult, he said.
"This is ridiculous what they're putting out," Baden said, calling it the equivalent of claiming that a flying carpet is invented only to find out later that there was an error in the experiment somewhere. "Until this is verified by another group, it's flying carpets. It's cool, but ..
Forgive my ignorance, but... if distance is defined in terms of the time it takes light to traverse it, what's the difference between "moving from A to B faster than the speed of light" and "moving from B to A"?
Username, you're having a small conversion experience here, going from "causality is local" to "wavefunction collapse is preposterous" to "I understand quantum suicide" to "I'd better sign up for cryonics once I graduate" in rapid succession. It's a shame we can't freeze you right now, and then do a trace-and-debug of your recent thoughts, as a case study.
This was a somewhat muddled comment from Eliezer. Local causality does not imply an upper speed limit on how fast causal influences can propagate. Then he equivocates between locality within a configuration and locality within configuration space. Then he says that if only everyone in physics thought like this, they wouldn't have wrong opinions about how QM works. I can only guess how you personally relate all that to decoherence. And from there, you get to increased confidence in cryonics. It could only happen on Less Wrong. :-)
ETA: Some more remarks:
Locality does not imply a maximum speed. Locality just means that causes don't jump across space to their effects, they have to cross it point by point. But that says nothing about how fast they cross it. You could have a nonrelativistic local quantum mechanics with no upper speed limit. Eliezer is conflating locality with relativistic locality, which is what he is trying to derive from the assumption of locality. (I concede that no speed limit implies a de-facto or practical nonlocality, in that the whole universe would then be potentially relevant for what happens here in the "next moment"; some influence moving at a googol light-years per second might come crashing in upon us.)
Equivocating between locality in a configuration and locality in a configuration space: A configuration is, let's say, an arrangement of particles in space. Locality in that context is defined by distance in space. But configuration space is a space in which the "points" themselves are whole configurations. "Locality" here refers to similarity between whole configurations. It means that the amplitude for a whole configuration is only immediately influenced by the amplitudes for infinitesimally different whole configurations.
Suppose we're talking about a configuration in which there are two atoms, A and B, separated by a light-year. The amplitude for that configuration (in an evolving wavefunction) will be affected by the amplitude for a configuration which differs slightly at atom A, and also by the amplitude for a configuration which differs slightly at atom B, a light-year away from A. This is where the indirect nonlocality of QM comes from - if you think of QM in terms of amplitude flows in configuration space: you are attaching single amplitudes to extended objects - arbitrarily large configurations - and amplitude changes can come from very different "directions" in configuration space.
Eliezer also talks about amplitudes for subconfigurations. He wants to be able to say that what happens at a point only depends on its immediate environment. But if you want to talk like this, you have to retreat from talking about specific configurations, and instead talk about regions of space, and the quantum state of a "region of space", which will associate an amplitude with every possible subconfiguration confined to that region.
This is an important consideration for MWI, evaluated from a relativistic perspective, because relativity implies that a "configuration" is not a fundamental element of reality. A configuration is based on a particular slicing of space-time into equal-time hypersurfaces, and in relativity, no such slicing is to be preferred as ontologically superior to all others. Ultimately that means that only space-time points, and the relations between them (spacelike, lightlike, timelike) are absolute; assembling sets of points into spacelike hypersurfaces is picking a particular reference frame.
This causes considerable problems if you want to reify quantum wavefunctions - treat them as reality, rather than as constructs akin to probability distributions - because (for any region of space bigger than a point) they are always based on a particular hypersurface, and therefore a particular notion of simultaneity; so to reify the wavefunction is to say that the reference frame in which it is defined is ontologically preferred. So then you could say, all right, we'll just talk about wavefunctions based at a point. But building up an extended wavefunction from just local information is not a simple matter. The extended wavefunction will contain entanglement but the local information says nothing about entanglement. So the entanglement has to come from how you "combine" the wavefunctions based at points. Potentially, for any n points that are spacelike with respect to each other, there will need to be "entanglement information" on how to assemble them as part of a wavefunction for configurations.
I don't know where that line of thought takes you. But in ordinary Copenhagen QM, applied to QFT, this just doesn't even come up, because you treat space-time, and particular events in space-time, as the reality, and wavefunctions, superpositions, sums over histories, etc, as just a method of obtaining probabilities about reality. Copenhagen is unsatisfactory as an ontological picture because it glosses over the question of why QM works and of what happens in between one "definite event" and the next. But the attempt to go to the opposite interpretive pole, and say "OK, the wavefunction IS reality" is not a simple answer to your philosophical problems either; instead, it's the beginning of a whole new set of problems, including, how do you reify wavefunctions without running foul of relativity?
Returning to Eliezer's argument, which purports to derive the existence of a causal speed-limit from a postulate of "locality": my critique is as informal and inexact as his argument, but perhaps I've at least shown that this is not as simple a matter as it may appear to the uninformed reader. There are formidable conceptual problems involved just in getting started with such an argument. Eliezer has the essentials needed to think about these topics rigorously, but he's passing over crucial details, and he may thereby be overlooking a hole in his intuitions. In mathematics, you may start out with a reasonable belief that certain objects always behave in a certain way, but then when you examine specifics, you discover a class of cases which work in a way you didn't anticipate.
What if you have a field theory with no speed limit, but in which significant and ultra-fast-moving influences are very rare; so that you have an effective "locality" (in Eliezer's sense), with a long tail of very rare disruptions? Would Eliezer consider that a disproof of his intuitive idea, or an exception which didn't sully the correctness of the individual insight? I have no idea. But I can say that the literature of physics is full of bogus derivations of special relativity, the Born rule, the three-dimensionality of space, etc. This derivation of "c" from Pearlian causal locality certainly has the ingredients necessary for such a bogus derivation. The way to make it non-bogus is to make it deductively valid, rather than just intuitive. This means that you have to identify and spell out all the assumptions required for the deduction.
This may or may not be the result of day 2 of modafinil. :) I don't think it is, because I already had most of the pieces in place, it just took that sentence to make everything fit together. But that is a data point.
Hm, a trace-debug. My thought process over the five minutes that this took place was manipulation of mental imagery of my models of the universe. I'm not going to be able to explain much clearer than that, unfortunately. It was all very intuitive and not at all rigorous, the closest representation I can think of is Feynman's thinking about bal...
http://www.nature.com/news/2011/110922/full/news.2011.554.html
http://arxiv.org/abs/1109.4897v1
http://usersguidetotheuniverse.com/?p=2169
http://news.ycombinator.com/item?id=3027056
Perhaps the end of the era of the light cone and beginning of the era of the neutrino cone? I'd be curious to see your probability estimates for whether this theory pans out. Or other crackpot hypotheses to explain the results.