New high-precision tests carried out by the OPERA collaboration in Italy broadly confirm its claim, made in September, to have detected neutrinos travelling faster than the speed of light. The collaboration today submitted its results to a journal, but some members continue to insist that further checks are needed before the result can be considered sound.

Link: nextbigfuture.com/2011/11/faster-than-light-neutrinos-opera.html

The OPERA Collaboration today sent an updated version of their preprint to the Cornell arXiv, in which they summarize the results of their analysis, expanded with additional statistical tests and including a check performed with 20 additional neutrino interactions collected in the last few weeks. These few extra timing measurements crucially allow the ruling out of some potential unaccounted-for sources of systematic uncertainty, notably ones connected to the knowledge of the proton spill time distribution.

[...]

So what does OPERA find? Their main result, based on the 15,233 neutrino interactions collected in three years of data taking, is unchanged from the September result. The most interesting part of the new publication is instead that they find the 20 new neutrino events (where neutrino speeds are individually measured, as opposed to the combined measurement done with the three-year data published in September) confirm the earlier result: the arrival times appear to be about 60 nanoseconds earlier than expected.
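For a sense of scale, here is a rough back-of-the-envelope conversion of that 60 ns into a fractional speed excess, assuming the ~730 km CERN-to-Gran Sasso baseline usually quoted (a sketch with round numbers, not the collaboration's own calculation):

```python
# Rough scale of the claimed effect (round illustrative numbers, not OPERA's exact values).
c = 299_792_458.0     # speed of light in vacuum, m/s
baseline_m = 730e3    # approximate CERN -> Gran Sasso distance, m
early_s = 60e-9       # neutrinos appear to arrive ~60 ns earlier than light would

light_travel_time = baseline_m / c                # roughly 2.4 milliseconds
fractional_excess = early_s / light_travel_time
print(f"(v - c) / c ≈ {fractional_excess:.2e}")   # roughly 2.5e-05
```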

Link: science20.com/quantum_diaries_survivor/opera_confirms_neutrinos_travel_faster_light-84763

Paper: kruel.co/paper-neutrino-velocity-JHEP.pdf

Previously on LW: lesswrong.com/lw/7rc/particles_break_lightspeed_limit/


From the Science 20 link:

It is necessary here to note that since distance from source to detector and time offsets necessary to determine the travel time of neutrinos have not been remeasured, the related systematics (estimated as well as -possibly- underestimated ones) are unchanged. The measurement therefore is only a "partial" confirmation of the earlier result: it is consistent with it, but could be just as wrong as the other.

Yeah, that's probably where they screwed up. (Bet $10, even odds?)

I take the bet. So if it's determined that an error in distance measurement from source to detector is the source of the problem, I lose, and if another explanation is found (deliberate fraud, not correcting for relativistic effects, some guys show up from the future and explain a new unified quantum string gravity theory that explains all), I win.

€10, even odds?

an error in distance measurement from source to detector is the source of the problem

That's narrower than what I'm offering. For example, the Science 20 article points out a possible error in the refraction index of the Gran Sasso light guide. Your side of the bet (if you still want to take it) could be rephrased as "In whichever explanation is found, they had the source-detector distance and all time offsets right (or knowingly lied about them)".

Also I reserve the right to renegotiate if the exchange rate goes weird before either of us hears the explanation.

I defy this data too!

Really? I'm starting to think maybe they might be correct.


I was thinking about this earlier today, having read/posted about the original findings and the first wave of "here are all the ways the original findings might be incorrect" papers, and then seen this announcement this morning.

For simplicity, let's assume someone's initial probabilities are:

There is a 1% chance they were correct.

There is an 11% chance they made a Type A error.

There is an 11% chance they made a Type B error.

There is an 11% chance they made a Type C error.

There is an 11% chance they made a Type D error.

There is an 11% chance they made a Type E error.

There is an 11% chance they made a Type F error.

There is an 11% chance they made a Type G error.

There is an 11% chance they made a Type H error.

There is an 11% chance they made a Type I error.

This paper establishes that there is essentially a negligible chance they made a Type A error. But there are still 8 ways they could be wrong. If I'm doing the calculations correctly, this means they now have about a 1.12% chance of being correct from the perspective of that someone, because these findings also mean it is more likely that they made a Type B-I error as well.

This is a gross simplification of the problem. There are certainly other possibilities than "correct" and "9 equally likely types of error." But I think this helped me put the size of the task of proving that the neutrinos did go faster than light into better perspective.

I don't think this is an accurate assessment. One of the most obvious error forms was a statistical error (since they were using long neutrino pulses and then using careful statistics to get their average arrival time). That is eliminated in this experiment. Another possible error was that detection of a neutrino could interfere with the chances of detecting other neutrinos which could distort the actual v. observed average (this is a known problem with some muon detector designs). This seemed unlikely, but is also eliminated by the short pulses. They also used this as an opportunity to deal with some other timing issues. Overall, a lot of possible error sources have now been dealt with.
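A toy illustration of the difference, with made-up round numbers (roughly the ~10 μs spills of the original run versus the few-nanosecond bunches reported for the rerun; this is just a sketch of the statistics, not the collaboration's actual analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
true_offset_ns = 60.0  # pretend this is the real early-arrival offset

# Original run (toy version): each neutrino's parent proton left somewhere inside a
# long spill (~10,000 ns here), so a single event says almost nothing; the offset is
# only recovered statistically, by comparing the arrival-time distribution of many
# events against a model of the spill's time profile -- exactly where subtle errors
# could hide.
spill_width_ns = 10_000.0
long_run = true_offset_ns + rng.uniform(0.0, spill_width_ns, size=15_000)
print("statistical estimate (ns):", round(long_run.mean() - spill_width_ns / 2, 1))

# Rerun (toy version): ~3 ns proton bunches, so every single event pins the offset
# down to a few nanoseconds on its own.
bunch_width_ns = 3.0
rerun = true_offset_ns + rng.uniform(0.0, bunch_width_ns, size=20)
print("per-event offsets from 20 short-bunch events (ns):", np.round(rerun, 1))
```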


My understanding is that in the situation above, the probability of being correct stays very low until they deal with almost all of the possible sources of error, and is still low even then.

For instance, dealing with 4 of the 9 different sources of error, the calculations I did give a chance of them being correct of around 1.78%. If they deal with 8 of the 9 different sources of error, there's still only around an 8.33% chance of being correct. (Assuming I calculated correctly as well.)
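Here's a minimal Python sketch of that arithmetic (same toy assumptions as above: a 1% prior on "correct" and nine equally likely error types at 11% each), for anyone who wants to check the numbers:

```python
# Toy renormalization: P(correct) = 1%, nine error types at 11% each;
# condition on having ruled out k of the nine error types.
def p_correct_after_ruling_out(k, p_correct=0.01, n_errors=9, p_each_error=0.11):
    remaining_error_mass = (n_errors - k) * p_each_error
    return p_correct / (p_correct + remaining_error_mass)

for k in (1, 4, 8):
    print(k, f"{p_correct_after_ruling_out(k):.2%}")
# -> roughly 1.1%, 1.8%, and 8.3% for k = 1, 4, 8
```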

Also, I do want to clarify/reiterate that I wasn't trying for that much accuracy. Nine equally likely sources of error is a gross simplification, and I'm not a physicist. I didn't even count individual explanations of error to get to 9, so that assumption was itself probably driven by a heuristic/bias (most likely that (11*9)+1=100). It was more of a rough guess, because I jumped to conclusions in previous threads and wanted to think about it at least a little bit more in this one before establishing an initial position.

All that being said, I'm definitely glad they can address multiple possible sources of error at once. If they do it correctly, that should greatly speed up the turnaround time for finding out more about this.

Are you willing to make a 1:89 bet that they will eventually be proven incorrect?


Are you allowed to make bets with karma upvotes? For instance, is it reasonable to propose "You upvote me once right now. If they confirm that neutrinos are traveling faster than the speed of light, you remove the upvote you gave me and I will upvote you 89 times."

On the one hand, that sounds like an abuse of the karma system. But on the other hand, it also sounds somehow more fun/appropriate than a money bet, and I feel that if you manage to successfully predict FTL this far out you deserve 89 upvotes anyway.

Can other people weigh in on whether this is a good/bad idea?

This definitely sounds like an abuse of the karma system. With this, people could reach high karma levels just by betting, as even if they're wrong there's no downside to this bet.

They should include downvotes in the bet, so that every possible outcome is zero-sum.

It sounded like a bad idea at first, but if the bet is 1 upvote / 1 downvote vs. 89 upvotes/89 downvotes, it could actually be a good use of the karma system. The only way to get a lot of karma would be to consistently win these bets, which is probably as good an indicator for "person worth paying attention to" as making good posts.

I think we should just have a separate prediction market if for some reason we'd rather not use predictionbook.

The last time I looked at PredictionBook the allowed values were integers from 0 to 100, which makes it impossible to really use it for this. Here the meaningful question is whether the probability is .00001 or .0000000001.

I liked this fellow's take.

Miley Cyrus is claiming > 1%, so your objection to PB does not apply. MC might like to distinguish between 1.1% and 1.0%, but this is minor.

If you're recording claims, not betting at odds, then rounding to zero is not a big deal. No one is going to make a million predictions at 1 in a million odds. One can enter it as 0 on PB and add a comment of precise probability. It is plausible that people want to make thousands of predictions at 1 in 1000, but this is an unimportant complaint until lots of people are making thousands of predictions at percent granularity.

An advantage of PB over bilateral bets is that it encourages people to state their true probabilities and avoid the zero-sum game of setting odds. A well-populated market does this, too.

This is (minus the specific numbers, of course, but you too were using them as examples) exactly how I see it.

The most likely error - that of wrong baseline - has not been addressed, so I don't have noticeably improved credence. This is a very small update.

Really? I'm starting to think maybe they might be correct.

The reason I posted this is that I was interested in the reactions it would evoke. It seems that many people here think that any information whatsoever is valuable and that one should update on that evidence.

It is very interesting to see how many people here are very skeptical of those results, even though they are based on comparatively hard evidence and were signed by 180 top-notch scientists. Many of the same people give higher estimates, while demanding little or no evidence, for something that sounds just as simple as faster than light phenomena when formulated in natural language, namely recursive self-improvement.

I think many people here have updated their belief, I did. My initial prior was very low, though, so I still retain a very low probability for FTL neutrinos.

simple as faster than light phenomena when formulated in natural language, namely recursive self-improvement.

Natural language isn't a great metric in this context. Also, recursive self-improvement doesn't in any obvious way require changing our understanding of the laws of physics.

...recursive self-improvement doesn't in any obvious way require changing our understanding of the laws of physics.

Some people think that complexity issues are even more fundamental than the laws of physics. On what basis do people believe that recursive self-improvement would be uncontrollably fast? It is simply easy to believe because it is a vague concept and none of those people have studied the relevant math. The same isn't true for FTL phenomena because many people are aware of how unlikely that possibility is.

The same people who are very skeptical in the case of faster than light neutrinos just make up completely unfounded probability estimates about the risks associated with recursive self-improvement because it is easy to do so, because there is no evidence either way.

Some people think that complexity issues are even more fundamental than the laws of physics.

Sure. And I'm probably one of the people here who is most vocal about computational complexity issues limiting what recursive self-improvement can do. But even then, I don't see them as necessarily in the same category. Keep in mind that the claims that {L, P, NP, co-NP, PSPACE, EXP} are all distinct are conjectural. We can't even prove that L != NP at this point. And in order for this to produce barriers to recursive self-improvement one would likely need even stronger claims.

The same people who are very skeptical in the case of faster than light neutrinos just make up completely unfounded probability estimates about the risks associated with recursive self-improvement because it is easy to do so, because there is no evidence either way.

Well, but that's not an unreasonable position. If I don't have strong evidence either way on a question I should move my estimates close to 50%. That's in contrast to the FTL issue, where we have about a hundred years' worth of evidence all going in one direction, and that evidence includes other observations involving neutrinos.

SN 1987A shows that neutrinos travel at the speed of light almost all of the time, but it does not rule out that they might have velocities that exceed that of light very briefly at the moment they're generated. See here for more. Note that I, like the author of the post I've linked, do not believe that this finding will stand up. It's just that if it does stand up, it will be because the constant-velocity assumption is wrong.

If I don't have strong evidence either way on a question I should move my estimates close to 50%...

That would be more than enough to devote a big chunk of the world's resources to friendly AI research, given the associated utility. But you can't just make up completely unfounded conjectures, then claim that we don't have evidence either way but that the utility associated with a negative outcome is huge and that we should therefore take it seriously. Because that reasoning will ultimately make you privilege random high-utility outcomes over theories based on empirical evidence.

Throwing out a theory as powerful and successful as relativity would require very powerful evidence, and at this point the evidence doesn't fall that way at all.

On the other hand, the bar for GAI becoming a very serious problem is very low. Simply dropping the price of peak human intelligence down to the material and energy costs of a human (which breaks no laws, unless one holds that the mind is immaterial) would result in massive social displacement that would require serious planning beforehand. I don't think it is very likely that we'd see an AI that can laugh at EXPSPACE problems, but all it needs to be is too smart to be easily controlled in order to mess everything up.

s/deny/defy

If everywhere in physics where we say "the speed of light" we instead say "the cosmic speed limit", and from this experiment we determine that the cosmic speed limit is slightly higher than the speed of light, does that really change physics all that much?

We have measured both to higher accuracies than the deviation here. One way to measure the "cosmic speed limit" is by measuring how things like energy transform when you approach that speed limit, for example, which happens in particle accelerators all day every day.

I'm aware that we've calculated 'c' both by directly measuring the speed of light (to high precision) and indirectly via various formulas from relativity (we've directly measured time dilation, for instance, which lets you estimate c), but are the indirect measurements really accurate to parts per million?

Fortunately for me, Wikipedia turned out to provide good citations. In 2007 some clever people managed to measure the c in time dilation to a precision of about one part in 10^8.
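For anyone wondering how a time-dilation measurement bears on c at all: measuring the dilation factor γ at an independently known speed v pins down c through the standard relation (plain textbook algebra, not anything specific to that 2007 experiment):

```latex
\gamma(v) \;=\; \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
\qquad\Longrightarrow\qquad
c \;=\; \frac{v}{\sqrt{1 - 1/\gamma^{2}}}
```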

Very good sir!

Then what would be constraining the travel speed of light in a vacuum?

It is possible that a photon's energy/mass is doing so. I believe this was discussed in detail in the previous post.

Dark matter? But as explained by Manfred above, we have estimates of "c the speed of light" and of "c the fundamental constant of GR/QM" that match each other but don't match the OPERA result... so I'm just noticing I'm confused for now, and waiting for further tests (like the Fermilab or Japanese teams who said they'll try to reproduce it).

I count a "confirmation" by the same group working with the same equipment as only a small amount of evidence. I will have to see confirmation by a separate experiment (perhaps something involving NuMI at Fermilab) before giving this result much more credibility.

Edit to add: MINOS is planning to make a measurement, although currently they are still preparing the detector. They saw a similar result, but at a much lower confidence level, several years ago.

I remember a claim that measurements of a number of physical constants are subject to anchoring: a previous result leads researchers to stop scrutinizing for errors once they reproduce it, so the correct value is only slowly converged upon. Perhaps this is something similar, where a high-profile result makes researchers look for that kind of result.

I'm wondering if in a few decades physical theories won't be talking about how photons move at "99.9% of the speed of neutrinos". (I'm not very knowledgeable about the underlying physical theories, and especially about which parts are very solidly established... from what I understand, our estimate for the speed of light comes from more than just timing photons, and also from some theory, but the epistemology of that is harder for a relative layman like me to judge.)

One of the postulates of the (special) theory of relativity is that all laws of nature have the same form in all inertial frames. Electrodynamics predicts a fixed speed for electromagnetic waves, so light must have this speed in all inertial frames. Fixing the speed of light in all inertial frames while maintaining the relativity postulate requires that inertial frames are related by Lorentz transformations. The Lorentz transformations ensure that there can be only one invariant speed, and they also ensure there is no inertial frame moving faster than the speed of light. They do not mathematically rule out the possibility of objects moving faster than the speed of light, but the existence of such objects would have all sorts of weird consequences (instantaneous action at a distance, backwards causation). The Lorentz transformations do ensure that any object traveling faster than the speed of light will have the speed of light as a lower limit, so if neutrinos genuinely travel faster than the speed of light and special relativity is approximately true, we should never observe neutrinos traveling slower than light. Unfortunately, we have.
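To make the "lower limit" point concrete, here are the standard textbook formulas (with the usual real-mass-parameter convention for tachyons; nothing here is specific to the OPERA analysis):

```latex
% Lorentz boost by velocity v, with \gamma = 1/\sqrt{1 - v^2/c^2}:
x' = \gamma\,(x - v t), \qquad t' = \gamma\left(t - \frac{v x}{c^{2}}\right)

% Energy of an ordinary particle versus a tachyon (mass parameter m > 0):
E_{\text{ordinary}} = \frac{m c^{2}}{\sqrt{1 - v^{2}/c^{2}}} \;\;(v < c),
\qquad
E_{\text{tachyon}} = \frac{m c^{2}}{\sqrt{v^{2}/c^{2} - 1}} \;\;(v > c).
```

Both energies blow up as v approaches c (from below and above respectively), which is one way of seeing why c separates the two regimes: an ordinary particle can never be accelerated up to c, and a tachyon can never be decelerated down to it.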

Should we hope that the result is correct? (Please ignore the rationality problems caused by hoping for the universe to be structured in some preferred way.) Obviously, finding faster than light neutrinos makes the world more interesting, but would it also mean that the laws of physics are more friendly to "beneficial technology" than we had previously believed? If you, and you alone, somehow knew that the result was correct and that within a year the world would recognize this, what stocks, if any, would you buy based on your private information?

If one is willing to be purely mercenary one would invest in the sort of publishers who will soon be selling books about 'Neutrino Astrology' in the same way they currently sell ones about 'quantum healing.'

If you, and you alone, somehow knew that the result was correct and that within a year the world would recognize this, what stocks, if any, would you buy based on your private information?

Unfortunately, this research is so preliminary that there are no obvious applications that are remotely plausible even under this assumption. Detecting neutrinos is really tough. This means that a) one is going to have a lot of trouble using this to send information faster than the speed of light with any substantial bandwidth (and even then you would be getting only a small fractional improvement), and b) in order to violate causality to solve computational problems you generally need to be able to send back in time a number of bits that is roughly linear in the length of your problem. It is remotely plausible that similar tricks can be done with fewer bits, but if so, very little has been worked out that does that sort of thing in any useful fashion and the math looks tricky. And even if that does work, the infrastructure may be so large that it might not be worth it compared to just building large computers.

So the stocks that would make the most sense are simply companies that have anything to do with pure neutrino research because that is more likely to get more funding. So I'd look at what companies made the equipment for Gran Sasso, Kamiokande and IceCube and maybe buy stock in them. However, I strongly suspect that any company which is publicly traded and involved in these detectors is probably large enough that building components for neutrino research is very likely only a small segment of what they do. So even this would not be that helpful.

Maybe it would just make more sense to invest in tech stocks in general. While one is not getting much of a speedup on each roundtrip, doing multiple roundtrips with a little calculation on each trip sounds like a major engineering task rather than a fundamental one. And if that works, then we get the usual speedup - NP to P or whatever the precise complexity-class conversion is - which sounds very economically valuable, and whoever maintained the FTL computer would be able to extract much of the surplus.

If neutrinos are going back in time (which is only one possible explanation) then you might be able to create some pretty fancy things like Paradox Buttons that cannot be pressed and so won't be and so you can make "wishes" that the universe must satisfy... or the universe can stop you from making the button in the first place. There's a Yudkowsky video about this which he calls the Grandma Extraction Problem. Starts at the 19th minute.

Please ignore the rationality problems caused by hoping for the universe to be structured in some preferred way.

What's wrong with hoping for the structure of the universe to be one way rather than another? I do it all the time.

What's wrong with hoping for the structure of the universe to be one way rather than another? I do it all the time.

See the Litany of Tarski.

Martin Perl (who is now in his 80s!) just started blogging, and his first post is about the OPERA neutrinos. His analysis: more experiments, especially MINOS and T2K, will help a lot more than armchair speculation at this point.

I suppose that will make all of us here less happy to hear, since such talk is obviously fun. But he's probably right.

Do you know similarly extraordinary claims made by respected institutions from the past? How often did they turn out to be right, how often did they turn out to be wrong?

I wonder what renormalization theory has to say about relativity being only an artifact of charges and photons. That'd let charge-free particles (neutrinos, gluons, and Z [*]) exceed c. I'd suspect that it produces a different result - if only that the removal of gauge invariance should produce velocity-dependence in nuclear stability. If the OPERA result ends up being verified with all conceivable checks made (we're not close to this yet), we should definitely pop some heavy ions in a storage ring and see whether their decay rate changes (after taking relativity into account).

And of course it would require explaining the distant supernova data showing simultaneous arrival of neutrinos and light. Well, both of the models of tachyons that I've seen say that their energy peaks at speeds approximately c. If the distant supernova neutrinos are highly energetic, that would explain it... the low population of low-energy neutrinos would be indistinguishable from background, having been smeared out so far in advance of the arrival of most of them that connecting them to the event would not be a natural inference.

Alternately, there could be some sort of electroweak drag which brings them down to approximately light-speed before they accumulate a lead of more than, say, 60 ns over photons emerging from the same event. Maybe the OPERA neutrinos were initially moving much faster than c, but slowed down to c before accumulating more than a few meters' lead. Again, if this holds up, it'd be worth building a neutrino detector closer to the source (or a new source closer to the neutrino detector), perhaps much closer, to see if the speed is constant or whether the neutrinos are decelerating.

[*] but not neutrons, which contain charges, and moreover have magnetic dipoles, and can have electrical dipoles, and always have a quadrupole

And of course it would require explaining the distant supernova data showing simultaneous arrival of neutrinos and light. Well, both of the models of tachyons that I've seen say that their energy peaks at speeds approximately c. If the distant supernova neutrinos are highly energetic, that would explain it... the low population of low-energy neutrinos would be indistinguishable from background, having been smeared out so far in advance of the arrival of most of them that connecting them to the event would not be a natural inference.

The OPERA neutrinos are on the order of GeVs but the SN 1987A neutrinos were on the order of 1-10 MeV. So that doesn't work.

Oops. There goes the classic tachyon model.

I wonder what renormalization theory has to say about relativity being only an artifact of charges and photons.

Any reason to suggest this hypothesis? (And by relativity you mean Lorentz invariance?)

1) If neutrinos can beat lightspeed, relativity could simply not apply to them, rather than applying in some weird twisted way.

2) Neutrinos are nearly perfectly exempt from photon interactions (the simplest diagram is to pop out a virtual W and electron).

3) This could be the reason for 1.

4) How, then, do other charged particles get all relativized? Well, photons get entangled with just about everything, just by virtue of most things having electrical charge. That's what renormalization theory is about - folding all of the recursive interaction effects that a particle can't get rid of into its basic description.

So, if you have, say, innate Galilean relativity for most particles but not for light, which has a constant speed in a preferred rest frame, then charged particles are going to act at least a little bit like relativity predicts - as the particle approaches light speed, the bow-wave part of its E-field gets stronger and stronger, accumulating energy... as it approaches the speed of light, the photon energy would diverge, like on a jet about to go supersonic, except that unlike the jet, you simply can't get the photons out of the way fast enough.

Hmm. That really doesn't work, because it suggests that the mass dilation would vary proportional to charge, not to mass. So I withdraw the first notion too.

If Lorentzian relativity doesn't work, Galilean relativity is certainly not what replaces it. Moreover, I find it difficult to imagine how you could have photons transforming under the Lorentz group and other particles transforming under the Galilei (or whatever other) group. This would certainly produce inconsistencies, unless you managed to explain them away in some clever complicated way.

I would rather accept that neutrinos are indeed tachyons, whatever weird consequences it may have.

It only produces an inconsistency if you treat either one as fundamental and try to base the other one off of it. As stated, though, this is already withdrawn.

Edited to add: An explanation of the downvotes would be appreciated. I was wrong. I said so. Yet the post with the errors is sitting at zero, while the post explaining why I even made the error, and this one, are downvoted. Seems sort of weird.

It only produces an inconsistency if you treat either one as fundamental and try to base the other one off of it. As stated, though, this is already withdrawn.

Let's have a neutrino and a photon, and assume that neutrinos transform under Galilei while photons transform under Lorentz. Adjust the momentum of the neutrino so that it moves exactly at v = c, parallel to the photon. If they are fired towards the detector at the same time, they will be detected at the same time.

Now change the reference frame to that of an observer moving in the same direction at c/2 (or any other arbitrary velocity). With respect to this observer, the photon still moves at c according to Lorentz, while the neutrino moves at c/2 according to Galilei. Therefore the photon will reach the detector before the neutrino does.

This is a paradox. Either the detections of the neutrino and the photon are one event or they are not; that cannot depend on the reference frame.
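One way to make the mismatch explicit (a sketch with made-up labels: detector at lab-frame distance L, common arrival time T = L/c, boost velocity v = c/2):

```latex
% Arrival event in the lab frame: (t, x) = (T, L), with T = L/c.

% If the photon transforms under a Lorentz boost with v = c/2
% (so \gamma = 1/\sqrt{1 - 1/4} = 2/\sqrt{3}), its arrival event maps to
(t'_{\text{photon}},\, x'_{\text{photon}})
  = \Bigl(\gamma\bigl(T - \tfrac{vL}{c^{2}}\bigr),\; \gamma\,(L - vT)\Bigr)
  = \Bigl(\tfrac{T}{\sqrt{3}},\; \tfrac{L}{\sqrt{3}}\Bigr).

% If the neutrino instead transforms under a Galilei boost, the same arrival event maps to
(t'_{\nu},\, x'_{\nu}) = (T,\; L - vT) = \Bigl(T,\; \tfrac{L}{2}\Bigr).
```

The same physical coincidence (both particles hitting the detector together) gets assigned two different spacetime points, which is exactly the inconsistency.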

The problem is that neutrinos can carry information. And if you carry information faster than light, you either have "time travel", or you screw up the whole framework of relativity. So... it's an easy claim that "relativity could simply not apply to them".

Well, what's so bad about "time travel" after all?

That it leads to paradoxes like "two people kill each other, each one being killed by the other before he pulls the trigger" in the case of a tachyon duel. And that kind of paradoxes (unlike the "I killed my grandfather" paradox) can't even be explained by alternative histories/Everett branches.

Existence of tachyons doesn't necessarily imply that paradoxes would be instantiated. Of course, even if there were tachyons, the world would still be consistent with itself. In the tachyon duel, perhaps the classification of the state of duellists (whether alive or dead) could depend on the reference frame, or perhaps the duel could not happen for some specific reason. Once you can write consistent equations of motion for elementary particles that allow for tachyons - and I think this is possible (although I don't know much about tachyons, maybe there are some problems which I am unaware of) - complex stories about killing grandfathers and shooting people with neutrinos must add up to normality, even if they seem weird at the beginning.