http://www.nature.com/news/2011/110922/full/news.2011.554.html

http://arxiv.org/abs/1109.4897v1

http://usersguidetotheuniverse.com/?p=2169

http://news.ycombinator.com/item?id=3027056

Ereditato says that he is confident enough in the new result to make it public. The researchers claim to have measured the 730-kilometre trip between CERN and its detector to within 20 centimetres. They can measure the time of the trip to within 10 nanoseconds, and they have seen the effect in more than 16,000 events measured over the past two years. Given all this, they believe the result has a significance of six-sigma — the physicists' way of saying it is certainly correct. The group will present their results tomorrow at CERN, and a preprint of their results will be posted on the physics website ArXiv.org.
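
For concreteness, the claimed effect is tiny in relative terms. A quick sanity check of the article's numbers (a minimal sketch; the ~60 ns early arrival is the widely reported OPERA figure and does not appear in the excerpt above):

```python
# Back-of-the-envelope check of the OPERA numbers quoted above.
# The ~60 ns early arrival is an assumption here (widely reported,
# but not stated in the excerpt).
c = 2.998e8                      # m/s
baseline = 730e3                 # m, CERN to Gran Sasso
tof = baseline / c               # light-speed time of flight: ~2.44 ms
early = 60e-9                    # s, reported early arrival
print(f"time of flight: {tof*1e3:.2f} ms")
print(f"(v - c)/c ~ {early / tof:.1e}")            # ~2.5e-5
# Quoted uncertainties: 20 cm in distance (~0.7 ns) and 10 ns in timing,
# so a ~60 ns effect is plausibly a ~6 sigma result.
print(f"20 cm in time units: {0.20 / c * 1e9:.1f} ns")
```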

At least one other experiment has seen a similar effect before, albeit with a much lower confidence level. In 2007, the Main Injector Neutrino Oscillation Search (MINOS) experiment in Minnesota saw neutrinos from the particle-physics facility Fermilab in Illinois arriving slightly ahead of schedule. At the time, the MINOS team downplayed the result, in part because there was too much uncertainty in the detector's exact position to be sure of its significance, says Jenny Thomas, a spokeswoman for the experiment. Thomas says that MINOS was already planning more accurate follow-up experiments before the latest OPERA result. "I'm hoping that we could get that going and make a measurement in a year or two," she says.


Perhaps the end of the era of the light cone and beginning of the era of the neutrino cone? I'd be curious to see your probability estimates for whether this theory pans out. Or other crackpot hypotheses to explain the results.

[-][anonymous]13y250

From an actual physicist:

Chang Kee Jung, a neutrino physicist at Stony Brook University in New York, says he'd wager that the result is the product of a systematic error. "I wouldn't bet my wife and kids because they'd get mad," he says. "But I'd bet my house."

4see13y
Yes, but what would he want as the opposing wager? I'll gladly put up a cent (or, for that matter, $10,000,000,000,000 ZWR) against his house, while I wouldn't consider betting $10,000.

I'll take bets at 99-to-1 odds against any information propagating faster than c. Note that this is not a bet for the results being methodologically flawed in any particular way, though I would indeed guess some simple flaw. It is just a bet that when the dust settles, it will not be possible to send signals at a superluminal velocity using whatever is going on - that there will be no propagation of any cause-and-effect relation at faster than lightspeed.

My real probability is lower, but I think that anyone who'd bet against me at 999-to-1 will probably also bet at 99-to-1, so 99-to-1 is all I'm offering.

I will not accept more than $20,000 total of such bets.

I'll take that bet, for a single pound on my part against 99 from Eliezer.

(explanation: I have a 98-2 bet with my father against the superluminal information propagation being true, so this sets up a nice little arbitrage).
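
For anyone checking the arbitrage arithmetic, a sketch (this reads the 98-2 bet as risking £98 to win £2 on FTL being false, which is one natural reading):

```python
# Payoff check for the claimed arbitrage (assumes the 98-2 bet means:
# risk 98 pounds to win 2 if superluminal propagation turns out false).
for ftl_real in (True, False):
    father = -98 if ftl_real else +2    # the bet against FTL with the father
    eliezer = +99 if ftl_real else -1   # take Eliezer's 99-to-1, staking 1 on FTL
    print(f"FTL real={ftl_real}: net {father + eliezer:+d} pound(s)")
# Net +1 pound in both worlds: a guaranteed, if modest, profit.
```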

4FAWS13y
Is that c the speed of light in vacuum or c the constant in special relativity?
7Eliezer Yudkowsky13y
c is the constant as it appears in fundamental physical equations, relativistic or quantum. Anything slowing down the propagation of photons through an apparent vacuum (such as interaction with dark matter) which did not affect, for example, the mass-energy equivalence of E = mc², would not win the bet.
2ChrisHallquist12y
If Kevin doesn't go through with taking that bet for $202, I'll take it for $101.
2Kevin13y
I suggest clarifying the bet to say "information propagating faster than c as c is defined at the time of this bet". With that clarification, I can pay up front in cash for $202 as soon as possible.
1Eliezer Yudkowsky12y
There are many definitions of c - it appears as a constant in many different physical equations. Right now, all of these definitions are consistent. If you have a new physics where all these definitions remain consistent and you can still transmit information faster than c, then certainly I have lost the bet. Other cases would be harder to settle - I did state that weird physics along the lines of "this is why photons are slowed down in a vacuum by dark matter, but neutrinos aren't slowed" wouldn't win the bet.
2MichaelHoward13y

Actually, what is the worst that could happen? It's not [the structure of the universe is destabilized by the breakdown of causality], because that would have already happened if it were going to.

The obvious one would be [Eliezer loses $20,000], except that would only occur in the event that it were possible to violate causality, in which case he would presumably arrange to prevent his past self from making the bet in the first place, yeah? So really, it's a win-win.

Unless one of the people betting against him is doing so because ve received a mysterious parchment on which was written, in ver own hand, "MESS WITH TIME."

4JoshuaZ13y
If there are ways to violate causality they are likely restrictive enough that we can't use them to violate causality prior to when we knew about the methods (roughly). This is true for most proposed causality violating mechanisms. For example, you might be able to violate causality with a wormhole, but you can't do it to any point in spacetime prior to the existence of the wormhole. In general, if there are causality violating mechanisms we should expect that they can't violate causality so severely as to make the past become radically altered since we just don't see that. It is conceivable that such manipulation is possible but that once we find an effective method of violating causality we will be quickly wiped out (possibly by bad things related to the method itself) but this seems unlikely even assuming one already has a causality violating mechanism.
0wedrifid13y
Mostly agree. Would downgrade to "can't or won't". Apart from adding a little completeness, the distinction makes a difference to anthropic considerations.
2pedanterrific13y
Does it even make sense to say "won't", or for that matter bring up anthropic considerations, in reference to causality violation? This is a serious question, I don't know the answer.
4JoshuaZ13y
I'm not sure. If a universe allows sufficient causality violation then it may be that it will be too unstable for observers to arise in that universe. But I'm not sure about that. This may be causality chauvinism.
0[anonymous]13y
(I feel like there's a joke to be made here, something to do with "causality chauvinism", "causality violation", "too unstable for observers to arise", the relative "looseness" of time travel rules, maybe also the "Big Bang"... it's on the tip of my brain... nah, I got nothing.)
2wedrifid13y
Yes. (Leave out the anthropics; when that makes sense to bring up is complicated.) Most of the reasons for saying: ... are somewhat related to "causality doesn't appear to be violated". If (counterfactually) causality can be violated then it seems like it probably hasn't happened yet. This makes it a lot more likely that causality violations (like wormholes and magic) that are discovered in the future will not affect things before their discovery. This includes the set of (im)possible worlds in which prior-to-the-magic times cannot be interfered with and also some other (im)possible worlds in which it is possible but doesn't happen because it is hard. An example would be faster-than-light neutrinos. It would be really damn hard to influence the past significantly with such neutrinos with nothing set up to catch them. It would be much easier to set up a machine to receive messages from the future. It may be worth noting that "causality violation" does not imply "complete causality meltdown". The latter would definitely make "won't" rather useless.
0pedanterrific13y
Well, it's just... how could you tell? I mean, maybe the angel that told Columbus to sail west was a time-travelling hologram sent to avert the Tlaxcalan conquest of Europe. Well yes, I understand you probably couldn't use faster-than-light neutrinos from the future (FTLNFTFs) to effect changes in the year 1470 any more easily or precisely than, say, creating an equivalent neutrino burst to 10^10^9999 galaxies going supernova simultaneously one AU from Earth, presumably resulting in the planet melting or some such thing, I don't know. However, elsewhere in this thread I suggested a method that takes advantage of a system that already exists and is set up to detect neutrinos (admittedly not FTLNFTFs specifically, though I don't know why that should matter). I still don't see exactly what prevents Eliezer_2831 from fiddling around with MINOS's or CERN's observations in a causality-violating but not-immediately-obvious manner. Other than, you know, basic human decency.
1wedrifid13y
We obviously can't tell with certainty. But we can say it is highly unlikely. The universe looks to us like it has a consistent causal foundation rather than being riddled with arbitrary causality violations. That doesn't make isolated interventions impossible, just unlikely. Overwhelming practical difficulties. To get over 800 years of time travel in one hop using neutrinos going very, very slightly faster than light, the neutrinos would have to be shot from a long, long way away. Getting a long, long way away takes time and is only useful if you are travelling close enough to the speed of light that on the return trip the neutrinos gain more time than what you spent travelling. Eliezer_2831 would end up on the other side of the universe somewhere, and the energy required to shoot enough neutrinos to communicate over that much distance would be enormous. The scope puts me in mind of the Tenth Doctor: "And it takes a lot of power to send this projection— I'm in orbit around a supernova. [smiling weakly] I'm burning up a sun just to say goodbye." I'm not sure if that scenario is more or less difficult than the remote neutrino manufacturing scenario. The engineering doesn't sound easy, but once it is done, any time before the heat death of the universe, you just win. You can send anything back to (almost) any time.
2pedanterrific13y
Unless you're fighting Photino Birds. But that's pretty unlikely, yeah.
2Luke_A_Somers12y
That sounds like it's a reference to something awesome. Is it?
2pedanterrific12y
Fairly awesome, I'd say.
0JoshuaZ13y
In the context of almost every proposed causality violation mechanism I've seen seriously discussed, it really is can't, not won't. Wormholes aren't the only example. Tipler cylinders, for example, don't allow time travel prior to the point when they started rotating. Gödel's rotating universe has similar restrictions. Is there some time travel proposal I'm missing? I agree that when considering anthropic issues, "won't" becomes potentially relevant if we had any idea that time travel could potentially allow travel prior to the existence of the device in question. In that case, I'd actually argue in the other direction: if such machines could exist, I'd expect to see massive signs of such interference in the past.
1wedrifid13y
There are plenty of mechanisms in which can't applies. There are others which don't have that limitation. I don't even want to touch what qualifies as 'seriously discussed'. I'm really not up to date with which kinds of time travel are high status.
1JoshuaZ13y
Ignore status issues. Instead focus on time travel mechanisms that don't violate SR. Are there any such mechanisms which allow such violation before the time travel device has been constructed? I'm not aware of any.
3MugaSofer12y
Alcubierre drives.
0pedanterrific13y
I'm pretty sure - not totally sure, I'm perfectly willing to be corrected by anyone with more knowledge of the physics than me, but still, pretty sure - that the stated objection would not preclude The Future from sending back time-travelling neutrinos to, say, the Main Injector Neutrino Oscillation Search in a pattern that spells out the Morse code for T-E-L-L--E-Y--D-N-M-W-T, possibly even in such a way that they wouldn't figure out the code until after CERN's results were published.
3JoshuaZ13y
This would be really difficult. The primary problem is that neutrinos don't interact with most things, so to send a signal you'd need to send a massive burst of neutrinos, to the point where we should expect it to show up on other neutrino detectors also. The only plausible way this might work is if someone used a system at CERN, maybe the OPERA system itself in a highly improved and calibrated form, to send the neutrinos back. Although if neutrinos can go back in time then so much of physics may be wrong that this sort of speculation is extremely unlikely to be at all helpful. This is almost like going to a 17th-century physicist and asking them to speculate about what things would be like if nothing could travel faster than the speed of light.
9Eliezer Yudkowsky13y
Yeah, see, I'm not betting against random cool new physics, I wouldn't offer odds like that on there not being a Higgs boson, I'm betting on the local structure of causality. Could I be wrong? Yes, but if I have to pay out that entire bet, it won't be the most interesting thing that happened to me that day. How confident am I of this? Not just confident to offer to bet at 99-to-1 odds. Confident enough to say... "Well, that was an easy, risk-free $202." Or to put it even more plainly:
4Kevin13y
The consequence of the FTL neutrinos CERN thinks they found at six sigma significance is not the breakdown of causality. You can have faster than light neutrinos without backwards propagation of information. This is not the end of normality, but a new normality, one where Lorentz invariance is broken. This would mean that there is a universal reference frame that trumps but doesn't destroy relativity. If anything, a universal reference frame seems like a stronger causal structure than relativity. This whole thing would be so normal that there's a pre-existing effective field theory called the Standard Model Extension. http://en.wikipedia.org/wiki/Standard-Model_Extension http://en.wikipedia.org/wiki/Lorentz_transformation http://en.wikipedia.org/wiki/Lorentz_covariance and http://en.wikipedia.org/wiki/Lorentz-violating_neutrino_oscillations are suggested Wikipedia skimming; http://blogs.discovermagazine.com/cosmicvariance/2005/10/25/lorentz-invariance-and-you/ is what gave me the intuition of the universal inertial frame. I'm at around 10% odds on this whole thing seeming like weak consensus in 3 years and something like >80% odds (on a very very long bet) that locally possible FTL information travel is possible outside of the local structure of causality.

It's not about transmitting information into the past - it's about the locality of causality. Consider Judea Pearl's classic graph with SEASONS at the top, SEASONS affecting RAIN and SPRINKLER, and RAIN and SPRINKLER both affecting the WETness of the sidewalk, which can then become SLIPPERY. The fundamental idea and definition of "causality" is that once you know RAIN and SPRINKLER, you can evaluate the probability that the sidewalk is WET without knowing anything about SEASONS - the universe of causal ancestors of WET is entirely screened off by knowing the immediate parents of WET, namely RAIN and SPRINKLER.

Right now, we have a physics where (if you don't believe in magical collapses) the amplitude at any point in quantum configuration space is causally determined by its immediate neighborhood of parental points, both spatially and in the quantum configuration space.

In other words, so long as I know the exact (quantum) state of the universe for 300 meters around a point, I can predict the exact (quantum) future of that point 1 microsecond into the future without knowing anything whatsoever about the rest of the universe. If I know the exact state for 3 meters around,... (read more)
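
The screening-off property described here is easy to demonstrate numerically. A minimal sketch of Pearl's graph with invented probabilities (illustration only; none of these numbers come from the thread):

```python
import itertools

# Pearl's toy graph: SEASON -> {RAIN, SPRINKLER} -> WET. All numbers are
# made up for illustration; only the structure matters.
P_season = {"dry": 0.5, "wet": 0.5}
P_rain = {"dry": 0.1, "wet": 0.6}        # P(RAIN=1 | season)
P_sprinkler = {"dry": 0.7, "wet": 0.2}   # P(SPRINKLER=1 | season)

def p_wet(rain, sprinkler):              # P(WET=1 | rain, sprinkler)
    return 1 - (1 - 0.9 * rain) * (1 - 0.8 * sprinkler)

def p_wet_given(rain, sprinkler, season=None):
    """P(WET=1 | RAIN, SPRINKLER[, SEASON]) by enumerating the joint."""
    num = den = 0.0
    for s in P_season:
        if season is not None and s != season:
            continue
        for r, k in itertools.product((0, 1), repeat=2):
            if (r, k) != (rain, sprinkler):
                continue
            pr = P_rain[s] if r else 1 - P_rain[s]
            pk = P_sprinkler[s] if k else 1 - P_sprinkler[s]
            joint = P_season[s] * pr * pk
            den += joint
            num += joint * p_wet(r, k)
    return num / den

# Conditioning on the parents screens off SEASON: all three numbers agree.
print(p_wet_given(1, 0), p_wet_given(1, 0, "dry"), p_wet_given(1, 0, "wet"))
```

Once RAIN and SPRINKLER are known, SEASON adds nothing - exactly the locality property the comment extends to physics.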

This is starting to remind me of Kant. Specifically, his attempt to provide an a priori justification for the then-known laws of physics. This made him look incredibly silly once relativity and quantum mechanics came along.

And Einstein was better at the same sort of philosophy and used it to predict new physical laws that he thought should have the right sort of style (though I'm not trying to do that, just read off the style of the existing model). But anyway, I'd pay $20,000 to find out I'm that wrong - what I want to eliminate is the possibility of paying $20,000 to find out I'm right.

9JoshuaZ12y
You need to distinguish different notions of local causality. SR implies in most forms a very strong form of local causality that you seem to be using here. But it is important to note that very well-behaved systems can fail to obey this, and it isn't just weird systems. For example, a purely Newtonian universe won't obey this sort of strong local causality. A particle from far away can have arbitrarily high velocity and smack into the region we care about. The fact that such well-behaved systems are ok with weaker forms of local causality suggests that we shouldn't assign such importance to local causality. This isn't a well-defined question. It depends very much on what sort of Lorentz violation you are talking about. Imagine that you are working in a Newtonian framework and someone asks "well, if gravity doesn't always decrease at a 1/r^2 rate, will the three body problem still be hard?" The problem is that the set of systems which violate Lorentz is so large that saying this isn't that helpful. The vast majority of physicists aren't thinking about how to replace the fundamental laws with other, more unifying fundamental laws. The everyday work of physicists is stuff like trying to measure the rest mass of elementary particles more precisely, or being better able to predict the properties of pure water near a transition state, or trying to better model the behavior of high-temperature superconductors. They don't have reason to think about these issues. But even if they did, they probably wouldn't take these sorts of ideas as seriously as you do. Among other problems, strong local causality is something which appeals to a set of intuitions. And humans are notoriously bad at intuiting how the universe behaves. We evolved to get mates and avoid tigers, not to intuit the details of the causal structure of reality.
6Username12y
And just like that, Many-Worlds clicked for me. It's now incredibly obvious just how preposterous wavefunction collapse is, and this new intuitive mental model clears up a lot of the frustrating sticking points I was having with QM. c as the speed limit of information in the universe and the notion of local causality have both been a native part of my view of the universe for a while, but it wasn't until that sentence that I connected them to decoherence. Edit: Wow, a lot more things just clicked, including quantum suicide. My priority on cryonics just shot up several orders of magnitude, and I'm going to sign up once I've graduated and start bringing in income. Eliezer, if you have never seen The Prestige, I recommend you go and watch it. It provides a nice allegory for MW/quantum suicide that I think a lot of lay people will be able to connect to easily. Could help when you're explaining things. Edit2: Just read your cryonics 101, and while the RIGHT NOW message punctured through my akrasia, I looked it up and even the $310/yr is not affordable right now. However, it's far more affordable than I had thought, and in a couple months I should be in a position where this becomes sustainably possible. By the way, thank you. You probably know this on an intuitive level, but it should be good to hear that your work may very well be saving lives.
8Mitchell_Porter12y
Username, you're having a small conversion experience here, going from "causality is local" to "wavefunction collapse is preposterous" to "I understand quantum suicide" to "I'd better sign up for cryonics once I graduate" in rapid succession. It's a shame we can't freeze you right now, and then do a trace-and-debug of your recent thoughts, as a case study.

This was a somewhat muddled comment from Eliezer. Local causality does not imply an upper speed limit on how fast causal influences can propagate. Then he equivocates between locality within a configuration and locality within configuration space. Then he says that if only everyone in physics thought like this, they wouldn't have wrong opinions about how QM works. I can only guess how you personally relate all that to decoherence. And from there, you get to increased confidence in cryonics. It could only happen on Less Wrong. :-)

ETA: Some more remarks. Locality does not imply a maximum speed. Locality just means that causes don't jump across space to their effects, they have to cross it point by point. But that says nothing about how fast they cross it. You could have a nonrelativistic local quantum mechanics with no upper speed limit. Eliezer is conflating locality with relativistic locality, which is what he is trying to derive from the assumption of locality. (I concede that no speed limit implies a de-facto or practical nonlocality, in that the whole universe would then be potentially relevant for what happens here in the "next moment"; some influence moving at a googol light-years per second might come crashing in upon us.)

Equivocating between locality in a configuration and locality in a configuration space: A configuration is, let's say, an arrangement of particles in space. Locality in that context is defined by distance in space. But configuration space is a space in which the "points" themselves are whole configurations. "Locality" here refers to similarity between whole configurations. It means th…
1Username12y
This may or may not be the result of day 2 of modafinil. :) I don't think it is, because I already had most of the pieces in place; it just took that sentence to make everything fit together. But that is a data point. Hm, a trace-debug. My thought process over the five minutes that this took place was manipulation of mental imagery of my models of the universe. I'm not going to be able to explain much clearer than that, unfortunately. It was all very intuitive and not at all rigorous; the closest representation I can think of is Feynman's thinking about balls. I'm going to have to do a lot more reading as my QM is very shaky, and I want to shore this up. It will also probably take a while until this way of thinking becomes the natural way I see the universe. But it all lines up, makes sense, and aligns with what people smarter than me are saying, so I'm assigning a high probability that it's the correct conclusion. An upper speed limit doesn't matter - all that matters for locality to be valid is that things are not instantaneous. A conversion experience is a very appropriate term for what I'm going through. I'm having very mixed emotions right now. A lot of my thoughts just clarified, which simply feels good. I'm grateful, because I live in an era where this is possible and because I was born intelligent enough to understand. Sad, because I know that most if not all of the people I know will never understand, and never sign up for cryonics. But I'm also ecstatic, because I've just discovered the cheat code to the universe, and it works.
1Mitchell_Porter12y
I just made a long-winded addition to my comment, expanding on some of the gaps in Eliezer's reasoning. Well, you're certainly not backing down and saying, hang on, is this just an illusory high? It almost seems inappropriate to dump cold water on you precisely when you're having your satori - though it's interesting from an experimental perspective. I've never had the opportunity to meddle with someone who thinks they are receiving enlightenment, right at the moment when it's happening; unless I count myself. From my perspective, QM is far more likely to be derived from 't Hooft's holographic determinism, and the idea of personal identity as a fungible pattern is just (in historical terms) a fad resulting from the incomplete state of our science, so I certainly regard your excitement as based mostly on an illusion. It's good that you're having exciting ideas and new thoughts, and perhaps it's even appropriate to will yourself to believe them, because that's a way of testing them against the whole of the rest of your experience. But I still find it interesting how it is that people come to think that they know something new, when they don't actually know it. How much does the thrill of finally knowing the truth provide an incentive to believe that the ideas currently before you are indeed the truth, rather than just an interesting possibility?
3Username12y
From experiences back when I was young and religious, I've learned to recognize moments of satori as not much more than a high (have probably had 2-3 prior). I enjoy the experience, but I've learned skepticism and try not to place too much weight on them. I was more describing the causes for my emotional states than proclaiming new beliefs. But to be completely honest, for several minutes I was convinced that I had found the tree of life, so I won't completely downplay what I wrote. I suspect it has evopsych roots relating to confidence, the measured benefits of a life with purpose, and good-enough knowledge. Reading 't Hooft's paper I could understand what he was saying, but I'm realizing that the physics is out of my current depth. And I understand the argument you explained about the flaws in spatial (as opposed to configuration) locality. I'll update my statement that 'Many-Worlds is intuitively correct' to 'Copenhagen is intuitively wrong,' which I suppose is where my original logic should have taken me - I just didn't consider strong MWI alternatives. Determinism kills quantum suicide, so I'll have to move down the priority of cryonics (though the 'if MWI then quantum suicide then cryonics' logic still holds and I still think cryonics is a good idea. I do love me a good hedge bet). But like I said, I'm not at all qualified to start assigning likelihoods here between different QM origins. This requires more study. I don't see the issue with consciousness being represented by the pattern of our brains rather than their physicality. You are right that we may eventually find that we can never look at a brain with high enough resolution to emulate it. But based on cases of people entering a several-hour freeze before being revived, the consciousness mechanism is obviously robust, and I say this points towards it being an engineering problem of getting everything correct enough. The viability of putting it on a computer once you have a high enough re…
5JoshuaZ12y
Note also that some of the recent papers do explicitly discuss causality issues. See e.g. this one.
2JoshuaZ13y
Hmm, would you be willing to bet on either the 10% claim or the 80% claim? Everything you have said until the last paragraph seems reasonable to me, and then those extremely high probabilities jump out.
2Eugine_Nier13y
Not necessarily, there could be a distinguished frame of reference.
4Eliezer Yudkowsky13y
That might preserve before-and-after. It wouldn't preserve the locality of causality. Once you throw away c, you might need to take the entire frame of the universe into account when calculating the temporal successor at any given point, rather than just the immediate spatial neighborhood.
3Luke_A_Somers12y
There could be some other special velocity than c. Like, imagine there's some special reference frame in which you can send superluminal signals at exactly 2.71828 c in any direction. In other reference frames, this special velocity depends on which direction you send the signal. Lorentz invariance is broken. But the only implication for local causality is that you need to make your bubble 2.71828 times bigger.

People in this thread with physics backgrounds should say so, so that I can update in your direction.

When I looked at the paper, my impression was that this is a persistent result in the experiment, which would explain publication: the experiment's results will be public and someone, eventually, will notice this in the data. Better that CERN officially notice this in the data than Random High Energy Physicist. People relying on CERN's move to publish may want to update to account for this fact.

3Jack13y
This is a really good point.
2Mercurial13y
Forgive me for being a bit slow, but I honestly don't understand what you mean. I don't know why their publishing the results needs explanation; they already said it was because they couldn't find an error and are hoping that someone else will find one if it's there. Is your point that the fact that CERN published this rather than someone else is to be taken as evidence of its veracity? Or do you mean something else?

Let's say you're a physicist maximizing utility. It's pretty embarrassing to publish results with mistakes in them, and the more important the results, the more embarrassing it would be to announce results later shown to be the product of some kind of incompetence. So one can usually expect published results of serious import to have been checked over and over for errors.

But the calculus changes when we introduce the incentive of discovering something before anyone else. This is particularly the case when the discovery is likely to lead to a Nobel prize. In this case a physicist might be less diligent about checking the work in order to make sure she is the first out with the new results.

Now in this case CERN-OPERA is pretty much the only game in town. No one else can measure this many neutrinos with this kind of accuracy. So it would seem like they could take all the time they needed to check all the possible sources of error. But if Hyena is right that OPERA's data is/was shortly going to be public then they risk someone outside CERN-OPERA noticing the deviation from expected delay and publishing the results. By itself that is pretty embarrassing and it introduces some controversy ... (read more)

1JoshuaZ13y
Neutrinos, not neutrons (very different particles; neutrons are much better understood and easier to work with). There's work in the US at Fermilab which could reasonably measure things at this level of accuracy. I don't know much about the Japanese work, but stuff related to SK might be able to do similar things. Other than those issues your analysis seems accurate. None of these points detracts from the general thrust of your argument.
3Jack13y
Edited- Neutrinos, obviously. Brain fart. I think Fermilab-MINOS can measure such things but I believe I read they have to update and recalibrate a bunch of things to get more accuracy, first. (Recall MINOS already saw this same effect but not at a statistically significant level. Obviously, they now have an incentive to improve their accuracy.)
5PhilGoetz13y
I think Hyena means they had a reason to publish other than believing the result is correct.
0Hyena13y
Correct.
0Hyena13y
My point is that CERN's publication of the anomaly is implied by its existence and an assumption that CERN is minimally competent to run a high-level research project. Therefore, the publication itself gives us no information we did not already have. (The paper itself doesn't even really give us anything important by noting the anomaly, either, since our beliefs are about the implications of the anomaly, so its existence in itself can't be part of the calculation.)
0Mercurial13y
Ah. Thank you for clarifying!

P = .95 that the reporting will be much sparser when the results are overturned.

Relevant: The Beauty of Settled Science

I'm waiting for another experiment before I get too worked up about this result.

That MINOS saw something like this before is pretty interesting. The other thing to consider is SN 1987A: at the speed the CERN neutrinos were apparently traveling, we should have detected the neutrinos from SN 1987A about four years before it was visible.

The fact that this was made public like this suggests they are very confident they haven't made any obvious errors.

This paper discusses the possibility of neutrino time travel.

There is a press conference at 10 AM EST.

I'll say 0.9 non-trivial experimental set-up error (no new physics but nothing silly either), 0.005 something incompetent or fraudulent. The remainder is new physics: "something I don't know about", "neutrinos sometimes travel backwards in time" and "special relativity is wrong", at 8000:800:1.
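
Cashing those odds out as probabilities (my arithmetic, assuming the 8000:800:1 ratio splits the 1 − 0.9 − 0.005 remainder):

```python
# Turning the parent comment's odds into probabilities (assuming the
# 8000:800:1 ratio splits the remainder after 0.9 and 0.005).
remainder = 1 - 0.9 - 0.005                      # 0.095
parts = {"unknown new physics": 8000,
         "neutrinos travel backwards in time": 800,
         "special relativity is wrong": 1}
total = sum(parts.values())
for name, weight in parts.items():
    print(f"{name}: {remainder * weight / total:.1e}")
# ~8.6e-2, ~8.6e-3 and ~1.1e-5 respectively
```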

Perhaps the end of the era of the light cone and beginning of the era of the neutrino cone?

Does that work? Once you beat light don't you just win the speed race? The in-principle upper bound on what can be influenced just disappears. The rest is just engineering. Trivial little details of how to manufacture a device that emits a finely controlled output of neutrinos purely by shooting other neutrinos at something.

6gwern13y
I think so; with any noticeable faster-than-c speed, can't you just ping-pong between paired receiver/emitters, gaining a little distance into the past with each ping-pong? (If you're only gaining a few nanoseconds with each cycle, it might be expensive in equipment or energy, but for the right information from days/weeks in the future - like natural disaster warnings - it'd be worth it, even ignoring hypercomputation issues.)
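
Taking this picture at face value, a naive sketch (it assumes each one-way hop buys the ~60 ns OPERA-sized head start, and ignores the relativity-of-simultaneity machinery a real tachyonic antitelephone would need):

```python
# Naive ping-pong arithmetic: how many hops to gain a useful warning?
c = 2.998e8
baseline = 730e3                     # m, one hop at the OPERA baseline
excess = 2.5e-5                      # assumed (v - c)/c
gain = excess * baseline / c         # ~6.1e-8 s gained per hop
print(f"gain per hop: {gain*1e9:.0f} ns")
print(f"hops for one day of warning: {86400 / gain:.1e}")   # ~1.4e12 hops
```

So even granting the naive picture, a day's warning costs on the order of a trillion round trips; "expensive in equipment or energy" indeed.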
2SilasBarta13y
"Yeah, I only go a little into the past each time, but I make it up in volume!"
0Solvent13y
What's that a quote from? I'd just Google, but you changed a word or two, I think.
2SilasBarta12y
I just made it up, trying to be silly. It's just an application of the standard "low margin, make it up on volume". It barely even makes sense as a joke, since the idea is actually sound (or at least not unsound on its face). If you can go any amount into the past, then you could, it seems, stack the process so that you go as far as you want into the past.
2arundelo12y
I doubt Silas was thinking of this, but it reminded me of SNL's "First Citiwide Change Bank" commercial.
-4[anonymous]13y
That's what she said.
0MartinB12y
Go Mr. Parker!
0gwern12y
I'll be honest, reading that link, that show sounds terrible.
0MartinB12y
I like it. They used difficult and expensive time travel to undo major catastrophes.
1Baughn13y
Well, I'd say there's a significant chance you'd end up with a boom instead, for invoking the (quantum) chronology protection conjecture. That wouldn't necessarily stop you in all cases, though. It just means you need quantum computer-level isolation, or a signal that doesn't include any actual closed timelike curves - that is, you could hypothetically send a signal from 2011 Earth to 2006 Alpha Centauri so long as the response takes five years to get back.
1JoshuaZ13y
Hmm, I don't think most variants of chronology protection imply inherently destructive results. But your remark made me feel all of a sudden very worried that if this is real it could be connected to the Great Filter. I'm almost certainly assigning this more emotional weight than its very tiny probability justifies.
3wedrifid13y
I don't know about you but the emotion I associate with the possibility is fascination, curiosity and some feeling that we need a word for along the lines of entertainment-satisfaction. It's just so far out into far mode that it doesn't associate with visceral fear. And given the low probability it is one instance of disconnection of emotion to knowledge of threat that doesn't seem like a problem! :)
1Baughn13y
Don't worry, I'm pretty sure it'd be a tiny boom. ;) No free energy, after all.
0James_Miller13y
How does this relate to free energy?
3Oscar_Cunningham13y
If there was an explosion big enough to cause worldwide destruction, where would the energy come from?
0[anonymous]13y
What, as in "You fools, you've doomed us all!"?
1wedrifid13y
Hey, I'm not the one who broke physics. Take it up with CERN! ;)
0DanielLC13y
The problem is that most of those people are probably guessing as to when it will be found to be mistaken.
0gwern13y
Any finding that it is mistaken will have a 'when' attached, I think...

Particles break light-speed limit?

My grandfather is doomed, doomed I say!

Mwahahaha!

0loqi13y
And what, if I may ask, are your plans for your grandmother?
3[anonymous]13y
It's gonna be Lazarus Long all over again -_-;
0[anonymous]13y
Aha! I knew wedrifid was my worst enemy!

I strongly suspect that this is due to human error (say 95%). A few people in this thread are batting around much higher probabilities, but given that this isn't a bunch of crackpots but researchers at CERN, that seems like overconfidence. (1-10^-8 is really, really confident.) The strongest evidence that this is an error is that the effect isn't much faster than the speed of light but only a tiny bit over.

I'm going to now proceed to list some of the 5%. I don't know enough to discuss their likelihood in detail.

1) Neutrinos oscillating into a ... (read more)

6prase13y
The main problem with 3) is that if photons have mass, then we would observe differences in speed of light depending on energy at least as big as the difference measured now for neutrinos. This seems not to be the case and c is measured with very high accuracy. If photons traveled with some velocity lower than c, but constant independent of energy, that would violate special relativity.
0JoshuaZ13y
Yes, but we almost always measure c precisely using light near the visible spectrum. Rough estimates were first made based on the behavior of Jupiter's moons (their eclipses occurred slightly too soon when the planet was near Earth and slightly too late when it was far from Earth). Variants of a Foucault apparatus are still used, and that's almost completely with visible light or near-visible light. One can also use microwaves to do clever stuff with cavity resonance. I'm not sure if there would be a noticeable energy difference. The ideal thing would be to measure the speed of light for higher energy forms of light, like X-rays and gamma rays. But I'm not aware of any experiments that do that.
2prase13y
The experimental upper bound on the photon mass is 10^-18 eV. Photons near the visible spectrum have energies of about 1 eV, which means their relative deviation from c would be of order 10^-36. Gamma would be even closer. I don't think the mass of the photon is measurable via the speed of light.
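
Those orders of magnitude follow from the relativistic dispersion relation; a quick reconstruction (using the ~10^-18 eV mass bound cited above and taking ~1 eV for visible light, my own numbers):

```latex
% Speed deficit for a particle of mass m at energy E >> mc^2:
\frac{v}{c} = \sqrt{1 - \frac{m^2 c^4}{E^2}}
\quad\Rightarrow\quad
\frac{c - v}{c} \approx \frac{m^2 c^4}{2 E^2}
\approx \frac{(10^{-18}\,\mathrm{eV})^2}{2\,(1\,\mathrm{eV})^2}
\approx 5 \times 10^{-37}.
```

That deficit is far beyond any attainable timing precision, which is the point being made.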
2wedrifid13y
Err... build a broad spectrum telescope and look at an unstable stellar entity?
2JoshuaZ13y
That's an interesting idea. But the method one uses to detect gamma rays or X-rays is very different from what one uses to detect visible light, so calibrating would be tough. And most unstable events take place over time, so this would be really tough. Look at, for example, a supernova: even the neutrino burst lasts on the order of tens of seconds. Telling whether the gamma rays arrived at just the right time or not would seem to be really tough. I'm not sure; I would need to crunch the numbers. It certainly is an interesting idea. Hmm, what about actively racing them? Same method as yours but closer in. Set off a fusion bomb (which we understand really well) far away (say around 30 or 40 AU out). That will be on the order of a few light-hours, which might be enough to see a difference, since one would know that everything had to start at the exact same time.
4wedrifid13y
Short answer: The numbers come out in the ballpark of hours not seconds. Being closer in relies on trusting your engineering competence to be able to calibrate your devices well. Do it based off interstellar events and you just need to go "Ok, this telescope went bleep at least a few minutes before that one" then start scribbling down math. I never trust my engineering over my physics.
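
Rough numbers for the scales in this exchange (my arithmetic, with assumed figures):

```python
# Scales for the "race light against itself" ideas above.
AU = 1.496e11                        # m
c = 2.998e8                          # m/s
t = 35 * AU / c                      # one-way light time for a bomb at ~35 AU
print(f"light-travel time: {t/3600:.1f} hours")      # ~4.9 h: "hours not seconds"
# An OPERA-sized fractional speed difference over that baseline:
print(f"OPERA-sized lead: {t * 2.5e-5:.2f} s")       # ~0.4 s, measurable in principle
# The photon-mass deviation (~5e-37, from the dispersion relation above):
print(f"photon-mass lead: {t * 5e-37:.1e} s")        # ~9e-33 s, hopeless
```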
2Jack13y
Photons having mass would screw up the Standard Model too... right?
7RolfAndreassen13y
Not necessarily. (Disclaimer: Physics background but this is not my area of expertise; I am working from memory of courses I took >5 years ago). In electroweak unification, there are four underlying gauge fields, superpositions of which make up the photon, W bosons, and Z boson. You have to adjust the coefficients of the combinations very carefully to make the photon massless and the weak bosons heavy. You could adjust them slightly less carefully and have an extremely light, but not massless, photon, without touching the underlying gauge fields; then you can derive Maxwell and whatnot using the gauge fields instead of the physical particles, and presumably save SR as well. Observe that the current experimental upper limit on the photon mass (well, I say current - I mean, the first result that comes up in Google; it's from 2003, but not many people bother with experimental bounds on this sort of thing) is 7x10^{-19} eV, or what we call in teknikal fiziks jargon "ridiculously tiny".
3prase13y
SR doesn't depend on behaviour of gauge fields. Special relativity is necessary to have a meaningful definition of "particle" in field theory. The gauge fields have to have zero mass term because of gauge invariance, not Lorentz covariance. The mass is generated by interaction with Higgs particle, this is essentially a trick which lets you forget gauge invariance after the model is postulated. It doesn't impose any requirements on SR either.
1RolfAndreassen13y
I was thinking of how Lorentz invariance was historically arrived at: From Maxwell's equations. If the photon has mass, then presumably Maxwell does not exactly describe its behaviour (although with the current upper bound it will be a very good approximation); but the underlying massless gauge field may still follow Maxwell.
2prase13y
First we may clarify what is exactly meant by "following Maxwell". For example, in electrodynamics (weak interaction switched off) there is interaction between the electron field and photons. Is this Maxwell? Classical Maxwell equations include the interaction of the electromagnetic field with current and charge densities, but they don't include equations of motion for the charges. Nevertheless, we can say that in quantum electrodynamics:

1. the photon obeys Maxwell, because the electrodynamics Lagrangian is identical to the classical Lagrangian which produces the Maxwell equations (plus equations of motion for the charges);
2. the photon doesn't obey Maxwell, because due to quantum corrections there is an extremely weak photon self-interaction, which is absent in classical Maxwell.

See that the problem has nothing to do with masses (photons remain massless in QED), the Glashow-Weinberg-Salam construction of electroweak gauge theory or the Higgs boson. The apparent Maxwell violation (here, scattering of colliding light beams) arises because on the quantum level one can't prevent the electron part of the Lagrangian from influencing the outcome even if there are no electrons in the initial and final state. Whether or not this is viewed as a Maxwell violation is rather a choice of words. The electromagnetic field still obeys equations which are free Maxwell + interaction with non-photon fields, but there are effects which we don't see in the classical case. Also, those violations of Maxwell are perfectly compatible with Lorentz covariance.

In the case of vector boson mass generation, one may again formulate it in two different ways:

1. the vector boson follows Maxwell, since it obeys equations which are free Maxwell + interaction with the Higgs;
2. it doesn't follow Maxwell, because the interaction with the Higgs manifests itself as an effective mass.

Again this is mere choice of words. Now you mentioned the linear combinations of non-physical gauge fields which give rise to the physical photon and weak interaction b…
0RolfAndreassen13y
Ok, I sit corrected. This is what happens when an experimentalist tries to remember his theory courses. :)

Ok. I think there's one thing that should be stated explicitly in this thread that may not have been getting enough attention (and which in my own comments I probably should have been more explicit.)

The options are not "CERN screwed up" and "neutrinos can move faster than c." I'm not sure about the actual probabilities but P(neutrinos can move faster than c|CERN didn't screw up) is probably a lot less than P(Weird new physics that doesn't require faster than light particles|CERN didn't screw up).

0Oscar_Cunningham13y
I did say "Error caused by new physical effect. P = 0.15" right in the first comment in this thread. It's just that we don't know enough about the design of the experiment to say much about it. Do you know how the neutrinos were generated/detected?
2JoshuaZ13y
The neutrino generation is somewhat indirect. Protons are accelerated into a graphite target, and the resulting pions and kaons are focused in the right direction and decay into muons and muon neutrinos. The muons are quickly lost (muons don't like to interact with much, but a few kilometers of solid rock will block most of them). The detector itself is set up to detect specifically the neutrinos which have oscillated into tau neutrinos. The detector is a series of lead plates with interleaved layers of light-sensitive emulsion, with scintillator counters to detect events in the light-sensitive material. I don't fully understand the details of the detector (in particular I don't know how they differentiate tau neutrinos hitting the lead plates from muon neutrinos or electron neutrinos), but I naively presume that there's some set of characteristic reactions which occur for the tau neutrinos and not the other two. Since this discrepancy is for neutrinos in general, and they seem to be picking up data for all the neutrinos (I think?), that shouldn't be too much of an issue. I've heard so far only a single hypothesis of new physics without faster-than-light travel, involving suppression of virtual particles, and I don't have anywhere near the expertise to guess if that sort of thing is at all plausible.
5Dreaded_Anomaly13y
There is a conserved quantity* for elementary particles that is called "lepton number." It is defined such that leptons (electrons, muons, taus, and their respective neutrinos) have lepton number +1, and anti-leptons (positrons, antimuons, antitaus, and antineutrinos) have lepton number -1. Further, the presence of each flavor (electron, muon, tau) is conserved between the particles and the corresponding neutrinos. For example, take the classic beta decay. A neutron decays to a proton, an electron, and an electron antineutrino. The neutron is not a lepton, so lepton number must be conserved at zero. The electron has lepton number +1 and the electron antineutrino has lepton number -1, totaling zero, and the "electron" flavor is conserved between the two of them. Now, think about an inverse beta decay: an electron antineutrino combines with a proton to form a neutron and a positron. The electron antineutrino has lepton number -1, and so does the positron that is created; again, the "electron" flavor is conserved. How does this apply to tau neutrinos? Reactions similar to an inverse beta decay occur when the other flavors of neutrinos interact with particles in the detector, but their flavors must be conserved, too. So, when a tau neutrino interacts, it produces a tau particle. A tau can be distinguished from an electron or muon in the detector by its mass and how it decays. *This conservation is actually violated by neutrino oscillations, but it still holds in most other interactions.
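
The bookkeeping in this explanation is mechanical enough to write down. A toy check (illustration only, using the per-flavor lepton numbers described above):

```python
# Per-flavor lepton numbers (L_e, L_mu, L_tau) for the particles discussed.
L = {
    "neutron":   (0, 0, 0),  "proton":   (0, 0, 0),
    "electron":  (+1, 0, 0), "positron": (-1, 0, 0),
    "anti_nu_e": (-1, 0, 0),
    "nu_tau":    (0, 0, +1), "tau":      (0, 0, +1),
}

def totals(particles):
    return tuple(sum(L[p][i] for p in particles) for i in range(3))

# Beta decay: n -> p + e- + anti-nu_e (conserved at zero, flavor by flavor)
print(totals(["neutron"]) == totals(["proton", "electron", "anti_nu_e"]))  # True
# Inverse beta decay: anti-nu_e + p -> n + e+
print(totals(["anti_nu_e", "proton"]) == totals(["neutron", "positron"]))  # True
# A tau neutrino interacting must produce a tau, not an electron:
print(totals(["nu_tau", "neutron"]) == totals(["tau", "proton"]))          # True
print(totals(["nu_tau", "neutron"]) == totals(["electron", "proton"]))     # False
```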
0JoshuaZ13y
Ok. That was basically what I thought was happening. Thanks for clarifying.

My probability distribution of explanations:

  • Neutrinos move faster than light in vacuum: P = 0.001
  • Error in distance measurement P = 0.01
  • Error in time measurement P = 0.4
  • Error in calculation P = 0.1
  • Error in identification of incoming neutrinos P = 0.1
  • Statistical fluke P = 0.1
  • Outright fraud, data manipulation P = 0.05
  • Other explanation 0.239

Having read the preprint, about the only observation I have is that I think you're overestimating the fraud hypothesis.

There’s almost a whole page of authors, the preprint describes only the measurement, and finishes with something like (paraphrasing) “we’re pretty sure of seeing the effect, but given the consequences of this being new physics we think more checking is needed, and since we’re stumped trying to find other sources of error, we publish this to give others a try too; we deliberately don’t discuss any possible theoretical implications.”

At the very least, this is the work of the aggregate group trying very hard to “do it right”; I guess there could still be one rogue data manipulator, but I would give much less than 1 in 20 that nobody else in the group noticed anything funny.

9Jack13y
Your statistical fluke estimate is too high; the experiment was repeated something like 16,000 times.
7prase13y
Did they 1) measure 16,000 neutrinos and find each one above c, or 2) run the experiment 16,000 times, each run consisting of many measurements, and find that each run produced the result, or 3) measure 16,000 neutrinos, analyse the data once and find that on average the velocity is higher than c, with 6σ significance?
5Jack13y
Yeah, it's more complicated than all of those but (3) is the closest.
1FAWS13y
That doesn't exhaust all possibilities, though it seems to have been 3).
0Mercurial13y
Bear in mind that many parapsychological experiments have been repeated vastly more than that. My impression is that anyone who wants to argue that this is extremely unlikely to be a statistical fluke is going to have a much harder time viewing parapsychology as the control group for science.

The comparison to parapsychology is a really poor one in this case - for what should be pretty obvious reasons. For example, we know there is no file drawer effect. What we know about neutrino speed so far comes from a) supernova measurements, which contradict these results but measured much lower energy neutrinos, and b) direct measurements that didn't have the sample size or the timing accuracy to reveal the anomaly OPERA discovered.

But more importantly this was a six sigma deviation from theoretical prediction. As far as I know, that is unheard of in parapsychology.

We cannot treat physics the way we treat psychology.

3Mercurial13y
Well, whatever this might say about me, the reasons aren't obvious to me. Right, but as I understand it, you don't need a file drawer effect to see that some of the experiments done in parapsychology still have devastatingly tiny p-values on their own, such as those through the Stanford Research Institute. So the file drawer effect isn't really the right way to challenge the analogy. I actually don't know what that means. Is sigma being used to indicate standard deviation? If so, then yes, there have been a number of parapsychology experiments that went in that range of accuracy - some more so, if I recall correctly. (It has been many years since I read into that stuff, so I could be misremembering.) My point is actually more about statistics than science, so any system that uses frequentist statistics to extract truth is going to suffer from this kind of comparison. As I understand it, the statistical methods that are used to verify measurements like this FTL neutrino phenomenon are the same kinds of techniques used to demonstrate that people can psychokinetically affect random-number generators. So either parapsychology is ridiculous because it uses bad statistical methods (in which case there's a significant chance that this FTL finding is a statistical error), or we can trust the statistical methods that CERN used (which seems to force us to trust the statistical methods that parapsychologists use). (Disclaimer: I'm not trying to argue anything about parapsychology here. I'm only attempting to point out that, best as I can tell, the argument for parapsychology as the control group for science seems to suggest that the CERN results stand a fair chance of being bad statistics in action. If A implies B and we're asserting probably-not-B, then we have to accept probably-not-A.)
1Jack13y
How is that? You need to provide links, because I read a fair bit on the subject and don't recall this. If I came across such results my money would be on fraud or systematic error - not a statistical fluke. This is the kind of "outside view taken to the extreme" attitude that just doesn't make sense. We know why the statistical results of parapsychological studies tend not to be trustworthy: publication bias, the file drawer effect, exploratory research turned into hypothesis testing retroactively, etc. If we didn't know why such statistical results couldn't be trusted, then we would be compelled to seriously consider parapsychological claims. My claim is that those reasons don't apply to neutrino velocity measurements.
0Mercurial13y
That's a fair request. I don't really have the time to go digging for those details, though. If you feel so inspired, again I'd point to the work done at the Stanford Research Institute (or at least I think it was that) where they did a ridiculous number of trials of all kinds and did get several standard deviations away from the expected mean predicted based on the null hypothesis. I honestly don't remember the numbers at all, so you could be right that there has never been anything like a six-s.d. deviation in parapsychological experiments. I seem to recall that they got somewhere around ten - but it has been something like six years since I read anything on this topic. That said, I get the feeling there's a bit of goalpost-moving going on in this discussion. In Eliezer's original reference to parapsychology as the control group for science, his point was that there are some amazingly subjective effects that come into play with frequentist statistics that could account for even the good (by frequentist standards) positive-result studies from parapsychology. I agree, there's a lot of problem with things like publication bias and the like, and that does offer an explanation for a decent chunk of parapsychology's material. But to quote Eliezer: I haven't looked at the CERN group's methods in enough detail to know if they're making the same kind of error. I'm just trying to point out that we can't assign an abysmally low probability to their making a common kind of statistical error that finds a small-but-low-p-value effect without simultaneously assigning a lower probability to parapsychologists making this same mistake than Eliezer seems to. And to be clear, I am not saying "Either the CERN group made statistical errors or telepathy exists." Nor am I trying to defend parapsychology. I'm simply pointing out that we have to be even-handed in our dismissal of low-p-value thinking.
2wedrifid13y
That doesn't actually strike me as all that much extra improbability. A whole bunch of the mechanisms would allow both!

Can it be used to send messages?

4Baughn13y
Yes.

Relevant updates:

John Costella has a fairly simple statistical analysis which strongly suggests that the OPERA data is statistically significant (pdf). This of course doesn't rule out systematic problems with the experiment, which still seem to be the most likely explanation.

Costella has also proposed possible explanations of the data. See 1 and 2. These proposals focus on the idea of a short-lived tachyon. This sort of explanation helps explain the SN 1987a data. Costella points out that if the muon-neutrino pair is becoming tachyonic through the initial hadron ba... (read more)

More relevant papers:

"Neutrinos Must Be Tachyons" (1997)

Abstract: The negative mass squared problem of the recent neutrino experiments from the five major institutions prompts us to speculate that, after all, neutrinos may be tachyons. There are a number of reasons to believe that this could be the case. Stationary neutrinos have not been detected. There is no evidence of right-handed neutrinos, which are most likely to be observed if neutrinos can be stationary. They have the unusual property of the mass oscillation between flavors which has not be... (read more)

7see13y
Tachyonic neutrinos can explain SN 1987A neutrinos beating photons to Earth, and tachyonic neutrinos can explain the CERN observations, but, critically, they cannot explain both phenomena simultaneously. The SN 1987A neutrinos apparently moved slower than the CERN neutrinos, when the pure tachyonic explanation would have them move faster than the CERN neutrinos. This isn't to say neutrinos couldn't be tachyons, but it would still leave the CERN data requiring an explanation.
4JoshuaZ13y
Your point is correct. But I'd also like to note, in case anyone thinks that SN 1987A is a problem for physics, that the conventional model explains SN 1987A neutrinos beating the photons to Earth. Neutrinos are produced in the core of a star when it goes supernova. Light has to slowly work its way out from the core going through all the matter, or is produced at the very upper stages of the star. Neutrinos don't interact with much matter, so they get to go through quickly and so get a few hours' head start. Since they are traveling very close to the speed of light they can arrive before the light. This is the conventional explanation. If neutrinos routinely traveled faster than light, we'd expect the SN 1987A neutrinos to have arrived even earlier than the three hours they arrived before the light. In particular, if they traveled as fast as the CERN result predicts then they should have arrived about 3-5 years before the photons. Now, we didn't have good neutrino detectors much before 1987, so it is possible that there was a burst we missed in that time range. But if so, why was there a separate pack of much slower neutrinos that arrived when we expected? There may be possible explanations that fit both data sets. It is remotely possible, for example, that the tauon and muon neutrinos are tachyons but the electron neutrino is not, or that all but the electron neutrino are tachyons. If one then monkeyed with the oscillation parameters it might be possible to get that the CERN sort of beam would arrive fast but the beam from SN 1987A would arrive at the right time. I haven't worked the numbers out, but my understanding is that we have not-awful estimates for the oscillation behavior which should prevent this kludge from working. It might work if one had another type of neutrino, since that would give you six more parameters to play with. Other experiments can upper bound the number of neutrino types with a high probability, and the standard estimates say that there probably are…
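
The "3-5 years" figure is a one-liner to reproduce (assuming the OPERA-sized excess of ~2.5e-5 and the usual ~168,000 light-year distance to SN 1987A, both my inputs):

```python
# If SN 1987A's neutrinos had the OPERA-sized speed excess, their head
# start would grow linearly with distance travelled.
distance_ly = 168_000        # assumed distance to SN 1987A in light-years
excess = 2.5e-5              # assumed (v - c)/c from the OPERA anomaly
print(f"{distance_ly * excess:.1f} years early")   # ~4.2 years, vs. ~3 hours seen
```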
2Oscar_Cunningham13y
Is this just assuming that they travel at the same speed as recorded for the CERN ones, or has any adjustment been made for their differing energies?
1JoshuaZ13y
This is from a naive, back-of-the-envelope calculation without taking differing energies into account. One thing to note is that by some estimates tachyons should slow down as they get more energy. If that's the case then the discrepancy may make sense, since the neutrinos from the supernova should, I think, be higher energy.
3Oscar_Cunningham13y
Nope. As I said here, the ones at CERN are 17 GeV, whereas the ones from the supernova were 6.7 MeV.
3JoshuaZ13y
Ok. In that case this hypothesis seriously fails.
5Oscar_Cunningham13y
I hadn't realised that neutrinos have never been observed going slower than light. If they had been observed going slower than light, then finding them also going faster would be absurd, since it would require infinite energy. But if they are always tachyons then them travelling faster than c is much less problematic. However I don't see how this explains the neutrinos from the supernova. In the paper it says that higher energies correspond to lower speeds (due to imaginary mass). The ones at CERN are 17 GeV, whereas the ones from the supernova were 6.7 MeV. But the difference in time for the supernova was proportionately smaller than that for the CERN neutrinos.
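Putting numbers on that (a sketch assuming the simplest tachyonic dispersion relation, E = mc^2 / sqrt(v^2/c^2 - 1); the energies are the figures quoted above):

    import math

    # Fit the implied imaginary-mass scale from the 17 GeV OPERA beam,
    # then predict the speed of the 6.7 MeV supernova neutrinos.
    excess = 2.5e-5     # OPERA's reported (v - c)/c, approximate
    E_opera = 17e9      # OPERA beam energy in eV (~17 GeV)
    E_sn = 6.7e6        # SN 1987A neutrino energy in eV (~6.7 MeV)

    # From v/c = sqrt(1 + (m c^2 / E)^2):
    mc2 = E_opera * math.sqrt((1 + excess) ** 2 - 1)   # ~120 MeV
    v_sn = math.sqrt(1 + (mc2 / E_sn) ** 2)            # in units of c

    print(f"implied |m|c^2: {mc2 / 1e6:.0f} MeV")
    print(f"predicted SN 1987A speed: {v_sn:.0f} c")   # ~18 c

On this relation the lower-energy supernova neutrinos should have been enormously faster, not proportionately slower, which is exactly the contradiction.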
2DanielLC13y
Perhaps CERN's experiment was in error. So, even if neutrinos really do go faster than light, CERN messed up.

The neutrinos are not going faster than light. P = 1-10^-8

Error caused by some novel physical effect: P = 0.15

Human error accounts for the effect (i.e. no new physics): P = 0.85

This isn't even worth talking about unless you know a serious amount about the precise details of the experiment.

EDIT: Serious updating on the papers Jack links to downthread. I hadn't realised that neutrinos have never been observed going slower than light. P = no clue whatsoever.

4Kevin13y
I'm stupid so I shouldn't talk about physics? That's absurd; Less Wrong is devoted to discussing exactly this kind of thing. Like... really? I'm really confused by your comment. Do you think the author of the Nature News piece should not have written it for fear of causing people to think about a result? This kind of comment is one of the most perniciously negative things you could say here. Please try not to stop discussion before it even starts. Instead of shutting down discussion and saying it isn't worth talking about, maybe you should try to expand on "Error caused by some novel physical effect".
4Oscar_Cunningham13y
You're not stupid, but we're not (as far as I know) qualified to talk about this particular experiment. There's no hope in hell that the particles are going faster than light, so the only interesting discussion is what else could be causing the effect. This would involve in-depth knowledge of particle physics, as well as the details of the experiment: how the speed was calculated, the type of detector being used, etc. I don't work at CERN, and I don't think many LessWrongers do either.

LessWrong is for discussing rationality, not physics. Assigning probabilities to the outcomes stretched my rationalist muscles (I wasn't sure about 10^-8. Too high? Too low?), but that's the only relevance this post has (and yes, I did downvote it). It would be fine to report the anomalous result and give an interesting exploration of what faster-than-light particles would imply, making it clear that it's horrendously unlikely. But presenting it as if the particles might actually be going faster than light is misleading.

I've heard that the detector works by having the neutrinos hit a block where they produce some secondary particles; the results are then inferred from these particles. If these particles are doing something novel, or if the neutrinos are producing an unexpected kind of particle, then this could lead to the errors observed.

EDIT: I'm being too harsh. LessWrongers with less knowledge of the relevant physics would be perfectly justified in assigning a much higher probability to FTL than I do, and they've got no particular reason to update on my belief. Similarly, I expect my probability assignment would change if I learnt more physics.
6khafra13y
I believe I am more skeptical than the average educated person about press releases claiming some fundamental facet of physics is wrong. But I would happily bet $1 against $10,000,000 that they have, indeed, observed neutrinos going faster than the currently understood speed of light.
7Kevin13y
Taken! Paypal address?
1khafra13y
I'd rather do it through an avenue other than Paypal, since I give odds near unity that if I won, Paypal would freeze my account before I could withdraw the $10 million. Also, considering that less than .01% of the world's population has access to $10 million USD in a reasonably liquid form, there's some counterparty risk. But, IIRC, you're confident you have the resources to produce a subplanetary mass of paperclips within a few decades, so let's do it!
1Kevin13y
Oh, sorry, I was confused and thought you were offering the bet the other way around.
5khafra13y
I apologize for being ambiguous; I should have been more clear that 10^-8 was way too low. Hopefully you weren't counting on those resources for manufacturing paperclips.
0Oscar_Cunningham13y
Sadly I'm not in possession of even 10^8 cents, so I can't make this bet.
5khafra13y
If you have a bitcoin address, the smallest subdivision of a bitcoin against 1 bitcoin (historically, 1 bitcoin has been worth somewhere within $10 of $10) would do the trick.
1XiXiDu13y
From here.
1Oscar_Cunningham13y
Which part of my post is this addressed to? I don't see any direct relevance.
1Thomas13y
Or the light is slightly subluminal and the neutrinos are (almost) luminal at their speed. There may be a bunch of reasons for this, more probable than the assumed one.
0prase13y
What do you mean by saying that light is subluminal? Literally it means that light travels slower than light, which is probably not the intended meaning.
4[anonymous]13y
I suspect he means that light maybe travels slightly slower than the constant c used in relativity. Maybe photons actually have a really tiny rest mass. Maybe our measurements of the speed of light are all made in an imperfect vacuum, which slows it down a little bit.
7prase13y
If they had tiny mass, we would observe variance in measured values of c, since less energetic photons would move slower. Measurements of c have a relative precision of at least 10^-7, and no dependence on energy has been observed in vacuum. Therefore the measured speed of light doesn't differ from the relativistic c by more than 10^-7. The relative difference reported for the neutrinos seems to be 10^-5.
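A quick sketch of how constraining that is (assumed round numbers; the 2 eV visible-photon energy and the 10^-5 slowdown are illustrative):

    import math

    # If photons had mass m, then v/c = sqrt(1 - (m c^2 / E)^2), so a fixed
    # fractional slowdown at one energy implies strong energy dependence.
    slowdown = 1e-5     # hypothetical (c - v)/c for visible light
    E_visible = 2.0     # visible-light photon energy in eV, approximate

    # For small slowdowns, (c - v)/c is about (m c^2 / E)^2 / 2, so:
    mc2 = E_visible * math.sqrt(2 * slowdown)   # ~9 meV
    print(f"required photon mass-energy: {mc2 * 1e3:.0f} meV")

    # A 1 GHz radio photon carries only ~4e-6 eV, far below that m c^2,
    # so radio waves couldn't propagate near c at all -- contradicting the
    # observed energy-independence of c at the 10^-7 level.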
3Thomas13y
Kloth answered as I would. By the way, electrons in water can be faster than photons in water. No big surprise maybe, if this happens with neutrinos and photons in a (near) vacuum.
-1Oscar_Cunningham13y
Light can move more slowly when not in a vacuum; maybe this light was held up by something. That said, I don't understand the paper well enough to tell if they are directly racing the neutrinos against some actual light, or if they're just comparing it to an earlier measurement.
4AlexMennen13y
I don't know whether this guy knows what he's talking about, but it sounds plausible: Steven Sudit:
3shminux13y
There have been no indications that one can transmit information FTL using the Casimir effect; the work he mentions was on quantum tunneling time, which is a different beast.
3JoshuaZ13y
That doesn't work. They didn't race the neutrinos against a light beam. They measured the distance to the detector using sensitive GPS.
0Thomas13y
Are they THAT sensitive? Possibly not.
3JoshuaZ13y
In order for this to be from an error in measurement, you need to be a few meters off (18 meters if that's the only problem). There are standard GPS techniques and surveying techniques which can be used to get very precise values. They state in the paper and elsewhere that they are confident to around 30 cm. Differential GPS can have accuracy down to about 10-15 cm, and careful averaging of standard GPS can get you in the range of 20 cm, so this isn't at all implausible, but it is still a definite potential source of error. A more plausible issue is that, since parts of the detectors are underground, they didn't actively use GPS for those parts.

But even then, a multiple-meter error seems unlikely, and 18 meters is a lot. It is possible that there's a combination of errors all going in the same direction, say a meter error in the distance, a small error in the clock calibration, etc., all of it adding up even as each error remains small enough to be difficult to detect. But they've been looking at things really closely, so one would then think that at least one of the errors would turn up.
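For scale, the 18-meter figure is just the reported early arrival converted to distance (a sketch; ~60 ns is OPERA's reported early-arrival time):

    # Converting OPERA's reported ~60 ns early arrival into the
    # equivalent baseline error: distance = c * time.
    c = 299_792_458.0   # speed of light, m/s
    early_s = 60e-9     # reported early arrival, ~60 nanoseconds
    print(f"equivalent distance error: {c * early_s:.1f} m")   # ~18 m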

There's now a theoretical paper up on the arXiv discussing a lot of these issues. The authors seem to be respected physicists. I have neither the time nor the expertise to evaluate it, but they seem to be claiming a resolution between the OPERA data and the SN 1987A data.

The best short-form critique of this announcement I have seen is the post by theoretical physicist Matthew Buckley on the MetaFilter website:

Matt's comment.

After I read that comment I clicked through to his personal website, where I found a nifty layman's explanation of the necessity for dark matter in current cosmological theory:

Matt's web essay on dark matter.

If you don't have time to read his comment, what he says is that the results are not obviously bogus but they are so far-fetched that almost no physicists will find their daily work affected by the provisional... (read more)

Obligatory xkcd reference: http://xkcd.com/955/

-26NihilCredo13y

Sean Carroll has made a second blog post on the topic, to explain why faster-than-light neutrinos do not necessarily imply time travel.

The usual argument that faster than light implies the ability to travel on a closed loop assumes Lorentz invariance; but if we discover a true FTL particle, your first guess should be that Lorentz invariance is broken. (Not your only possible guess, but a reasonable one.) Consider, for example, the existence of a heretofore unobserved fluid pervading the universe with a well-defined rest frame, that neutrinos interact wit

... (read more)

To quote one of my professors, from the AP release:

Drew Baden, chairman of the physics department at the University of Maryland, said it is far more likely that there are measurement errors or some kind of fluke. Tracking neutrinos is very difficult, he said.

"This is ridiculous what they're putting out," Baden said, calling it the equivalent of claiming that a flying carpet is invented only to find out later that there was an error in the experiment somewhere. "Until this is verified by another group, it's flying carpets. It's cool, but ..

... (read more)

Forgive my ignorance, but... if distance is defined in terms of the time it takes light to traverse it, what's the difference between "moving from A to B faster than the speed of light" and "moving from B to A"?

6DanielLC13y
There are three things you can do:

* Move from A to B.
* Move between A and B faster than the speed of light. (It's uncertain which is the start and which is the end.)
* Move from B to A.
2orthonormal13y
For the basic physics answer, look at Minkowski space: you can define when two events shouldn't be able to affect each other at all if nothing travels faster than light (i.e. they're separated by a spacelike interval). More basically, we know the direction of causality from other factors; so if the neutrinos are emitted at A and interact with something at B, and both events increase entropy, then you either have to say that they traveled faster than light or that they violated the Second Law of Thermodynamics.
1Owen13y
You are correct: moving from A to B faster than the speed of light in one reference frame is equivalent to moving from B to A faster than the speed of light in another reference frame, according to special relativity.
0PhilGoetz13y
Second 'faster' should be 'slower', I think.
3Owen13y
Shinoteki is right - moving slower than light is timelike, while moving faster than light is spacelike. No relativistic change of reference frame will interchange those.
0PhilGoetz13y
What do you mean by "spacelike"? IIRC, movement in spacetime is the same no matter which axis you designate as being time.
3JoshuaZ13y
No. The metric treats time differently from space even though they are all on a single manifold. The Minkowski metric has three spatial dimensions with a +, and time gets a -. This is why space and time are different. Thinking of spacetime as R^4 is misleading because one doesn't have the Euclidean metric on it.
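Concretely, in that sign convention the Minkowski interval is

    ds^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2

and the sign of ds^2 between two events (negative: timelike, i.e. slower than light; positive: spacelike) is preserved by every Lorentz transformation, which is why "slower than light" and "faster than light" can't be interchanged by a change of frame.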
1shinoteki13y
It shouldn't. Moving from B to A slower than light is possible*, moving from A to B faster than light isn't, and you can't change whether something is possible by changing reference frames. *(Under special relativity without tachyons)
0PhilGoetz13y
What I'm trying to get at is: what does a physicist mean when she says she saw X move from A to B faster than light?

The measurement is made from a single point; say A. So the physicist is at A, sees X leave at time tX, sends a photon to B at time t0, and gets a photon back from B at time t1, which shows X at B at some time tB. I'm tempted to set tB = (t0+t1)/2, but I don't think relativity lets me do that, except within a particular reference frame. "X travelled faster than light" only means that tX < t1.

The FTL interpretation is t0 < tX < tB < t1: the photon left at t0, then X left at tX, and both met at B at time tB, X travelling faster than light.

Is there a mundane interpretation under which tB < tX < t1? The photon left A at t0, met X at B at tB, causing X to travel back to A and arrive there at tX. The answer appears to be no, because X would need to travel faster than light on the return trip. And this also explains that Owen's original answer was correct: you can say that X travelled from A to B faster than light, or from B to A faster than light.
0[anonymous]13y
An interpretation putting t1 < tX seems to have the photon moving faster than light backwards in time to get from B back to A.
0PhilGoetz13y
My question is whether he meant to say

* moving from A to B faster than the speed of light in one reference frame is equivalent to moving from B to A faster than the speed of light in another reference frame, or
* moving from A to B faster than the speed of light in one reference frame is equivalent to moving from B to A slower than the speed of light in another reference frame,

both of which involve moving faster than light.
1Owen13y
I meant the first one: faster than light in both directions. You can think of it this way: if any reference frame perceived travel from B to A slower than light, then so would every reference frame. The only way for two observers to disagree about whether the object is at A or B first, is for both to observe the motion as being faster than light.
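A minimal numerical sketch of that claim (units with c = 1; the specific numbers are arbitrary):

    import math

    def boosted_dt(dt, dx, v):
        """Time separation of two events in a frame moving at speed v (c = 1)."""
        gamma = 1.0 / math.sqrt(1.0 - v * v)
        return gamma * (dt - v * dx)

    # Spacelike separation (|dx| > |dt|): a faster-than-light trip.
    # A boost at v = 0.6 reverses the time order of departure and arrival.
    print(boosted_dt(1.0, 2.0, 0.6))   # -0.25: "B happens before A" here

    # Timelike separation (|dt| > |dx|): ordinary slower-than-light motion.
    # The order comes out the same in every frame with |v| < 1.
    print(boosted_dt(2.0, 1.0, 0.6))   # 1.75: still positive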
0shinoteki13y
I know Owen was not talking about impossibility; I brought up impossibility to show that what you thought Owen meant could not be true. Moving from B to A slower than the speed of light does not involve moving faster than light.