gRR comments on [Link] Quantum theory as the most robust description of reproducible experiments - Less Wrong

0 Post author: gRR 08 May 2014 11:18AM


Comment author: Mitchell_Porter 08 May 2014 02:33:26PM 9 points [-]

I look at the abstracts of new papers on the quant-ph archive every day. This is a type of paper which, based on the abstract, I would almost certainly not bother to look at. Namely, it proposes to explain where quantum theory comes from, in terms that seem obviously insufficient. I read the promise in the title and abstract and think, "Where is the uncertainty principle going to come from - the minimum combined uncertainty for complementary observables? How will the use of complex numbers arise?"

I did scroll through the paper and notice lots of rigorous-looking probability formalism. I was particularly waiting to see how complex numbers entered the picture. They show up a little after equation 47, when two real-valued functions are combined into one complex-valued function... I also noticed that the authors were talking about "Fisher information". This was unsurprising: there are other people who want to "derive physics from Fisher information", and this paper is clearly part of that dubious trend.
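For context, here is the standard way complex numbers get introduced in this genre of derivation - the polar (Madelung) decomposition. I am assuming the paper's step after equation 47 is some variant of it, without having verified the details. One writes the probability density P(x,t) and a real phase S(x,t) and defines

```latex
\psi(x,t) \;=\; \sqrt{P(x,t)}\; e^{\,i S(x,t)/\hbar},
```

so that the single complex Schrödinger equation for ψ is equivalent to two coupled real equations: a continuity equation for P and a Hamilton–Jacobi-type equation for S with an extra "quantum potential" term. The complex structure then looks like mere bookkeeping - but the non-trivial physics is hidden in the question of why those two particular real equations should hold in the first place.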

At a guess - without having worked through the paper - I would say that the authors' main sin will turn out to be that they do not do anything at all like deriving quantum theory - that instead their framework is something much looser and less specific - but that they give their article a title implying they can derive the whole of QM from it. Not only do they thereby falsely create the impression of having answered a basic question about reality, but their fake answer is a bland one, dulling further interest, and it is presented with an appearance of rigor, making it look authoritative. I would also expect that, when they get to the stage of deriving actual QM, they have to compound this major sin with the minor one of handwaving in support of a preordained conclusion - that they will have to do something like join their two real-valued functions together in a way which is really motivated only by their knowing what QM looks like, but for which they will have to invent some independent excuse, since they are supposedly deriving QM.

All the foregoing may be regarded as a type of prediction: these are the dodgy misrepresentations I would expect to find, if I actually sat down and scrutinized the paper in detail. I really don't want to do that, since time is precious, but I also didn't want to let this post go unremarked. Is it too much to hope that some coalition of Less Wrong readers, knowing about both probability and physics, will have the time and the will to look more closely, identify the specific leaps of logic, and work out what is actually going on in the paper? It may also be worth looking for existing criticisms of the "physics from Fisher information" school of thought - maybe someone out there has already written the ideal explanation of its shortcomings.

Comment author: gRR 08 May 2014 09:16:30PM 2 points [-]

Well, I liked the paper, but I'm not knowledgeable enough to judge its true merits. It deals heavily with Bayesian-related questions, somewhat in Jaynes's style, so I thought it could be relevant to this forum.

At least one of the authors is a well-known theoretical physicist with an awe-inspiring Hirsch index, so presumably the paper is not trivially worthless. I think it merits a more careful read.

Comment author: Mitchell_Porter 09 May 2014 10:26:58AM 5 points [-]

Someone can build a career on successfully and ingeniously applying QM, and still have personal views about why QM works, that are wrong or naive.

Rather than just be annoyed with the paper, I want to identify its governing ideas. Basically, this is a research program which aims to show that quantum mechanics doesn't imply anything strikingly new or strange about reality. The core claim is that quantum mechanics is the natural formalism for describing any phenomenon which exhibits uncertainty but which is still robustly reproducible.

In slightly more detail: First, there is no attempt to figure out hidden physical realities. The claim is that in any possible world where certain experimental results occur, QM will provide an apt and optimal description of events, regardless of what the real causes are. Second, there is a determination to show that QM is somehow straightforward or even banal: 'quantum theory is a “common sense” description of the vast class of experiments that belongs to category 3a.' Third, the authors are inspired by Jaynes's attempt to obtain QM from Bayes, and Frieden's attempt to get physics from Fisher information, which they think they can justify for experiments that are "robustly" reproducible.
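For readers who haven't met the "physics from Fisher information" idea: the hook is a standard formal identity, not specific to this paper. The Fisher information of a probability density P(x) with respect to a location parameter is

```latex
I[P] \;=\; \int \frac{\left(\partial_x P\right)^2}{P}\, dx
      \;=\; 4 \int \left(\partial_x \sqrt{P}\right)^2 dx ,
```

which, up to the factor \(\hbar^2/8m\), is exactly the expectation of the kinetic energy for a real wavefunction \(\psi = \sqrt{P}\). Frieden's program elevates this coincidence into a derivation principle - which is precisely the move critics find question-begging.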

Having set out this agenda, what evidence do the authors provide? First, they describe something vaguely like an EPR experiment, make various assumptions about how the outputs behave, and then show that these assumptions imply correlations like those produced when a particular entangled state is used as input in a real EPR experiment. They also add that with different starting assumptions, they can obtain outputs like those of a different entangled state.
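For reference, here is the quantum prediction that the EPR case study presumably has to reproduce - my own numerical sketch, not code from the paper. For the spin-1/2 singlet state, the correlation of outcomes along analyzer directions separated by angle θ is E = -cos θ, which violates the CHSH bound of 2 at the standard angles:

```python
import numpy as np

def singlet_correlation(theta):
    """Standard QM prediction for the singlet state: E(a, b) = -cos(theta),
    where theta is the angle between the two analyzer directions."""
    return -np.cos(theta)

def chsh(a, a2, b, b2):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    E = lambda x, y: singlet_correlation(y - x)
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Standard maximally violating angles: a=0, a'=pi/2, b=pi/4, b'=3*pi/4.
S = chsh(0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4)
# |S| reaches 2*sqrt(2) ~ 2.83, above the local-hidden-variable bound of 2.
```

Any framework claiming to recover these correlations from "reproducibility" assumptions alone has to explain where the factor 2√2 - rather than the classical bound 2 - comes from.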

Then, they give a similarly abstracted description of a Stern-Gerlach experiment, and here they claim to obtain the Born rule from their assumptions. Finally, they consider a moving particle under repeated observation, and say that they can get the Schrödinger equation by assuming that the outcomes resemble Newtonian mechanics on average.
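The Born-rule claim at stake here is easy to state concretely (my summary of the standard result, not the paper's derivation): for a spin-1/2 particle prepared along a direction at angle θ to the Stern-Gerlach axis, QM predicts P(up) = cos²(θ/2) and P(down) = sin²(θ/2). Any derivation claiming to recover "the Born rule" in this setting must land exactly on these cosine-squared weights:

```python
import numpy as np

def stern_gerlach_probs(theta):
    """Born-rule probabilities for a spin-1/2 particle prepared at angle
    theta to the analyzer axis: P(+) = cos^2(theta/2), P(-) = sin^2(theta/2)."""
    p_up = np.cos(theta / 2.0) ** 2
    return p_up, 1.0 - p_up

# Sanity checks: aligned (theta=0) gives always "up"; anti-aligned (theta=pi)
# gives always "down"; orthogonal (theta=pi/2) gives 50/50.
```

Note that the expectation value p_up - p_down = cos θ is the same quantity that appears in the EPR correlations, so the two case studies are not independent tests of the framework.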

Their choice of case studies, and the assumptions they allow themselves to use, both seem rather haphazard to me. They make many appeals to symmetry: e.g. one of the assumptions in their EPR case study is that the experiment behaves the same regardless of orientation, and in deriving the Schrödinger equation they assume translational invariance. These are standard hypotheses in the ordinary approach to physics, so it's not surprising that they should yield something like ordinary physics here too... On the other hand, they only derive the Born rule in the special case of Stern-Gerlach, so they have probably done something tricky there.

In general, it seems that they decided in advance that QM would be derived from the assumption of uncertain but reproducible phenomena, plus the application of Bayes-like reasoning, and nothing else... but for each of their case studies, they then allowed the use of whatever extra assumptions were necessary to arrive at the desired conclusion.

So I do not regard the paper's philosophy as having merit. But the real demonstration of this would require engaging with each of their case studies in turn, and showing that special extra assumptions were indeed used. It would also be useful to criticize their definition of 'category 3a' experiments, by showing that there are experiments in that category which manifestly do not exhibit quantum-like behavior... I suspect that the properly corrected version of their paper would be something like "Quantum theory as the most robust description of reproducible experiments that behave like quantum theory".