Yes, those are all possibilities for what I am looking for. I'll let the experts decide: I'll be glad to read a coherent defense of Copenhagen, objective collapse, etc. or whatever it is that Hugh Everett/David Deutsch/Max Tegmark/Sean Carroll/etc are up against.
You may be interested in (if you haven't already encountered) the "QBist" interpretation espoused by Fuchs, Mermin, Schack and others. Here are links to some appropriate papers by Fuchs, who in my opinion expresses the position most eloquently and efficiently:
http://arxiv.org/abs/1003.5209
http://arxiv.org/abs/1311.5253
http://arxiv.org/abs/quant-ph/0205039
I personally see QBism as quite a natural extension of classical Bayesianism to quantum mechanics, and I am surprised that it is not discussed at all in this community. Given the interest that Less Wrong members have in quantum theory and its foundations, I can only surmise that this gap is due to some kind of idolization of Eliezer and his views. I am somewhat placated by your inclusion of Kent's paper in your list of coherent anti-MWI arguments, although I would love to see more of the genuine academic debate surrounding the interpretation and foundations of quantum theory faithfully reflected in this forum.
I'm not a physicist, I'm a programmer. If I tried to simulate the Many-Worlds Interpretation on a computer, I would rapidly run out of memory keeping track of all of the different possible worlds. How does the universe (or universe of universes) keep track of all of the many worlds without violating a law of conservation of some sort?
This comment is old, but I think it indicates a misunderstanding about quantum theory and the MWI, so I deemed it worth replying to. I believe the confusion lies in what "world" means, and to whom. In my opinion Everett's original "Relative-State Formalism" is a much better descriptor of the interpretation, but no matter.
The distinct worlds which are present after a quantum-conditional operation are only distinct worlds from the perspective of an observer who has become entangled with the superposition. To an external observer, the system is still in a single state, albeit a state which is a superposition of "classical" states. For example, consider Schrodinger's cat. What MWI suggests is that quantum superposition extends even to the macroscopic level of an entire cat. However, the evil scientist standing outside the box considers the cat to be in the state (Dead + Alive) / sqrt(2), which is a single pure state of the Cat System. Now consider the wavefunction of the universe, which I suppose must exist if we take MWI to its logical end. The universe has many subsystems, each of which may be in superpositions of states according to external observers. But no matter how subsystems might divide into superpositions of states, the overall state of the universe is a single pure state.
In sum: for the universe to "keep track of worlds" requires no more work than for there to exist a wavefunction which describes the state of the universe.
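To make the point above concrete for a programmer: here is a toy state-vector sketch in numpy (my own illustration, not anyone's official simulator) of a two-qubit "atom + cat" version of Schrodinger's cat. The memory cost is fixed by the dimension of the state vector (2^n amplitudes for n qubits); the "branching" into worlds does not allocate anything new.

```python
import numpy as np

# One "atom" qubit and one "cat" qubit: the global state is a single
# vector of 2**2 = 4 complex amplitudes, and it stays that size no
# matter how entangled (how "branched") the subsystems become.
ket00 = np.array([1, 0, 0, 0], dtype=complex)  # |atom=0, cat=alive>

# Put the atom in superposition (Hadamard on the first qubit), then
# correlate the cat with the atom (CNOT): a quantum-conditional operation.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = CNOT @ np.kron(H, I) @ ket00
# state is now (|0,alive> + |1,dead>) / sqrt(2): two "worlds" relative
# to an internal observer, but still one pure state of fixed dimension.
print(state.round(3))
print(state.size)  # 4
```

Of course the vector grows exponentially in the number of subsystems, which is why classically simulating large quantum systems is hard; but that cost is the same whether or not any "branching" has happened, which is the point.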
I've never understood why explaining the Born Rule is less of a problem for any of the other interpretations of QP than it is for MWI. Copenhagen, IIRC, simply asserts it as an axiom. (Rather, it seems to me that MWI is one of the few that even tries to explain it!)
As I understand, it's less of a problem for a hardline Copenhagen interpretation because no definite ontological status is assigned to the wavefunction, or indeed the collapse of the wavefunction. CI can roughly be paraphrased as
"Consider this set of rules for predicting experimental outcomes. Look how well it works! Of course, we're not asserting anything about actual reality here".
One of those rules is the Born rule. Another is the rule that physical transformations correspond to unitary maps on the Hilbert space. All of them are postulated, and their correctness is a matter of experimental verification or falsification.
Conversely, MWI assigns definite reality to the wavefunction, but denies that collapse is a real process, and does not postulate any rules about predictions of experimental outcomes. Instead, the claim that a process of measurement inevitably results in a single result being recorded - with probability given by the square amplitude of the wavefunction - must be derived from the pre-existing structure of the theory (possibly with some reasonable assumptions about gambling commitments).
A conceivable alternative to MWI might have the Born rule as an additional postulate, supported only by experiment rather than following from the structure of the theory. I feel that this would be much less appealing to many of its advocates.
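For concreteness, the Born rule itself is trivial to state computationally (a minimal numpy sketch of my own); the hard part, which the comments above are about, is explaining *why* these squared amplitudes should be probabilities at all rather than just postulating it:

```python
import numpy as np

def born_probabilities(amplitudes):
    """Born rule: outcome probabilities are the squared magnitudes of
    the (normalized) complex amplitudes in the measurement basis."""
    amps = np.asarray(amplitudes, dtype=complex)
    amps = amps / np.linalg.norm(amps)  # normalize the state
    return np.abs(amps) ** 2

# Equal superposition of two outcomes, e.g. (Dead + Alive)/sqrt(2):
probs = born_probabilities([1, 1])
print(probs)  # [0.5 0.5]
```

Copenhagen takes this function as a bare axiom; MWI-style derivations (Deutsch, Wallace, etc.) try to show that a rational agent inside the wavefunction must bet according to it.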
Have you ever read about the so-called Bayesian approach to quantum mechanics promoted by Caves, Fuchs, and Schack?
"Comparisons have also been made between QBism and the relational quantum mechanics espoused by Carlo Rovelli and others" (WP)
;-)
I'd like to know what you're implying with this post, but I'm unable to make a confident guess. Are you claiming that this WP quotation has something to do with many worlds?
I look at the abstracts of new papers on the quant-ph archive every day. This is a type of paper which, based on the abstract, I would almost certainly not bother to look at. Namely, it proposes to explain where quantum theory comes from, in terms which obviously seem like they will not be enough. I read the promise in the title and abstract and think, "Where is the uncertainty principle going to come from - the minimum combined uncertainty for complementary observables? How will the use of complex numbers arise?"
I did scroll through the paper and notice lots of rigorous-looking probability formalism. I was particularly waiting to see how complex numbers entered the picture. They show up a little after equation 47, when two real-valued functions are combined into one complex-valued function... I also noticed that the authors were talking about "Fisher information". This was unsurprising: there are other people who want to "derive physics from Fisher information", so this paper is clearly part of that dubious trend.
At a guess - without having worked through the paper - I would say that the authors' main sin will turn out to be that they do not do anything at all like deriving quantum theory; instead their framework is something much, much looser and less specific, yet they give their article a title implying that the whole of QM follows from it. Not only do they thereby falsely create the impression that they have answered a basic question about reality, but their fake answer is a bland one, dulling further interest, and it is presented with an appearance of rigor that makes it look authoritative. I would also expect that, when they get to the stage of trying to derive actual QM, they compound their major sin with the minor one of handwaving in support of a preordained conclusion - that they have to do something like join their two real-valued functions together in a way which is really motivated only by their knowing what QM looks like, but for which they invent some independent excuse, since they are supposedly deriving QM.
All the foregoing may be regarded as a type of prediction. They are the dodgy misrepresentations I would expect to find happening in the paper, if I actually sat down and scrutinized it in detail. I really don't want to do that since time is precious, but I also didn't want to let this post go unremarked. Is it too much to hope that some coalition of Less Wrong readers, knowing about both probability and physics, will have the time and the will to look more closely, and identify specific leaps of logic, and just what is actually going on in the paper? It may also be worth looking for existing criticisms of the "physics from Fisher information" school of thought - maybe someone out there has already written the ideal explanation of its shortcomings.
I wonder if you would apply the same criticism to so-called "derivations" of quantum theory from information theoretic principles, specifically those which work within the environment of general probabilistic theories. For example:
http://arxiv.org/abs/1011.6451 ; http://arxiv.org/abs/1004.1483 ; http://arxiv.org/abs/quant-ph/0101012
The above links, despite having perhaps overly strong titles, are fairly clear about what assumptions are made, and what is derived. These assumptions are more than simply uncertainty and robust reproducibility: e.g. one assumption that is made by all the above links is that any two pure states are linked by a reversible transformation (in the first link, a slightly modified version of this is assumed). Of course, "pure state" and "reversible transformation" are well-defined concepts within the general probabilistic framework which generalize the meaning of the terms in quantum theory.
Since this research is closely related to my PhD, I feel compelled to answer your questions about uncertainty relations and complex numbers in this context. General probabilistic theories provide an abstracted formalism for discussing experiments in terms of measurement choices and outcomes. Essentially any physical theory that predicts probabilities for experimental outcomes (a "prediction calculus" if you like) occupies a place within that formalism, including the complex Hilbert space paradigm of quantum theory. The idea is to whittle down, by means of minimal reasonable assumptions, the full class of general probabilistic theories until one ends up with the theory that corresponds to quantum theory. What you then have is a prediction calculus equivalent to that of complex Hilbert space quantum theory. In short, complex numbers aren't directly derived from the assumptions; rather, they can be seen simply as part of a less intuitive representation of the same prediction calculus. Uncertainty relations can of course be deduced from the general probabilistic theory if desired, but since they are not part of the actual postulates of quantum theory, there hasn't been much point in doing so. It bears mentioning that this "whittling down" has so far been achieved only for finite-dimensional quantum theory, as far as I'm aware, although there is work being done on the infinite-dimensional case.
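A toy illustration of the representational point (my own numpy sketch, not taken from the papers above): a qubit's predictions can be encoded either in the usual complex density matrix or in a purely real Bloch vector, and both encodings yield the same measurement probabilities - the complex numbers are one representation of the prediction calculus, not its content.

```python
import numpy as np

# Pauli matrices: the usual complex-Hilbert-space machinery.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

def prob_up_complex(rho):
    """P(spin-up along z) computed from the complex density matrix."""
    proj = (I + Z) / 2  # projector onto |up_z>
    return np.real(np.trace(proj @ rho))

def prob_up_real(bloch):
    """The same prediction from the real Bloch vector (rx, ry, rz)."""
    return (1 + bloch[2]) / 2

# One and the same qubit state, written both ways:
rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)
bloch = np.real([np.trace(P @ rho) for P in (X, Y, Z)])

print(prob_up_complex(rho))  # 0.75
print(prob_up_real(bloch))   # 0.75 -- same prediction calculus
```

This is only a change of representation for one qubit, of course; the reconstruction theorems in the linked papers do something far stronger, recovering the whole finite-dimensional quantum formalism from operational axioms.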
I actually thought of physics as an example of this, for quantum interpretations: you sometimes see claims that MWI is an absurd theory pushed by a few fringe physicists and popularizers and cranks, or alternately, that every good physicist takes MWI seriously. What do the occasional small surveys reveal? Something in between: a minority or perhaps plurality holding to MWI with agnosticism on the part of many - MWI being now a respectable position to hold but far from dominant or having won.
I am reminded of a series of documents uploaded to the arxiv earlier this year, each one reporting the results of a survey taken at a distinct conference, and supposedly revealing a "snapshot" of the participants' attitudes towards foundational issues (such as interpretations). Although the first document seems to make some fairly strong claims about academic consensus, the following two are a little more conservative. The final one says something very similar to the original post here; their results suggest that
'there exist, within the broad field of "quantum foundations", sub-communities with quite different views, and that (relatedly) there is probably even significantly more controversy about several fundamental issues than the already-significant amount revealed in the earlier poll.'
http://arxiv.org/abs/1301.1069
I don't have the expertise to evaluate it, but Brian Greene suggests this experiment.
That experiment sounds very problematic to me. He says "After you measure the electron’s spin about the x-axis, have someone fully reverse the physical evolution.... Such reversal would be applied to everything: the electron, the equipment, and anything else that’s part of the experiment.".
There is no explanation of the mechanics of how he thinks such a time-reversal could be implemented. We simply don't have that kind of fine control over the quantum state of the entire measurement apparatus. In fact, the very assumption that quantum theory even applies at this macroscopic scale is the kind of thing that many Copenhagenists dispute.
Conversely, if it were possible to have such fine control over the entire system, including the very equipment used to perform the measurement, well then, you might as well simply make a quantum measurement of the larger quantum system which includes that apparatus! There would be different outcomes depending on whether collapse has or has not yet occurred.
It seems like whether or not this experiment even makes sense depends somewhat on whether MWI is true. Ultimately I think the very description of the experiment rests on hidden assumptions that beg the same question it is trying to answer.