If you suppose that branching ought to act like probability, then the Born rule follows directly (as pointed out by Born himself in the original paper and reproduced here by me several times). That is not the challenge for MWI. The problem is getting from wavefunction realism to the notion that we ought to treat branching like probability at all, with any measure whatsoever.
Luke, please correct me if I'm misunderstanding something.
The rule follows directly if you require that the wavefunction behaves like a "vector probability". Then you look for a measure that behaves like probability should (basically, nonnegative and adding up to 1). And you find that for this the wavefunction should be complex-valued and the probability should be its squared amplitude. You can also show that anything "larger" than complex numbers (e.g. quaternions) will not work.
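The "squared amplitude as a measure" requirement can be checked numerically. This is a minimal sketch (not from the paper, just an illustration): squared amplitudes of a normalized complex vector are nonnegative, sum to 1, and the total is preserved under any unitary transformation, which is what a "vector probability" needs.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random normalized complex state vector (branch amplitudes).
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# The candidate measure: squared amplitude.
probs = np.abs(psi) ** 2
assert np.all(probs >= 0)
assert np.isclose(probs.sum(), 1.0)

# A random unitary (QR decomposition of a random complex matrix)
# preserves the total measure, as unitary evolution must.
q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
probs_after = np.abs(q @ psi) ** 2
assert np.isclose(probs_after.sum(), 1.0)
```

Note that an absolute-value measure on real vectors would also be nonnegative and normalized, but it is not preserved by the relevant transformations; that is the sense in which complex amplitudes with a quadratic measure are singled out.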
But, as you said, the question is not how to derive the Born rule from "vector probability", but rather why would we make the connection of wavefunction with probability in the first place (and why the former should be vector rather than scalar). And in this respect I find the exposition that starts from probability and gets to the wavefunction very valuable.
1 would disappoint me. 2 would surprise me but (for reasons resembling yours) not astonish me. 3 would be the best case and I'd be interested to know what assumptions. (The boundary between 2 and 3 is fuzzy. A nonrelativistic universe with electromagnetism like ours has problems; should "electromagnetism like ours" be considered part of "the very idea" or a further "assumption"?) 4 and 5 would be very interesting but (kinda obviously) I don't currently see how either would work.
I certainly would not rule out number 5 ;) As for 3, the arguments seem to apply to any universe in which you can carry out a reproducible experiment. However, in a "classical universe" everything is, in principle, exactly knowable, and so you just don't need a probabilistic description.
Unless there is limited information, in which case you use statistical mechanics. With perfect information you know which microstate the system is in, the evolution is deterministic, there is no entropy (a macrostate concept), hence no second law, etc. Only when you have imperfect information -- an ensemble of possible microstates, a macrostate -- does mechanics "become" statistical.
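The point that entropy lives at the level of the ensemble, not the microstate, can be made concrete with a toy calculation (illustrative only; the microstate labels are made up):

```python
import math

def shannon_entropy(p):
    """Entropy of a distribution over microstates, in nats."""
    return sum(-x * math.log(x) for x in p if x > 0)

# Perfect information: the system is definitely in microstate 2.
certain = [0.0, 0.0, 1.0, 0.0]
# Imperfect information: a uniform ensemble over four microstates.
ensemble = [0.25] * 4

print(shannon_entropy(certain))   # 0.0 -- no entropy, nothing for a second law to act on
print(shannon_entropy(ensemble))  # log(4), approximately 1.386
```

With the delta distribution the entropy is exactly zero; spreading the same system over an ensemble of microstates is what produces a nonzero entropy and the rest of the statistical apparatus.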
Using probabilistic logic in a situation where classical logic applies is either overkill or underconfidence.
Here is the Houston LW Google Group: https://groups.google.com/forum/#!forum/houston-lesswrong
I'm not so sure that this is actually true. It has been shown that, given a fairly minimal set of constraints that don't mention probability, decision-makers in a MWI setting maximise expected utility, where the expectation is given with respect to the Born rule: http://arxiv.org/abs/0906.2718
Can this argument be summarized in some condensed form? The paper is long.
As far as I can tell, it's highly misleading for laymen. The postulates, as verbally described ("reproducible" is the worst offender by far), look generic and innocent - like something you'd reasonably expect of any universe you could figure out - but as mathematically introduced, they constrain the possible universes far more severely than their verbal description would.
In particular, one could have a universe where the randomness arises from the fine position of the sensor: you detect the particle if some binary hash of the bitstring of the sensor's position is 1, and don't detect it when the hash is 0. Experiments in that universe look like a reproducible probability of detecting the particle, rather than non-reproducible (due to sensitivity to position) detection of the particle. Thus "reproducible" does not constrain us to universes where the experiments are insensitive to small changes.
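The construction in the comment can be sketched directly. This is a toy model of the commenter's hypothetical universe, not anything from the paper: detection is a fully deterministic hash of the sensor's position, yet sweeping the sensor over microscopically different positions produces a stable, reproducible-looking frequency.

```python
import hashlib

def detects(position_nm):
    # Deterministic "detection": hash the textual bitstring of the sensor
    # position and read off a single bit. No randomness anywhere.
    digest = hashlib.sha256(repr(position_nm).encode()).digest()
    return digest[0] & 1

# Sweep the sensor over many microscopically different positions.
positions = [1000.0 + k * 1e-6 for k in range(10_000)]
hits = sum(detects(p) for p in positions)
freq = hits / len(positions)
print(freq)  # hovers near 0.5 -- looks like a reproducible probability
```

An experimenter who cannot resolve the sensor position to that precision would report a reproducible detection probability of about one half, even though the underlying law is deterministic and exquisitely position-sensitive.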
I'm not sure I understood you well, could you please elaborate? If the triggering of detectors depends only on the (known) positions of detectors then it seems your experiment should be well describable by classical logic.
if we have another kind of system - let's call it bayesianism - and we have a reason to believe this other kind of system corresponds better to reality even though it doesn't rely perfectly on testing and experimenting, would you reject that in favor of physics? Why?
Replace "bayesianism" with "Christianity" in the above and answer your own question.
The moment a model of the world becomes disconnected from "testing and experimenting" it becomes a faith (or math, if you are lucky).
I guess one could argue that "bayesianism" (probability-as-logic) is testable practically and, indeed, well-tested by now. (But I still don't understand how raisin proposes to reject physics in favor of probability theory or vice versa.)
The Born rule that is so puzzling for MWI results from the particular mathematical form of this functional substitution.
It's not MORE puzzling in MWI. It's just that under MWI you have enough of a reason to suspect that it ought to be the case that you're posed with the puzzle of whether you actually have enough to prove it. Under not-MWI, you have to import it whole cloth, which may feel less puzzling since we aren't so close to an answer.
I find this an interesting notion, but I'm not sure quite what it means. This isn't an ontology. It provides no mechanism that would justify the relevance of its assumptions.
I'm not sure "not-MWI" is a single coherent interpretation :) Under Copenhagen, for example, the Born rule has to be postulated. The present paper
> does not support the Copenhagen interpretation (in any form)
MWI also postulates it, see V_V's comment.
As for the paper's assumptions, they seem to be no different than the assumptions of normal probabilistic reasoning as laid out by Cox/Polya/Jaynes/etc., with all that ensues in regard to relevance.
(edit: formatting)
What do you think of Mitchell_Porter's comments on the other article discussing this paper?
In short, they mostly seem far-fetched to me, probably due to a superficial reading of the paper (as Mitchell_Porter admits). For example:
> I also noticed that the authors were talking about "Fisher information". This was unsurprising, there are other people who want to "derive physics from Fisher information"
The Fisher information in this paper arises automatically at some point and is only noted in passing. There is no more a derivation from Fisher information than there is from the wavefunction.
> they describe something vaguely like an EPR experiment ... a similarly abstracted description of a Stern-Gerlach experiment
The vagueness and abstraction are required to (1) precisely define the terms (2) under the most general conditions possible, i.e., the minimum information sufficient to define the problem. This is completely in line with Jaynes' logic that the prior should include all the information that we have and no other information (the maximum entropy principle). If you have some more concrete information about the specific instance of Stern-Gerlach experiment you are running then by all means you should include it in your probability assignment.
> They make many appeals to symmetry, e.g. ... that the experiment will behave the same regardless of orientation. Or ... translational invariance.
Again, a reader who is familiar with Jaynes will immediately recognize here the principle of transformation groups (extension of principle of indifference). If nothing about the problem changes upon translation/rotation then this fact must be reflected in the probability distribution.
> hope that some coalition of Less Wrong readers, knowing about both probability and physics, will have the time and the will to look more closely, and identify specific leaps of logic, and just what is actually going on in the paper
- in fact this is what I was trying to do here.
Downvoted for reposting yet another untestable QM foundations paper, under a misleading title (there is nothing "common-sense" about QM).
In quantum physics, MWI does quite naturally resolve some difficult issues in the "wavefunction-centric" view. However, we see that the concept of the wavefunction is not really central to quantum mechanics. This removes the whole problem of wavefunction collapse that MWI seeks to resolve.
Physical theories live and die by testing (or they ought to, unless they happen to be pushed by famous string theorists). I agree that "This removes the whole problem of wavefunction collapse", but only in the minds of philosophers of physics and some misguided philosophically inclined physicists. This paper adds nothing to physics.
Thank you. The title plays on the idea of deriving quantum mechanics from the rules of "common-sense" probabilistic reasoning. Suggestions for a better title are, of course, welcome.
In my view this is not so much "QM foundations" or "adding to physics" (one could argue it takes away from physics) as it is an interesting application of Bayesian inference, providing another example of its power. It is however interesting to discuss it in the context of MWI which is a relatively big thing for some here on Less Wrong.
Regarding testability I'm reminded of the recent discussion at Scott Aaronson's blog: http://www.scottaaronson.com/blog/?p=1653
I'm not sure that the proof can be summarised in a comment, but the theorem can:
Suppose you are an agent that knows that you are living in an Everettian universe. You have a choice between unitary transformations (the only type of evolution that the world is allowed to undergo in MWI), that will in general cause your 'world' to split and give you various rewards or punishments in the various resulting branches. Your preferences between unitary transformations satisfy a few constraints:
- Some technical ones about which unitary transformations are available.
- Your preferences should be a total ordering on the set of the available unitary transformations.
- If you currently have unitary transformation U available, and after performing U you will have unitary transformations V and V' available, and you know that you will later prefer V to V', then you should currently prefer (U and then V) to (U and then V').
- If there are two microstates that give rise to the same macrostate, you don't care about which one you end up in.
- You don't care about branching in and of itself: if I offer to flip a quantum coin and give you reward R whether it lands heads or tails, you should be indifferent between me doing that and just giving you reward R.
- You only care about which state the universe ends up in.
- If you prefer U to V, then changing U and V by some sufficiently small amount does not change this preference.
Then, you act exactly as if you have a utility function on the set of rewards, and you are evaluating each unitary transformation based on the weighted sum of the utility of the reward you get in each resulting branch, where you weight by the Born 'probability' of each branch.
Thanks! The list of assumptions seems longer than in the De Raedt et al. paper, and you first need to postulate branching and unitarity (let's set aside how reasonable/justified that postulate is) in addition to rational reasoning. But it looks like you can get there eventually.