"The balance of arguments is overwhelmingly tipped; and physicists who deny it, are making specific errors of probability theory (which I have specifically laid out, and shown to you)"
I guess this refers to the error of supposing that Occam's Razor literally means "have as few entities as possible", rather than "have a theory as simple as possible", and opposing Many Worlds for that reason. Which is indeed an error.
But perhaps for the last time, I will try to enumerate those problems with your position that I can remember.
1. There is no relativistic formulation of Many Worlds; you just trust that there is.
2. There is no derivation of the Born probabilities, which contain all the predictive content of quantum mechanics.
3. Robin Hanson has a proposal to derive the probabilities, but for now it rests on making vagueness about the concept of observers and worlds into a virtue.
4. You've given no public consideration to other possibilities, such as temporally bidirectional causation and non-subjective collapse theories.
5. You've ignored Bohmian mechanics, a classically objective theory which does reproduce all the predictions of quantum theory.
6. You haven't said anything about the one version of Many Worlds which does produce predictions - the version Gell-Mann favors, "consistent histories" - which has a distinctly different flavor from the "waves in configuration space" version.
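For concreteness about point 2: the Born rule itself is a one-liner - the hard part is deriving it rather than postulating it. A minimal statement of the rule, with arbitrary example amplitudes of my own choosing:

```python
import numpy as np

# Born rule: for a normalized state |psi> = a|0> + b|1>, the probability of
# measuring outcome i in that basis is |<i|psi>|^2. The amplitudes below are
# arbitrary illustrative values, not taken from any particular system.
psi = np.array([3, 4j]) / 5.0   # a = 3/5, b = 4i/5, already normalized
born_probs = np.abs(psi) ** 2   # squared amplitudes: 0.36 and 0.64
print(born_probs)
```

Everything predictive in quantum mechanics ultimately passes through that squared-amplitude step, which is why a Many Worlds account that cannot recover it has not yet earned the theory's successes.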
In view of all that, how can you possibly say that Many Worlds is rationally favored, or that you have made a compelling case for this?
I'll repeat my earlier recommendation:
"What you should say as a neo-rationalist is that ... people should not be content with an incomplete description of the world, and that something like Minimum Description Length should be used to select between possible complete theories when there is nothing better, and you should leave it at that."
I wrote a little essay at Nick Tarleton's forum, here, about these problems. I will at some point link from there to my various comments posted here, so it's all in the one place. And I suppose eventually I'll have to write my own views out at length (not just my anti-MWI views). My main unexpressed view is that string theory is probably the answer, and that attempts to make ontological sense of physics will have to grapple with its details, and so all these other 'interpretations' are merely preliminary ideas that may at best be helpful in the real struggle.
Stephen: consistent histories works by having a set of disjoint, coarse-grained histories - "coarse-grained" meaning that they are underspecified by classical standards - which then obtain a priori probabilities through the use of a "decoherence functional" (which is where the structure that actually defines the theory, such as the Hamiltonian, enters). You then get the transition probabilities of ordinary quantum mechanics by conditioning on those global probabilities of whole histories.
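As a toy illustration of that bookkeeping - my own construction for a single qubit, not drawn from any particular paper - a two-time history is a pair of projector outcomes, its class operator is the time-ordered product of those projectors with the evolution sandwiched between, and the decoherence functional's diagonal supplies the history probabilities:

```python
import numpy as np

# Toy consistent-histories calculation (illustrative only). A two-time history
# alpha = (a1, a2) records a z-basis outcome before and after a unitary U.
# Its class operator is C_alpha = P[a2] @ U @ P[a1], and the decoherence
# functional is D(alpha, beta) = Tr[C_alpha @ rho @ C_beta^dagger]. When the
# off-diagonal entries vanish, the diagonal entries behave as probabilities.

ket0 = np.array([[1.0], [0.0]], dtype=complex)
ket1 = np.array([[0.0], [1.0]], dtype=complex)
P = [ket0 @ ket0.conj().T, ket1 @ ket1.conj().T]  # z-basis projectors

U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard evolution
rho = ket0 @ ket0.conj().T                                   # initial state |0><0|

histories = [(a1, a2) for a1 in range(2) for a2 in range(2)]
C = {h: P[h[1]] @ U @ P[h[0]] for h in histories}
D = {(g, h): np.trace(C[g] @ rho @ C[h].conj().T)
     for g in histories for h in histories}

# Diagonal of D: probabilities 0.5, 0.5, 0, 0 (up to floating point), since
# the system starts in |0> and U spreads it evenly over the z-basis.
for h in histories:
    print(h, round(D[(h, h)].real, 6))
```

The off-diagonal entries happen to vanish here, so this coarse-graining is consistent; with incompatible projector families they generally would not, and the diagonal could not be read as probabilities.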
Some people have a neo-Copenhagenist attitude towards consistent histories - i.e., it's just a formalism - but if you take it seriously as a depiction of an actually existing ensemble of worlds, it's quite different from the more Parmenidean vision offered here, in which reality is a standing wave in configuration space, and "worlds" (and, therefore, observers) are just fuzzily defined substructures of that standing wave. The worlds in a realist consistent-histories interpretation would be sharply defined and noninteracting.
There is certainly a relation between the two possible versions of Many Worlds, in that you can construct a decoherence functional out of a wavefunction of the universe, and derive the probabilities of the coarse-grained histories from it. In effect, each history corresponds to a chunk of configuration space, and the total probability of that history comes from the amplitudes occupying that chunk. (The histories do not need to cover all of configuration space; they only need to be disjoint.)

I really need some terminology here. I'm going to call one type Parmenidean, and the other type Lewisian, after David Lewis, the philosopher who talked about causally disjoint multiple worlds. So: you can get a Lewisian theory of many worlds from a Parmenidean theory by breaking off chunks of the Parmenidean "block multiverse" and saying that those are the worlds. I can imagine a debate between a Parmenidean and a Lewisian, in which the Parmenidean would claim that their approach is superior because they regard all the possible Lewisian decompositions as equally partially real, whereas the Lewisian might argue that their approach is superior because there's no futzing around about what a "world" is - the worlds are clearly (albeit arbitrarily) defined.
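The chunk-of-configuration-space bookkeeping can be made concrete with a toy discretized example; the amplitudes and chunk boundaries below are invented purely for illustration:

```python
import numpy as np

# Toy "Lewisian" bookkeeping: a wavefunction discretized over six points of
# configuration space, carved into two disjoint chunks. The probability of
# each coarse-grained history is the total squared amplitude in its chunk.
# (Amplitudes and chunk boundaries are invented for illustration.)
amps = np.array([0.1, 0.3, 0.4, 0.5, 0.2, 0.1], dtype=complex)
amps /= np.linalg.norm(amps)  # normalize the wavefunction

chunks = {"history_A": [0, 1, 2], "history_B": [3, 4]}  # disjoint, not exhaustive
probs = {name: float(np.sum(np.abs(amps[idx]) ** 2))
         for name, idx in chunks.items()}
print(probs)
```

Note that the two probabilities sum to less than one, because the last point of the toy configuration space belongs to no chunk - matching the remark above that the histories need only be disjoint, not exhaustive.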
But the really significant thing is that you can get the numerical quantum predictions from the "Lewisian" approach, but you can't get them from the Parmenidean one. Robin Hanson's mangled-worlds formula gets results by starting down the road towards a Lewisian specification of exactly what the worlds are, but he gets the right count in a certain limit without having to specify exactly when one world becomes two (or many). Anyway, the point is not that consistent histories makes different predictions, but that it makes predictions at all.