This article should really be called "Patching the argumentative flaw in the Sequences created by the Quantum Physics Sequence".
There's only one big thing wrong with that Sequence: the central factual claim is wrong. I don't mean the claim that the Many Worlds interpretation is correct; I mean the claim that the Many Worlds interpretation is obviously correct. I don't agree with the ontological claim either, but I especially don't agree with the epistemological claim. It's a strawman which reduces the quantum debate to Everett versus Bohr - well, it's not really Bohr, since Bohr didn't believe wavefunctions were physical entities. Everett versus Collapse, then.
I've complained about this from the beginning, simply because I've also studied the topic and profoundly disagree with Eliezer's assessment. What I would like to see discussed on this occasion is not the physics, but rather how to patch the arguments in the Sequences that depend on this wrong sub-argument. To my eyes, this is a highly visible flaw, but it's not a deep one. It's a detail, a bug. Surely it affects nothing of substance.
However, before I proceed, I'd better back up my criticism. So: consider the existence of single-world retrocausal interpretations of quantum mechanics, such as John Cramer's transactional interpretation, which is descended from Wheeler-Feynman absorber theory. There are no superpositions, only causal chains running forward in time and backward in time. The calculus of complex-valued probability amplitudes is supposed to arise from this.
The existence of the retrocausal tradition already shows that the debate has been represented incorrectly; it should at least be Everett versus Bohr versus Cramer. I would also argue that when you look at the details, many-worlds has no discernible edge over single-world retrocausality:
- Relativity isn't an issue for the transactional interpretation: causality forwards and causality backwards are both local; it's the existence of loops in time that creates the appearance of nonlocality.
- Retrocausal interpretations don't have an exact derivation of the Born rule, but neither does many-worlds.
- Many-worlds finds hope of such a derivation in a property of the quantum formalism: the resemblance of density matrix entries to probabilities. But single-world retrocausality finds such hope too: the Born probabilities can be obtained from the product of ψ with ψ*, its complex conjugate, and ψ* is the time reverse of ψ (a schematic form of this relation follows the list).
- Loops in time just fundamentally bug some people, but splitting worlds have the same effect on others.
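To make the ψψ* point in the list above concrete, here is the relation in schematic form. The identification of ψ* with a time-reversed, advanced wave is the transactional reading of the formula, not something the bare formalism dictates, and the notation here is mine:

```latex
% Born probability density as the product of the retarded ("offer") wave with
% its complex conjugate; in TI the conjugate plays the role of the advanced
% ("confirmation") wave.
P(x) = \psi(x)\,\psi^{*}(x) = |\psi(x)|^{2}
% The sense in which \psi^{*} is "the time reverse of \psi": for a real
% Hamiltonian H, if  i\hbar\,\partial_{t}\psi(x,t) = H\psi(x,t),  then
% \phi(x,t) := \psi^{*}(x,-t)  satisfies the same equation.
```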
I am not especially an advocate of retrocausal interpretations. They are among the possibilities; they deserve consideration and they get it. Retrocausality may or may not be an element of the real explanation of why quantum mechanics works. Progress towards the discovery of the truth requires exploration on many fronts; that's happening, and we'll get there eventually. I have focused on retrocausal interpretations here just because they offer the clearest evidence that the big picture offered by the Sequence is wrong.
It would be hopeless to suggest rewriting the Sequence; I don't think that would be a good use of anyone's time. But what I would like to have is a clear idea of the role that "the winner is ... Many Worlds!" plays in the overall flow of argument, in the great meta-sequence that is Less Wrong's foundational text; and I would also like a clear idea of how to patch the argument so that it routes around this flaw.
The wiki states that "Cleaning up the old confusion about QM is used to introduce basic issues in rationality (such as the technical version of Occam's Razor), epistemology, reductionism, naturalism, and philosophy of science." So there we have it: a synopsis of the function this Sequence is supposed to perform. Perhaps we need a working group that will identify each of the individual arguments and come up with a substitute for each one.
I address this question of ontology in my book, and I strongly suggest you take a look at that. (I know the book is a bit pricey, but you can always get it from a library! ;)
But here's a reply in a nutshell.
First, the whole point of PTI is the idea that QM describes REAL possibilities that do not live in spacetime -- i.e., that spacetime is not 'all there is'. So the QM objects DO exist, in my interpretation. That's the basic ontology. The mathematical object that describes these real possibilities is Hilbert space. Again: 'what exists' is not the same as 'what is in spacetime'. Not being in spacetime does not disqualify an entity from existing. This is where I think 'mainstream' attempts to interpret QM stumble, because they automatically assume that because the quantum state (or 'wavefunction') does not live in spacetime, it therefore necessarily describes something that 'doesn't physically exist', i.e., it only describes knowledge. I think that's a false choice. Remember that it was Heisenberg who first suggested that QM states describe a previously unsuspected kind of physical reality. That's the idea I'm pursuing.
There are no 'particles' in TI or PTI. So at a basic level, it is interacting field currents that are fundamental. These are the physical possibilities.
As for the actual events, these comprise a discretized spacetime corresponding to the transactions that have been actualized. This is a definite history of energy exchanges between emitters and absorbers, and is the emergent 'classical' world. I invite you to Chapter 8 of my book for further details. A specific example of the emergence of a 'classical' trajectory is given at the end of Chapter 4.
Again, the main point: 'physically real' is not equivalent to 'existing in spacetime'. Quantum states describe physically real possibilities that do not live in spacetime, but have their existence in a realm mathematically described by Hilbert (actually Fock) space. Spacetime is just the set of actualized events -- i.e., emitters and absorbers that have exchanged energy via actualized transactions. Each of these defines an invariant spacetime interval. But note that this is a relational view of spacetime -- the latter is not a substantive, independently existing structure. It's just a map we use to describe the set of actualized events.
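For concreteness, the 'invariant spacetime interval' mentioned here is just the standard relativistic quantity. Between the emission and absorption events of an actualized transaction, separated by coordinate differences Δt and Δx, it is the frame-independent combination:

```latex
% Invariant interval between the emission and absorption events of an
% actualized transaction (using the +--- signature convention):
s^{2} = c^{2}\,\Delta t^{2} - |\Delta\mathbf{x}|^{2}
```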
To address your final question directly: the things that can be actualized are described by the weighted projection operators in the von Neumann mixed state ('Process 1') occurring on measurement -- the weights are just the Born Rule. (TI is the only interpretation that can physically explain this 'measurement' transformation.) The thing that is actualized is described by the projection operator 'left standing' while the other ones have disappeared. These are 'just' properties, if you like, but they are supported (as a substratum) by the emitter and absorber involved in their actualization. So in PTI, the spacetime arena of phenomena, i.e., observed properties, is rooted in a pre-spacetime substratum of physical possibilities.
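To spell out that 'Process 1' transformation in symbols (my notation, purely for concreteness): a pure state goes over to a mixture of projection operators weighted by the Born Rule, and actualization leaves a single projector standing.

```latex
% von Neumann "Process 1": pure state -> weighted mixture of projectors.
|\psi\rangle\langle\psi| \;\longrightarrow\; \sum_{i} w_{i}\,|i\rangle\langle i|,
\qquad w_{i} = |\langle i|\psi\rangle|^{2} \quad \text{(Born Rule weights)}
% Actualization then selects a single projector |k\rangle\langle k|, with probability w_{k}.
```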
After some puzzlement (because it is so unlike what I expected), I think I now understand your interpretation. Possibilist TI is essentially a growing block universe which consists of a set of state vectors with a timelike partial order (a little like this), and the growth is a stochastic feeling out of immediate future extensions of this poset, via potential transactions.
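Purely to pin down what I mean by 'a set of state vectors with a timelike partial order that grows stochastically', here is a toy sketch; it is my own schematic, not anything from the book, and the labels and weights are invented for illustration:

```python
import random

class Event:
    """A toy actualized event (e.g. an emitter-absorber energy exchange)."""
    def __init__(self, label, causes=()):
        self.label = label
        self.causes = set(causes)  # events that timelike-precede this one

def grow(block, candidates, weights):
    """Stochastically actualize one candidate extension of the growing block.

    `candidates` are potential transactions whose causes already lie in
    `block`; `weights` play the role of Born-rule weights in this toy model.
    """
    chosen = random.choices(candidates, weights=weights, k=1)[0]
    block.append(chosen)
    return chosen

# Two-step toy history: an emission event, then a stochastic choice of absorber.
e0 = Event("emission")
block = [e0]
grow(block, [Event("absorbed at A", {e0}), Event("absorbed at B", {e0})], [0.7, 0.3])
print([e.label for e in block])
```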
For various reasons I can't believe in that as a final ontology, but I can imagine that it would have heuristic value, and maybe even practical value, for people trying to understand the nature of time and causal dependency in a universe containing backward as well as forward causality.