Thank you to Justis Mills for proofreading and feedback. This post can also be found on my substack.

I mentioned in a comment that I disagree with the many-worlds interpretation of quantum mechanics, and I thought I should clarify my position. I titled the post "ackshually" because it is a very pedantic objection that I don't think is very important. But I found the philosophy interesting/enlightening when I thought of it.

The TL;DR is that many-worlds theory is a way to embed a quantum system into a non-quantum system, whereas it seems more natural to assume that the world is just Inherently Quantum. To understand what I mean by "embedding a quantum system", it may be clearest to start with an analogy to stochasticity.

The many-worlds theory of stochasticity

To me, the notion of "true randomness" is philosophically coherent. Like yes, it's conceivable that we happen to live in a deterministic universe, but I don't think it's objectionable for the True Generator Of Physics to be a nondeterministic Markov Chain or whatever.

How can we model randomness? One option is sampling. If your modelling tool supports randomness, then you can turn the randomness of the system you're modelling into randomness in the tool itself. For instance, in programming you could define an interface like so:

trait Distribution[A] {
  def sample(): A
}

val uniform = new Distribution[Double] {
  def sample(): Double =
    ??? /* somehow produce a stochastic number in [0, 1], using True Randomness */
}

... with the property that each time you call uniform.sample(), you get a new random number.[1]

Sampling is not the way randomness is usually modelled in mathematics, partly because mathematics is deterministic and so you can't model randomness in this way. Instead, it is usually modelled using probability, which in the finite setting we can think of as a function p : A → [0, 1]. This means that to each possible value a ∈ A, we have a real number p(a) quantifying the "realness" of this a.
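
To make the contrast with sampling concrete, here is a minimal sketch in the same Scala style (FiniteDist is my own illustrative name, not a library type): the entire distribution is an explicit map from outcomes to weights, with no sampling anywhere.

// A finite distribution as an explicit weighting of every possible outcome,
// rather than something you sample from.
case class FiniteDist[A](weights: Map[A, Double]) {
  require(math.abs(weights.values.sum - 1.0) < 1e-9, "weights must sum to 1")
}

val coinFlip = FiniteDist(Map("heads" -> 0.5, "tails" -> 0.5))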

Now, what happens if we take p literally? It seems like it is postulating "many worlds" of a's, with quantifiable levels of realness. This isn't true if the universe is truly stochastic. It's also not true if the p is modelling uncertainty. One could perhaps say it's sort of true if the p is modelling propensities, but even then it's sort of a stretch. Maybe the place where it's most true is in frequentism, where the p is modelling long-run frequencies.

Pros and cons of the many-worlds theory of quantum mechanics

Quantum mechanics is sort of like stochasticity, so if I'm to feel like the universe can be Inherently Stochastic, it also seems like I should feel like the universe can be Inherently Quantum. I used to think the many-worlds interpretation of quantum mechanics was literally just that, but the comparison to the many-worlds interpretation of stochasticity makes me think it is not. The main mathematical difference is that we swapped out [0, 1] for ℂ.
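
To illustrate the swap in the same style (a sketch with a hand-rolled Complex type, since I'm not assuming any particular library): the data structure has the same shape as a finite probability distribution, except the weights are complex amplitudes rather than numbers in [0, 1].

// Same shape as a finite distribution, but weights live in C instead of [0, 1].
case class Complex(re: Double, im: Double) {
  def norm2: Double = re * re + im * im // |c|^2, the Born weight
}

case class Wavefunction[A](amps: Map[A, Complex]) {
  require(math.abs(amps.values.map(_.norm2).sum - 1.0) < 1e-9)
}

val qubit = Wavefunction(Map(
  0 -> Complex(1 / math.sqrt(2), 0),
  1 -> Complex(-1 / math.sqrt(2), 0) // a minus sign: something [0, 1] can't hold
))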

The many-worlds theory of quantum mechanics says that the wavefunction simply is the objective underlying reality, much like the many-worlds theory of stochasticity says that the probability function simply is the objective underlying reality. But a wavefunction is just a way to embed any quantum system into a deterministic system, so that seems like an assumption that the universe is Inherently Deterministic, rather than Inherently Quantum.

On the other hand, the possibility of destructive interference introduces a strong distinction between quantum mechanics and stochasticity, so maybe one could say that True Stochasticity is conceivable in a way that True Quantum Mechanics is not. That is, under True Stochasticity, after you sample one value from the nondeterministic dynamics, the other potential samples have no effect on what happens afterwards, whereas there is a sense in which this is not true for quantum mechanics. (On the other hand, the principle of superposition is a sense in which it is true...) So I could see the point in wanting to embed True Quantum Mechanics in a way that one wouldn't want to embed other systems.

Embedding Quantum Mechanics using wavefunctions also introduces confusion around the Born probabilities. This becomes clear with the example of True Stochasticity vs The Many-Worlds Theory of Stochasticity:

A Truly Stochastic System has a built-in weighting of realness, as there is one real outcome which depends on the probabilities. Meanwhile, if you evolve a probability mass function over time, the relationship between the numbers and the realness is kind of weakened. For instance if you store it as a hash map which maps outcomes with nonzero probability to their probabilities, then computationally, all possible outcomes are equally real, and the probabilities are just epiphenomenal tags. (This is basically quantum immortality, but for the many-worlds theory of stochasticity.) But this is not the only way to store it, and assuming you weight "true realness" by the number of different computations that result in a given state, different representations could yield just about any distribution of realness.
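
As a sketch of that point (illustrative code, same caveats as above): evolving a hash-map pmf one step makes every outcome with nonzero probability its own concrete entry, whether its weight is 0.5 or 10⁻³⁰.

// Push a hash-map pmf through one step of stochastic dynamics. Computationally,
// every entry is "equally real"; the probabilities ride along as tags.
def evolve[A](pmf: Map[A, Double], step: A => Map[A, Double]): Map[A, Double] =
  pmf.toSeq
    .flatMap { case (a, p) => step(a).toSeq.map { case (b, q) => (b, p * q) } }
    .groupMapReduce(_._1)(_._2)(_ + _)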

Collapse interpretation as an embedding of quantum mechanics into stochastic foundations

If the simplest assumption is that the world is just quantum mechanical, and the many-worlds interpretation is the assumption that the world is deterministic, then the collapse interpretation is the assumption that the world is Truly Stochastic.

That is, the collapse postulate is a way of turning wavefunctions into randomness, and when interpreted in a realist way, it is interpreted as occurring stochastically. Given that the world is quantum mechanical, the collapse postulate is arbitrary, with variants of it being continually falsified as quantum computers prove larger and larger superpositions to be stable.

The main advantage of the collapse interpretation is that it provides a bridge rule, where if the rest of your model is a first-person stochastic model, then you can embed third-person quantum-mechanical models into it. Meanwhile the many-worlds interpretation suffers from the problem that it is hard to bridge to experience, because nobody uses a third-person quantum-mechanical model for navigating their day-to-day life.

I think lots of many-worlds theorists actually agree with this?

Like, the point of many-worlds theory in practice isn't to postulate that we should go further away from quantum mechanics by assuming that everything is secretly deterministic. It's that we should go closer to quantum mechanics by assuming that postulates like "collapse" are mathematical hacks to embed the true quantum mechanical systems into our false models.

Many-worlds theory is "directionally correct" in this sense, but multiple incompatible theories can be "directionally correct" in the same sense, and theories that try to address different things can be directionally correct for different things. (E.g. maybe the pilot-wave model is directionally correct in the sense of informing us about the nature of knowledge?)

  1. ^

    In practice, most software libraries that use randomness use pseudo-random number generators, which would make it a hidden-variable model rather than a Truly Stochastic model. But let's pretend there's an exception, somehow.
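
    For instance, this standard-library call is exactly such a hidden-variable model:

    // scala.util.Random is pseudo-random: the output is a deterministic
    // function of the seed, i.e. a hidden variable.
    val rng = new scala.util.Random(42)
    val x = rng.nextDouble() // fully determined by the seed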

Comments

[-]gjm

I think this claim is both key to OP's argument and importantly wrong:

But a wavefunction is just a way to embed any quantum system into a deterministic system

(the idea being that a wavefunction is just like a probability distribution, and treating the wavefunction as real is like treating the probability distribution of some perhaps-truly-stochastic thing as real).

The wavefunction in quantum mechanics is not like the probability distribution of (say) where a dart lands when you throw it at a dartboard. (In some but not all imaginable Truly Stochastic worlds, perhaps it's like the probability distribution of the whole state of the universe, but OP's intuition-pumping example seems to be imagining a case where A is some small bit of the universe.)

The reason why it's not like that is that the laws describing the evolution of the system explicitly refer to what's in the wavefunction. We don't have any way to understand and describe what a quantum universe does other than in terms of the evolution of the wavefunction or something basically equivalent thereto.

Which, to my mind, makes it pretty weird to say that postulating that the wavefunction is what's real is "going further away from quantum mechanics". Maybe one day we'll discover some better way to think about quantum mechanics that makes that so, but for now I don't think we have a better notion of what being Truly Quantum means than to say "it's that thing that wavefunctions do".

I have the impression -- which may well be very unfair -- that at some early stage OP imbibed the idea that what "quantum" fundamentally means is something very like "random", so that a system that's deterministic is ipso facto less "quantum" than a system that's stochastic. But that seems wrong to me. We don't presently have any way to distinguish random from deterministic versions of quantum physics; randomness or something very like it shows up in our experience of quantum phenomena, but the fact that a many-worlds interpretation is workable at all means that that doesn't tell us much about whether randomness is essential to quantum-ness.

So I don't buy the claim that treating the wavefunction as real is a sort of deterministicating hack that moves us further away from a Truly Quantum understanding of the universe.

(And, incidentally, if we had a model of Truly Stochastic physics in which the evolution of the system is driven by what's inside those probability distributions -- why, then, I would rather like the idea of claiming that the probability distributions are what's real, rather than just their outcomes.)

[-]Ben

Something you and the OP might find interesting: one of those things that is basically equivalent to a wavefunction, but represented in different mathematics, is the Wigner function. It behaves almost exactly like a classical probability distribution; for example, it integrates to 1, and Bayes' rule updates it when you measure stuff. However, in order for it to "do quantum physics" it needs the ability to have small negative patches. So quantum physics can be modelled as a random stochastic process, if negative probabilities are allowed. (Incidentally, this is often used as a test of "quantumness": do I need negative probabilities to model it with local stochastic stuff? If yes, then it is quantum.)

If you are interested in a sketch of the maths: take W to be a completely normal probability distribution, describing what you know about some isolated, classical, 1D system. And take H to be the classical Hamiltonian (i.e. just a function for the system's energy). Then, the correct way of evolving your probability distribution (for an isolated, classical, 1D system) is:

$$\frac{\partial W}{\partial t} = H \left( \overleftarrow{\partial_x}\, \overrightarrow{\partial_p} - \overleftarrow{\partial_p}\, \overrightarrow{\partial_x} \right) W$$

Where the arrows on the derivatives have the obvious effect of firing them either at H or W. The first pair of derivatives in the bracket is Newton's second law (the rate of change of energy (H) with respect to x turns potentials into forces, and the rate of change with momentum on W then changes the momentum in proportion to the force); the second term is the definition of momentum (position changes are proportional to momentum).

Instead of going to operators and wavefunctions in Hilbert space, it is possible to do quantum physics by replacing the previous equation with:

$$\frac{\partial W}{\partial t} = \frac{2}{\hbar}\, H \sin\!\left( \frac{\hbar}{2} \left( \overleftarrow{\partial_x}\, \overrightarrow{\partial_p} - \overleftarrow{\partial_p}\, \overrightarrow{\partial_x} \right) \right) W$$

Where sin is understood via its Taylor series, so the first term (after the ℏ/2 and 2/ℏ cancel) is the same as the first term for classical physics. The higher-order terms (where the ℏs do not fully cancel) can result in W becoming negative in places even if it was initially all-positive. This means W is no longer exactly like a probability distribution, but is some similar but different animal. Just to mess with us, the negative patches never get big enough or deep enough for any measurement we can make (limited by the uncertainty principle) to have a negative probability of any observable outcome. H is still just a normal function of energy here.
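
For a concrete example of such a negative patch (in units with ℏ = 1): the Wigner function of the harmonic oscillator's first excited state is W₁(x, p) = (1/π)(2(x² + p²) − 1)e^(−(x² + p²)), which integrates to 1 over phase space like a probability density, yet is negative near the origin. A quick sketch of it in the same style as the post's code (names are illustrative):

// Wigner function of the n = 1 harmonic oscillator state (hbar = 1 units).
// Integrates to 1 over phase space, yet w1(0, 0) = -1/pi < 0.
def w1(x: Double, p: Double): Double = {
  val r2 = x * x + p * p
  (2 * r2 - 1) * math.exp(-r2) / math.Pi
}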

(Wikipedia is terrible for this topic. Way too much maths stuff for my taste: https://en.wikipedia.org/wiki/Moyal_bracket)

Also, the OP is largely correct when they say "destructive interference is the only issue". However, in the language of probability distributions, dealing with that involves the negative probabilities above. And once they go negative, they are not proper probabilities any more, but some new creature. This, for example, stops us from thinking of them as just our ignorance. (Although they certainly include our ignorance.)

Neat!

I'd expect Wigner functions to be less ontologically fundamental than wavefunctions, because converting a wavefunction into a real function in this way introduces a ton of redundant parameters, since now it's a function of phase space instead of configuration space. But they're still pretty cool.

[-]Ben

Imagine you have a machine that flicks a classical coin and then makes either one wavefunction or another based on the coin toss. Your ordinary ignorance of the coin toss, and the quantum stuff with the wavefunction can be rolled together into an object called a density matrix.

There is a one-to-one mapping between density matrices and Wigner functions. So, in fact, there are zero redundant parameters when using Wigner functions. In this sense they do one better than wavefunctions, where the global phase of the universe is a redundant variable. (Density matrices also don't have global phase.)

That is not to say there are no issues at all with assuming that Wigner functions are ontologically fundamental. For one, while Wigner functions work great for continuous variables (e.g. position, momentum), Wigner functions for discrete variables (e.g. qubits, or spin) are a mess. The normal approach can only deal with discrete systems whose dimension is a prime number (i.e. a particle with 3 possible spin states is fine, but 6 is not). If the number of dimensions is not prime, weird extra tricks are needed.

A second issue is that the Wigner function, being equivalent to a density matrix, combines both quantum stuff and the ignorance of the observer into one object. But the ignorance of the observer should be left behind if we were trying to raise it to being ontologically fundamental, which would require some change.

Another issue with "ontologising" the Wigner function is that you need some kind of idea of what those negatives "really mean". I spent some time thinking about "If the many worlds interpretation comes from ontologising the wavefunction, what comes from doing that to the Wigner function?" a few years ago. I never got anywhere.

Another issue with "ontologising" the Wigner function is that you need some kind of idea of what those negatives "really mean". I spent some time thinking about "If the many worlds interpretation comes from ontologising the wavefunction, what comes from doing that to the Wigner function?" a few years ago. I never got anywhere.

Wouldn't it also be many worlds, just with a richer set of worlds? Because with wavefunctions, your basis has to pick between conjugate pairs of variables, so your "worlds" can't e.g. have both positions and momentums, whereas Wigner functions tensor the conjugate pairs together, so their worlds contain both positions and momentums in one.

In some but not all imaginable Truly Stochastic worlds, perhaps it's like the probability distribution of the whole state of the universe, but OP's intuition-pumping example seems to be imagining a case where A is some small bit of the universe.

Oops, I guess I missed this part when reading your comment. No, I meant for A to refer to the whole configuration of the universe.

[-]gjm

Then it seems unfortunate that you illustrated it with a single example, in which A was a single (uniformly distributed) real number between 0 and 1.

But it's a generic type; A could be anything. I had the functional programming mindset where it was to be expected that the Distribution type would be composed into more complex distributions.

The wavefunction in quantum mechanics is not like the probability distribution of (say) where a dart lands when you throw it at a dartboard. (In some but not all imaginable Truly Stochastic worlds, perhaps it's like the probability distribution of the whole state of the universe, but OP's intuition-pumping example seems to be imagining a case where A is some small bit of the universe.)

The reason why it's not like that is that the laws describing the evolution of the system explicitly refer to what's in the wavefunction. We don't have any way to understand and describe what a quantum universe does other than in terms of the evolution of the wavefunction or something basically equivalent thereto.

In my view, the big similarity is in the principle of superposition. The evolution of the system in a sense may depend on the wavefunction, but it is an extremely rigid sense, which requires the evolution to be invariant to chopping up a superposition into a bunch of independent pieces, or chopping up a simple state into an extremely pathological superposition.

I have the impression -- which may well be very unfair -- that at some early stage OP imbibed the idea that what "quantum" fundamentally means is something very like "random", so that a system that's deterministic is ipso facto less "quantum" than a system that's stochastic. But that seems wrong to me. We don't presently have any way to distinguish random from deterministic versions of quantum physics; randomness or something very like it shows up in our experience of quantum phenomena, but the fact that a many-worlds interpretation is workable at all means that that doesn't tell us much about whether randomness is essential to quantum-ness.

It's worth emphasizing that the OP isn't really how I originally thought of QM. One of my earliest memories was of my dad explaining quantum collapse to me, and me reinventing decoherence by asking why it couldn't just be that you got entangled with the thing you were observing. It's only now, years later, that I've come to take issue with QM.

In my mind, there's four things that strongly distinguish QM systems from ordinary stochastic systems:

  • Destructive interference
  • Principle of least action (you could in principle have this and the next in deterministic/stochastic systems, but it doesn't fall out of the structure of the ontology as easily, without additional laws)
  • Preservation of information (though of course since the universe is actually quantum, this means the universe doesn't resemble a deterministic or stochastic system at the large scale, because we have thermodynamics and neither deterministic nor stochastic systems need thermodynamics)
  • Pauli exclusion principle (technically you could have this in a stochastic system too, but it feels quantum-mechanical because it can be derived from fermion products being anti-symmetric, and anti-symmetry only makes sense in quantum systems)

Almost certainly this isn't complete, since I'm mostly an autodidact (got taught a bit by my dad, read standard rationalist intros to quantum like The Sequences and Scott Aaronson, took a mathematical physics course, coded a few qubit simulations, and binged some Wikipedia and YouTube). Of these, only destructive interference really seems like an obstacle, and only a mild one.

(And, incidentally, if we had a model of Truly Stochastic physics in which the evolution of the system is driven by what's inside those probability distributions -- why, then, I would rather like the idea of claiming that the probability distributions are what's real, rather than just their outcomes.)

I would say this is cruxy for me, in the sense that if I didn't believe Truly Stochastic systems were ontologically fine, then I would take similar issue with Truly Quantum systems.

(Warning that I may well be misunderstanding this post.)

For any well-controlled isolated system, if it starts in a state |Ψ⟩, then at a later time it will be in state U|Ψ⟩ where U is a certain deterministic unitary operator. So far this is indisputable—you can do quantum state tomography, you can measure the interference effects, etc. Right?

OK, so then you say: “Well, a very big well-controlled isolated system could be a box with my friend Harry and his cat in it, and if the same principle holds, then there will be deterministic unitary evolution from |Ψ⟩ into U|Ψ⟩, and hey, I just did the math and it turns out that U|Ψ⟩ will have a 50/50 mix of ‘Harry sees his cat alive’ and ‘Harry sees his cat dead and is sad’.” This is beyond what’s possible to directly experimentally verify, but I think it should be a very strong presumption by extrapolating from the first paragraph. (As you say, “quantum computers prove larger and larger superpositions to be stable”.)

OK, and then we take one more step by saying “Hey what if I’m in the well-controlled isolated system?” (e.g. the “system” in question is the whole universe). From my perspective, it’s implausible and unjustified to do anything besides say that the same principle holds as above: if the universe (including me) starts in a state |Ψ⟩, then at a later time it will be in state U|Ψ⟩ where U is a deterministic unitary operator.

…And then there’s an indexicality issue, and you need another axiom to resolve it. For example: “as quantum amplitude of a piece of the wavefunction goes to zero, the probability that I will ‘find myself’ in that piece also goes to zero” is one such axiom, and equivalent (it turns out) to the Born rule. It’s another axiom for sure; I just like that particular formulation because it “feels more natural” or something.

I think the place anti-many-worlds-people get off the boat is this last step, because there’s actually two attitudes:

  • My attitude is: there’s a universe following orderly laws, and the universe was there long before there were any people around to observe it, and it will be there long after we’re gone, and the universe happened to spawn people and now we can try to study and understand it.
  • An opposing attitude is: the starting point is my first-person subjective mind, looking out into the universe and making predictions about what I’ll see. So my perspective is special—I need not be troubled by the fact that I claim that there are many-Harrys when Harry’s in the box and I’m outside it, but I also claim that there are not many-me’s when I’m in the box. That’s not inconsistent, because I’m the one generating predictions for myself, so the situation isn’t symmetric. If I see that the cat is dead, then the cat is dead, and if you outside the well-isolated box say “there’s a branch of the wavefunction where you saw that the cat’s alive”, then I’ll say “well, from my perspective, that alleged branch is not ‘real’; it does not ‘exist’”. In other words, when I observed the cat, I “collapsed my wavefunction” by erasing the part of the (alleged) wavefunction that is inconsistent with my indexical observations, and then re-normalizing the wavefunction.

I’m really unsympathetic to the second bullet-point attitude, but I don’t think I’ve ever successfully talked somebody out of it, so evidently it’s a pretty deep gap, or at any rate I for one am apparently unable to communicate past it.

maybe the pilot-wave model is directionally correct in the sense of informing us about the nature of knowledge?

FWIW last I heard, nobody has constructed a pilot-wave theory that agrees with quantum field theory (QFT) in general and the standard model of particle physics in particular. The tricky part is that in QFT there’s observable interference between states that have different numbers of particles in them, e.g. a virtual electron can appear then disappear in one branch but not appear at all in another, and those branches have easily-observable interference in collision cross-sections etc. That messes with the pilot-wave formalism, I think. 

FWIW last I heard, nobody has constructed a pilot-wave theory that agrees with quantum field theory (QFT) in general and the standard model of particle physics in particular. The tricky part is that in QFT there’s observable interference between states that have different numbers of particles in them, e.g. a virtual electron can appear then disappear in one branch but not appear at all in another, and those branches have easily-observable interference in collision cross-sections etc. That messes with the pilot-wave formalism, I think. 


Based on the abstracts of these papers:

QFT as pilot-wave theory of particle creation and destruction,

Bohmian Mechanics and Quantum Field Theory,

Relativistically invariant extension of the de Broglie-Bohm theory of quantum mechanics,

Making nonlocal reality compatible with relativity,

Time in relativistic and non relativistic quantum mechanics,
and the section on QFT of the Wikipedia page on de Broglie–Bohm theory, it seems like this claim is wrong. I haven't read these papers yet, but someone I was talking to said Bohmian QFT is even more unnecessarily complicated than Bohmian QM.

I don't know if anyone has re-constructed the Standard Model in this framework as of yet.
EDIT: Changed "standard Bohmian QFT" -> "Bohmian QM"

For any well-controlled isolated system, if it starts in a state |Ψ⟩, then at a later time it will be in state U|Ψ⟩ where U is a certain deterministic unitary operator. So far this is indisputable—you can do quantum state tomography, you can measure the interference effects, etc. Right?

It will certainly be mathematically well-described by an expression like that. But when you flip a coin without looking at it, it will also be well-described by a probability distribution 0.5 H + 0.5 T, and this doesn't mean that we insist that after the flip, the coin is Really In That Distribution.

Now it's true that in quantum systems, you can measure a bunch of additional properties that allow you to rule out alternative models. But my OP is more claiming that the wavefunction is a model of the universe, and the actual universe is presumably the disquotation of this, so by construction the wavefunction acts identically to how I'm claiming the universe acts, and therefore these measurements wouldn't be ruling out that the universe works that way.

Or as a thought experiment: say you're considering a simple quantum system with a handful of qubits. It can be described with a wavefunction that assigns each combination of qubit values a complex number. Now say you code up a classical computer to run a quantum simulator, which you do by using a hash map to connect the qubit combos to their amplitudes. The quantum simulator runs in our quantum universe.

Now here's the question: what happens if you have a superposition in the original quantum system? It turns into a tensor product in the universe the quantum simulator runs in, because the quantum simulator represents each branch of the wavefunction separately.

This phenomenon, where a superposition within the system gets represented by a product outside of the system, is basically a consequence of modelling the system using wavefunctions. Contrast this to if you were just running a quantum computer with a bunch of qubits, so the superposition in the internal system would map to a superposition in the external system.

I claim that this extra product comes from modelling the system as a wavefunction, and that much of the "many worlds" aspect of the many-worlds interpretation arises from this (since products represent things that both occur, whereas things in superposition are represented with just sums).
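
Here is roughly what I mean, as a toy sketch (hypothetical names, with amplitudes as a hand-rolled Complex pair like earlier in the post): applying a Hadamard gate in the hash-map simulator splits each stored basis state into two separately stored entries.

// Each simulated basis state (a vector of qubit values) is its own map entry,
// so a superposition inside the simulated system is stored as several
// concretely coexisting records in the universe the simulator runs in.
case class Complex(re: Double, im: Double)
type Basis = Vector[Int]
type SimState = Map[Basis, Complex]

def hadamard(state: SimState, q: Int): SimState = {
  val s = 1 / math.sqrt(2)
  state.toSeq
    .flatMap { case (b, Complex(re, im)) =>
      val sign = if (b(q) == 1) -1 else 1
      Seq(
        (b.updated(q, 0), Complex(s * re, s * im)),
        (b.updated(q, 1), Complex(sign * s * re, sign * s * im))
      )
    }
    .groupMapReduce(_._1)(_._2)((u, v) => Complex(u.re + v.re, u.im + v.im))
}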

OK, so then you say: “Well, a very big well-controlled isolated system could be a box with my friend Harry and his cat in it, and if the same principle holds, then there will be deterministic unitary evolution from |Ψ⟩ into U|Ψ⟩, and hey, I just did the math and it turns out that U|Ψ⟩ will have a 50/50 mix of ‘Harry sees his cat alive’ and ‘Harry sees his cat dead and is sad’.” This is beyond what’s possible to directly experimentally verify, but I think it should be a very strong presumption by extrapolating from the first paragraph. (As you say, “quantum computers prove larger and larger superpositions to be stable”.)

Yes, if you assume the wavefunction is the actual state of the system, rather than a deterministic model of the system, then it automatically follows that something-like-many-worlds must be true.

…And then there’s an indexicality issue, and you need another axiom to resolve it. For example: “as quantum amplitude of a piece of the wavefunction goes to zero, the probability that I will ‘find myself’ in that piece also goes to zero” is one such axiom, and equivalent (it turns out) to the Born rule. It’s another axiom for sure; I just like that particular formulation because it “feels more natural” or something.

Huh, I didn't know this was equivalent to the Born rule. It does feel pretty natural; do you have a reference for the proof?

I’m really unsympathetic to the second bullet-point attitude, but I don’t think I’ve ever successfully talked somebody out of it, so evidently it’s a pretty deep gap, or at any rate I for one am apparently unable to communicate past it.

I agree with the former bullet point rather than the latter.

FWIW last I heard, nobody has constructed a pilot-wave theory that agrees with quantum field theory (QFT) in general and the standard model of particle physics in particular. The tricky part is that in QFT there’s observable interference between states that have different numbers of particles in them, e.g. a virtual electron can appear then disappear in one branch but not appear at all in another, and those branches have easily-observable interference in collision cross-sections etc. That messes with the pilot-wave formalism, I think.

Someone in the comments of the last thread claimed maybe some people found out how to generalize pilot-wave to QFT. But I'm not overly attached to that claim; pilot-wave theory is obviously directionally incorrect with respect to the ontology of the universe, and even if it can be forced to work with QFT, I can definitely see how it is in tension with it.

Huh, I didn't know this was equivalent to the Born rule. It does feel pretty natural; do you have a reference for the proof?

Wasn't this the assumption originally used by Everett to recover Born statistics in his paper on MWI?

For example: “as quantum amplitude of a piece of the wavefunction goes to zero, the probability that I will ‘find myself’ in that piece also goes to zero”

What I really don't like about this formulation is the extreme vagueness of "I will find myself", which implies that there's some preferred future "I" out of many who is defined not only by observations he receives, but also by being a preferred continuation of subjective experience defined by an unknown mechanism.

It can be formalized as the many minds interpretation, incurring an additional complexity penalty and undermining the surface simplicity of the assumption. The coexistence of infinitely many (measurement operators can produce continuous probability distributions) threads of subjective experience in a single physical system also doesn't strike me as "feeling more natural".

there's some preferred future "I" out of many who is defined not only by observations he receives, but also by being a preferred continuation of subjective experience defined by an unknown mechanism

I disagree with this part—if Harry does the quantum equivalent of flipping an unbiased coin, then there’s a branch of the universe’s wavefunction in which Harry sees heads and says “gee, isn’t it interesting that I see heads and not tails, I wonder how that works, hmm why did my thread of subjective experience carry me into the heads branch?”, and there’s also a branch of the universe’s wavefunction in which Harry sees tails and says “gee, isn’t it interesting that I see tails and not heads, I wonder how that works, hmm why did my thread of subjective experience carry me into the tails branch?”. I don’t think either of these Harrys is “preferred”.

I don’t think there’s any extra “complexity penalty” associated with the previous paragraph: the previous paragraph is (I claim) just a straightforward description of what would happen if the universe and everything in it (including Harry) always follows the Schrodinger equation—see Quantum Mechanics In Your Face for details.

I think we deeply disagree about the nature of consciousness, but that’s a whole can of worms that I really don’t want to get into in this comment thread.

doesn't strike me as "feeling more natural"

Maybe you’re just going for rhetorical flourish, but my specific suggestion with the words “feels more natural” in the context of my comment was: the axiom “I will find myself in a branch of amplitude approaching 0 with probability approaching 0” “feels more natural” than the axiom “I will find myself in a branch of amplitude c with probability |c|²”. That particular sentence was not a comparison of many-worlds with non-many-worlds, but rather a comparison of two ways to formulate many-worlds. So I think your position is that you find neither of those to “feel natural”.

I haven't fully understood your stance towards the many minds interpretation. Do you find it unnecessary?

I don’t think either of these Harrys is “preferred”.

And simultaneously you think that the existence of future Harries who observe events with probabilities approaching zero is not a problem, because current Harry will almost never find himself to be those future Harries. I don't understand what that means, exactly.

Harries who observe those rare events exist, and they wonder how they found themselves in those unlikely situations. Harries who didn't find anything unusual exist too. Current Harry became all of those future Harries.

So, we have a quantum state of the universe that factorizes into states with different Harries. OK. What property distinguishes a universe where "Harry found himself in a tails branch" and a universe where "Harry found himself in a heads branch"?

You have already answered it: "I don’t think either of these Harrys is “preferred”." That is, there's no property of the universe that distinguishes those outcomes.

Let's get back to the initial question: what does it mean that "Harry will almost never find himself to be those future Harries"? To answer that, we need to jump from a single physical universe (containing a multitude of Harries who found themselves in branches of every possible probability) to a single one (or maybe a set) of those Harries, and proclaim that, indeed, that Harry (or those Harries) found himself in a usual branch of the universe, and all the other Harries don't matter for some reason (their amplitudes are too low to matter despite them being fully conscious? That's the point that I don't understand).

The many minds interpretation solves this by proposing metaphysical threads of consciousness, thus adding a property that distinguishes outcomes where Harry observes different things. So we can say that indeed the vast majority of Harries' threads of consciousness ended up in probable branches.

I don't like this interpretation. Why don't we use a single thread of consciousness that adheres to the Born rule? Or why don't we get rid of threads of consciousness altogether and just use the Copenhagen interpretation?

So, my question is: how do you tackle this problem? I hope I've made it sufficiently coherent.

 

My own resolution is that either collapse is objective, or, due to imperfect decoherence, the vast majority of branches (which also have relatively low amplitude) interfere with each other, making it impossible for conscious beings to exist in them and, consequently, observe them (it doesn't explain the billion quantum coin-flips scenario in my comment below).

I just looked up “many minds” and it’s a little bit like what I wrote here, but described differently in ways that I think I don’t like. (It’s possible that Wikipedia is not doing it justice, or that I’m misunderstanding it.) I think minds are what brains do, and I think brains are macroscopic systems that follow the laws of quantum mechanics just like everything else in the universe.

What property distinguishes a universe where "Harry found himself in a tails branch" and a universe where "Harry found himself in a heads branch"?

Those both happen in the same universe. Those Harrys both exist. Maybe you should put aside many-worlds and just think about Parfit’s teletransportation paradox. I think you’re assuming that “thread of subjective experience” is a coherent concept that satisfies all the intuitive properties that we feel like it should have, and I think that the teletransportation paradox is a good illustration that it’s not coherent at all, or at the very least, we should be extraordinarily cautious when making claims about the properties of this alleged thing you call a “thread of subjective experience” or “thread of consciousness”. (See also other Parfit thought experiments along the same lines.)

I don’t like the idea where we talk about what will happen to Harry, as if that has to have a unique answer. Instead I’d rather talk about Harry-moments, where there’s a Harry at a particular time doing particular things and full of memories of what happened in the past. Then there are future Harry-moments. We can go backwards in time from a Harry-moment to a unique (at any given time) past Harry-moment corresponding to it—after all, we can inspect the memories in future-Harry-moment’s head about what past-Harry was doing at that time (assuming there were no weird brain surgeries etc). But we can’t uniquely go in the forward direction: Who’s to say that multiple future-Harry-moments can’t hold true memories of the very same past-Harry-moment?

Here I am, right now, a Steve-moment. I have a lot of direct and indirect evidence of quantum interactions that have happened in the past or are happening right now, as imprinted on my memories, surroundings, and so on. And if you a priori picked some possible property of those interactions that (according to the Born rule) has 1-in-a-googol probability to occur in general, then I would be delighted to bet my life’s savings that this property is not true of my current observations and memories. Obviously that doesn’t mean that it’s literally impossible.

"Thread of subjective experience" was an aside (just one of the mechanisms that explains why we "find ourselves" in a world that behaves according to the Born rule), don't focus too much on it.

The core question is which physical mechanism (everything should be physical, right?) ensures that you will almost never see a string of a billion tails after a billion quantum coin flips, while the universe contains a quantum branch with you looking in astonishment at a string with a billion tails. Why should you expect that it will almost certainly not happen, when there's always a physical instance of you that sees it happen?

You'll have 2^1000000000 branches with exactly the same amplitude. You'll experience every one of them. Which physical mechanism will make it more likely for you to experience strings with roughly the same number of heads and tails?

In the Copenhagen interpretation it's trivial: when the quantum coin flipper writes down the result of the flip, the universe somehow samples from a probability distribution, and the rest is plain old probability theory. You don't expect to observe a string of a billion tails (or any other preselected string), because the you who observes this string almost never exists.

What happens in MWI?

I disagree with this part—if Harry does the quantum equivalent of flipping an unbiased coin, then there’s a branch of the universe’s wavefunction in which Harry sees heads and says “gee, isn’t it interesting that I see heads and not tails, I wonder how that works, hmm why did my thread of subjective experience carry me into the heads branch?”, and there’s also a branch of the universe’s wavefunction in which Harry sees tails and says “gee, isn’t it interesting that I see tails and not heads, I wonder how that works, hmm why did my thread of subjective experience carry me into the tails branch?”. I don’t think either of these Harrys is “preferred”.

This is how it works in MWI without additional postulates. But if you postulate the probability that you will find yourself somewhere, then you are postulating the difference between the case where you have found yourself there, and the case where you haven't. Having a number for how much you prefer something is the whole point of indexical probabilities. And as probability of some future "you" goes to zero, this future "you" goes to not being the continuation of your subjective experience, right? Surely that would make this "you" dispreferred in some sense?

I wrote “flipping an unbiased coin” so that’s 50/50.

Where could I find the proof that “as quantum amplitude of a piece of the wavefunction goes to zero, the probability that I will ‘find myself’ in that piece also goes to zero” is equivalent to the Born rule?

The Quantum Mechanics In Your Face talk by Sidney Coleman, starting at slide 17, near the end. The basic idea is to try to operationalize how someone might test the Born rule—they take a bunch of quantum measurements, one after another, and they subject their data to a bunch of randomness tests and so on, and then they eventually declare “Born rule seems true” or “Born rule seems false” after analyzing the data. And you can show that the branches in which this person declares “Born rule seems false” have collective amplitude approaching zero, in the limit as their test procedure gets better and better (i.e. as they take more and more measurements).
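
To make the shrinking concrete, here's a small numeric sketch (my own illustration, not from the talk): for n independent measurements whose two outcomes carry Born weights p and 1 − p, the total squared amplitude of the branches whose observed frequency deviates from p by more than ε is a binomial tail, and it goes to zero as n grows.

// Total squared amplitude of "Born rule seems false" branches: those where
// the observed frequency of outcome 1 deviates from p by more than eps.
// Uses the binomial pmf recurrence; Double underflows for very large n,
// so keep n up to ~1000.
def deviantBranchWeight(n: Int, p: Double, eps: Double): Double = {
  var pmf = math.pow(1 - p, n) // P(k = 0)
  var total = 0.0
  for (k <- 0 to n) {
    if (math.abs(k.toDouble / n - p) > eps) total += pmf
    pmf *= (n - k).toDouble / (k + 1) * p / (1 - p) // advance to P(k + 1)
  }
  total
}

// deviantBranchWeight(100, 0.5, 0.1) is a few percent;
// deviantBranchWeight(1000, 0.5, 0.1) is of order 10^-10.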

I was assigned this reading for a class once but only skimmed it - now I wish I'd read it more closely!

Good post, and I basically agree with this. I do think it's good to mostly focus on the experimental implications when talking about these things. When I say "many worlds", what I primarily mean is that I predict that we should never observe a spontaneous collapse, even if we do crazy things like putting conscious observers into superposition, or putting large chunks of the gravitational field into superposition. So if we ever did observe such a spontaneous collapse, that would falsify many worlds.

Sampling is not the way randomness is usually modelled in mathematics, partly because mathematics is deterministic and so you can't model randomness in this way

As a matter of fact, it is modeled this way. To define a probability function you need a sample space, from which exactly one outcome is "sampled" in every iteration of the probability experiment.

But yes, the math is deterministic, so it's not "true randomness" but pseudo-randomness; so, just like with every software library, it's a hidden-variables model rather than a Truly Stochastic model.

And this is why I have trouble with the idea of "true randomness" being philosophically coherent. If there is no mathematical way to describe it, in what way can we say that it's coherent?

Like, the point of many-worlds theory in practice isn't to postulate that we should go further away from quantum mechanics by assuming that everything is secretly deterministic.

The point is to describe quantum mechanics as it is. If quantum mechanics is deterministic, we want to describe it as deterministic. If quantum mechanics is not deterministic, we do not want to describe quantum mechanics as deterministic. The fact that the many-worlds interpretation describes quantum mechanics as deterministic can be considered "going further from quantum mechanics" only if it's, in fact, not deterministic, which is not known to be the case. QM just has a vibe of "randomness" and "indeterminism" around it, due to historic reasons, but whether it is actually deterministic or not is an open question.

As a matter of fact, it is modeled this way. To define a probability function you need a sample space, from which exactly one outcome is "sampled" in every iteration of the probability experiment.

No, that's for random variables, but in order to have random variables you first need a probability distribution over the outcome space.

And this is why I have trouble with the idea of "true randomness" being philosophically coherent. If there is no mathematical way to describe it, in what way can we say that it's coherent?

You could use a mathematical formalism that contains True Randomness, but 1. such formalisms are unwieldy, 2. that's just passing the buck to the one who interprets the formalism.

  1. such formalisms are unwieldy

Do you actually need any other reason to not believe in True Randomness?

  2. that’s just passing the buck to the one who interprets the formalism

Any argument is just passing the buck to the one who interprets the language.

Do you actually need any other reason to not believe in True Randomness?

I think I used to accept this argument, but then came to believe that simplicity of formalisms usually originates from renormalization more than from the simplicity being Literally True?

My read of the post is not that many worlds is wrong, but rather that it's not uniquely correct, that many worlds has some issues of its own, and that other theories are at least coherent.

Is this a correct reading of this post?

I guess it's hard to answer because it depends on three degrees of freedom:

  • Whether you agree with my assessment that it's mostly arbitrary to demand that the fundamental ontology be deterministic rather than stochastic or quantum,
  • Whether you count "many worlds" as literally asserting that the wavefunction as a classical mathematical object is real, or as simply distancing oneself from collapse/hidden variables,
  • Whether you even aim to describe what is ontologically fundamental in the first place.

I'm personally inclined to say the many-worlds interpretation is technically wrong, hence the title. But I have basically suggested people could give different answers to these sorts of degrees of freedom, and so I could see other people having different takeaways.

Why do you speak of deterministic, stochastic, and quantum as three options for a fundamental ontology? In the absence of a measurement/collapse postulate, quantum mechanics is a deterministic theory, and with a collapse postulate, it's a stochastic theory in the sense that the state of the system evolves deterministically except for instantaneous stochastic jumps when "measurements" occur.

Also, what do you mean by "the wavefunction as a classical mathematical object"?

In the absence of a measurement/collapse postulate, quantum mechanics is a deterministic theory

You can make a deterministic theory of stochasticity using many-worlds too.

In the absence of a postulate that the wavefunction is Literally The Underlying State, rather than just a way we describe the system deterministically, quantum dynamics doesn't fit under a deterministic ontology.

Also, what do you mean by "the wavefunction as a classical mathematical object"?

If you have some basis B, you can represent quantum systems using functions B → ℂ (or perhaps more naturally, as elements of ℂ[B], where ℂ[−] denotes the free vector space construction, but then we get into category theory, and that's a giant nerdsnipe).

Okay, so by "wavefunction as a classical mathematical object" you mean a vector in Hilbert space? In that case, what do you mean by the adjective "classical"?

Okay, so by "wavefunction as a classical mathematical object" you mean a vector in Hilbert space?

Yes.

In that case, what do you mean by the adjective "classical"?

There are a lot of variants of math; e.g. homotopy type theory, abstract Stone duality, nonstandard analysis, etc. Maybe one could make up a variant of math that could embed wavefunctions more natively.

Hmmm, I think there's still some linguistic confusion remaining. While we certainly need to invent new mathematics to describe quantum field theory, are you making the stronger claim that there's something "non-native" about the way that wavefunctions in non-relativistic quantum mechanics are described using functional analysis? Especially since a lot of modern functional analysis theory was motivated by quantum mechanics, I don't see how a new branch of math could describe wavefunctions more natively.

Measure theory and probability theory were developed to describe stochasticity and uncertainty, but they formalize it in many-worlds terms, closely analogous to how the wavefunction is formalized in quantum mechanics. If one takes the wavefunction formalism literally to the point of believing that quantum mechanics must have many worlds, it seems natural to take the probability distribution formalism equally literally, to the point of believing that probability must have many worlds too. Or, well, you can have a hidden-variables theory of probability too, but the point is that it seems like you would have to abandon True Stochasticity.

True Stochasticity vs probability distributions provides a non-quantum example of the non-native embedding, so if you accept the existence of True Stochasticity as distinct from many worlds of simultaneous possibility or ignorance of hidden variables, then that provides a way to understand my objection. Otherwise, I don't yet know a way to explain it, and am not sure one exists.

As for the case of how a new branch of math could describe wavefunctions more natively, there's a tradeoff where you can put in a ton of work and philosophy to make a field of math that describes an object completely natively, but it doesn't actually help the day-to-day work of a mathematician, and it often restricts the tools you can work with (e.g. no excluded middle and no axiom of choice), so people usually don't. Instead they develop their branch of math within classical math with some informal shortcuts.

If the simplest assumption is that the world is just quantum mechanical

It isn't a simpler assumption? Mathematically, "one thing is real" is not simpler than "everything is real". And I wouldn't call a "philosophically, but not mathematically, coherent" objection "technical". Like, are you saying the mathematical model of true stochasticity (with some "one thing is real" formalization) is somehow incomplete or imprecise or wrong, because mathematics is deterministic? Because it's not like the laws of a truly stochastic world are themselves stochastic.

Like, are you saying the mathematical model of true stochasticity (with some "one thing is real" formalization) is somehow incomplete or imprecise or wrong, because mathematics is deterministic?

Which model are you talking about here?

I don't know, any model you like? Space of outcomes with "one outcome is real" axiom. The point is that I can understand the argument for why true stochasticity may be coherent, but I don't get why it would be better.

with "one outcome is real" axiom

How would you formulate this axiom?

The point is that I can understand the argument for why the true stochasticity may be coherent, but I don't get why it would be better.

I find your post hard to respond to because it asks me to give my opinion on "the" mathematical model of true stochasticity, yet I argued that classical math is deterministic and the usual way you'd model true stochasticity in it is as many-worlds, which I don't think is what you mean (?).

"The" was just me being bad in English. What I mean is:

  1. There is probably a way to mathematically model true stochasticity. Properly, not as many-worlds.
  2. Math being deterministic shouldn't be a problem, because the laws of a truly stochastic world are not themselves stochastic.
  3. I don't expect any such model to be simpler than the many-worlds model. And that's why you shouldn't believe in true stochasticity.
  4. If 1 is wrong and it's not possible to mathematically model true stochasticity, then it's even worse, and I would question your assertion that true stochasticity is coherent.
  5. If you say that mathematical models turn out complex because deterministic math is an unnatural language for true stochasticity, then how do you compare them without math? The program that outputs an array is also simpler than the one that outputs one sample from that array.

How would you formulate this axiom?

Ugh, I'm bad at math. Let's say: given the space of outcomes O and a reality predicate R, the axiom would be ∃!o ∈ O. R(o).

[-]TAG

Meanwhile the many-worlds interpretation suffers from the problem that it is hard to bridge to experience,

Operationally, it's straightforward: you keep "erasing the part of the (alleged) wavefunction that is inconsistent with my indexical observations, and then re-normalizing the wavefunction"... all the time murmuring under your breath "this is not collapse... this is not collapse".

(Lubos Motl is quoted making a similar comment here https://www.lesswrong.com/posts/2D9s6kpegDQtrueBE/multiple-worlds-one-universal-wave-function?commentId=8CXRntS3JkLbBaasx)