Unless there is a surprising amount of coherence between worlds with different lottery outcomes, this mangled worlds model should still be vulnerable to my lottery winning technique (split the world a bunch of times if you win).
Hi,
I haven't commented in a while. I'm just curious: are there any non-physicists who are able to follow this whole quantum series? I gave up some posts ago.
Peace!
You wouldn't have to stand off beside the mathematical structure of the universe, and say, "Okay, now that you're finished computing all the mere numbers, I'm furthermore telling you that the squared modulus is the 'degree of existence'."
Instead, you'd have to stand off beside the mathematical structure of the universe, and say, "Okay, now that you're finished computing all the mere numbers, I'm furthermore telling you that the world count is the 'degree of existence'."
A major problem with Robin's theory is that it seems to predict things like, "We should find ourselves in a universe in which lots of decoherence events have already taken place," which tendency does not seem especially apparent.
Actually the theory suggests we should find ourselves in a state with near the least feasible number of past decoherence events. Yes, it is not clear if this in fact holds, and yes I'd put the chance of something like mangled worlds being right as more like 1/4 or 1/3.
Thanks to Eliezer's QM series, I'm starting to have enough background to understand Robin's paper (kind of, maybe). And now that I do (kind of, maybe), it seems to me that Robin's point is completely demolished by Wallace's points about decoherence being continuous rather than discrete and therefore there being no such thing as a number of discrete worlds to count.
There seems to be nothing to resolve between the probabilities given by measure and the probabilities implied by world count if you simply say that measure is probability.
Eliezer objects. We're...
Eddie,
My understanding of Eli's beef with the Born rule is this (he can correct me if I'm wrong): the Born rule appears to be a bridging rule in fundamental physics that directly tells us something about how qualia bind to the universe. This seems odd. Furthermore, if the binding of qualia to the universe is given by a separate fundamental bridging rule independent of the other laws of physics, then the zombie world really is logically possible, or in other words epiphenomenalism is true. (Just postulate a universe with all the laws of physics except Born...
None of the confusion over duplication and quantum measures seems unique to beings with qualia; any Bayesian system capable of anthropic reasoning, it would seem, should be surprised the universe is orderly. So maybe either the confusion is separate from and deeper than experience, or AIXItl has qualia.
As I understand it (someone correct me if I'm wrong), there are two problems with the Born rule: 1) It is non-linear, which suggests that it's not fundamental, since other fundamental laws seem to be linear
2) From my reading of Robin's article, I gather that the problem with the many-worlds interpretation is: let's say a world is created for each possible outcome (countable or uncountable). In that case, the vast majority of worlds should end up away from the peaks of the distribution, just because the peaks only occupy a small part of any distribution.
Rob...
Nick: I don't understand the connection to quantum mechanics.
The argument that I commonly see relating quantum mechanics to anthropic reasoning is deeply flawed. Some people seem to think that many worlds means there are many "branches" of the wavefunction and we find ourselves in them with equal probability. In this case, they argue, we should expect to find ourselves in a disorderly universe. However, this is exactly what the Born rule (and experiment!) does not say. Rather, the Born rule says that we are only likely to find ourselves in states...
In this case epiphenomenalism would be true (since qualia have no effect on the physical world), but the correlation would not be a coincidence (since the physical world directly causes qualia).
But the nature of the experiences we claimed to have would not depend in any way on the properties of these hypothetical 'qualia'. There would be no event in the physical world that would be affected by them - they would not, in fact, exist.
Epiphenomenalism is never true, because it contains a contradiction in terms.
Here's a different question which may be relevant: why unitary transforms?
That is, if you didn't in the first place know about the Born rule, what would be a (even semi) intuitive justification for the restriction that all "reasonable" transforms/time evolution operators have to conserve the squared magnitude?
Given the Born rule, it seems rather obvious, but the Born rule itself is what currently appears to be suspiciously out of place. So, if that arises out of something more basic, then why the unitary rule in the first place?
Stephen, thanks for your thoughts on Eli's thoughts. I'm going to have to think on them further - after all these helpful posts I can pretend I understand quantum mechanics, but pretending to understand how conscious minds perceive a single point in configuration space instead of blobs of amplitude is going to take more work.
I will point out, though, that the question of how consciousness is bound to a particular branch (and thus why the Born rule works like it does) doesn't seem that much different from how consciousness is tied to a particular point in ...
"Given the Born rule, it seems rather obvious, but the Born rule itself is what currently appears to be suspiciously out of place. So, if that arises out of something more basic, then why the unitary rule in the first place?"
While not an answer, I know of a relevant comment. Suppose you assume that a theory is linear and preserves some norm. What norm might it be? Before addressing this, let's say what a norm is. In mathematics a norm is defined to be some function on vectors that is only zero for the all zeros vector, and obeys the triangle i...
"I will point out, though, that the question of how consciousness is bound to a particular branch (and thus why the Born rule works like it does) doesn't seem that much different from how consciousness is tied to a particular point in time or to a particular brain when the Spaghetti Monster can see all brains in all times and would have to be given extra information to know that my consciousness seems to be living in this particular brain at this particular time."
Agreed!
More generally, it seems to me that many objections people raise about the fo...
Psy-Kosh, the amplitudes of everything everywhere could be changing by a constant modulus and phase, without it being noticed. But if it were possible for you to carry out some physical process that changed the squared modulus of the LEFT blob as a whole, without splitting it and without changing the squared modulus of the RIGHT blob, then you would be able to use this physical process to change the ratio of the squared moduli of LEFT and RIGHT, hence control the outcome of arbitrary quantum experiments by invoking it selectively.
It would be an Outcome Pu...
Stephen: Thanks. First, not everything corresponding to a length or such obeys that particular rule... consider the Lorentz metric... any "lightlike" vector has a norm of zero, for instance, and yet that particular metric is rather useful physically. :) (admittedly, you get that via the minus sign, and if your norm is such that it treats all the components in some sense equivalently, you don't get that... well, what about norms involving cross terms?)
More to the subject... why is any norm preserved? That is, why only allow norm preserving transfor...
Psy-Kosh:
Good example with the Lorentz metric.
Invariance of norm under permutations seems a reasonable assumption for state spaces. On the other hand, I now realize the answer to my question about whether permutation invariance narrows things down to p-norms is no. A simple counterexample is a linear combination of two different p-norms.
I think there might be a good reason to think in terms of norm-preserving maps. Namely, suppose the norms can be anything but the individual amplitudes don't matter, only their ratios do. That is, states are identified not ...
I'm struck by guilt for having spoken of "ratios of amplitudes". It makes the proposal sound more specific and fully worked-out than it is. Let me just replace that phrase in my previous post with the vaguer notion of "relative amplitudes".
Stephen: Is the point you're making basically along the lines of "vector as geometric object rather than list of numbers"?
Sure, I buy that. Heck, I'm naturally inclined toward that perspective at this time. (In part because I have been studying GR lately.)
Aaanyways, so I guess basically what you're saying is that all operators corresponding to time evolution or whatever are just rotations or such in the space? And why the 2-norm instead of, say, the 1-norm? why would the universe "prefer" to preserve the sum of the squared magnitudes rathe...
@Roland: My physics and maths is patchy but I'm still just about following (the posts - some comments are way too advanced) though it is hard work for some bits. Lots of slow re-reading, looking things up and revising old posts, but it's worth it.
If you're determined enough, try reading the posts a few at a time (instead of one a day) starting a few posts before where you got stuck, and make sure you "get" each one before you move on, even if it means an hour on another web source studying the thing you don't understand in Eliezer's explanation.
Psy-Kosh:
"Or did I completely and utterly misunderstand what you were trying to say?"
No, you are correctly interpreting me and noticing a gap in the reasoning of my preceding post. Sorry about that. I re-looked-up Scott's paper to see what he actually said. If, as you propose, you allow invertible but non-norm-preserving time evolutions and just re-adjust the norm afterwards then you get FTL signalling, as well as obscene computational power. The paper is here.
A major problem with Robin's theory is that it seems to predict things like, "We should find ourselves in a universe in which lots of decoherence events have already taken place," which tendency does not seem especially apparent.
Actually the theory suggests we should find ourselves in a state with near the least feasible number of past decoherence events
I don't understand this - doesn't decoherence occur all the time, in every quantum interaction between all amplitudes all the time? So, like for every amplitude separate enough to be a "particle"...
Stephen: I don't have a postscript viewer.
Wait, I thought the superpower stuff only happens if you allow nonlinear transforms, not just nonunitary. Let's add an additional restriction: let's actually throw in some notion of locality, but even with the locality, abandon unitarity. So our rules are "linear, local, invertible" (no rescaling afterwards... not defining a norm to preserve in the first place)... or does locality necessitate unitarity? (Is unitarity a word? Well, you know what I mean. Maybe I should say orthogonality instead?)
Well, actu...
"If you didn't know squared amplitudes corresponded to probability of experiencing a state, would you still be able to derive "nonunitary operator -> superpowers?""
Scott looks at a specific class of models where you assume that your state is a vector of amplitudes, and then you use a p-norm to get the corresponding probabilities. If you demand that the time evolutions be norm-preserving then you're stuck with permutations. If you allow non-norm-preserving time evolution, then you have to readjust the normalization before calculating ...
Stephen: Aaah, okay. And yeah, that's why I said no rescaling.
I mean, if one didn't already have the "probability of experiencing something is linear in p-norm..." thing, would one still be able to argue superpowers?
From your description, it looks like he still has to use the principle of "probability of experiencing something proportional to p-norm" to justify the superpowers thing.
Browsed through the paper, and, if I interpreted it right, that is kinda what it was doing... Assume there's some p-norm corresponding to probability. But ma...
are all the norms invariant under permutation of the indices p-norms?
Well, you answered that exact question, but here's a description of all norms (on a finite dimensional real vector space): a norm determines the set of all vectors of norm less than or equal to 1. This is convex and symmetric under inverting sign (if you wanted complex, you'd have to allow multiplication by complex units). It determines the norm: the norm of a vector is the amount you have to scale the set to envelope the vector. Any set satisfying those conditions determines a norm.
So th...
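The counterexample mentioned earlier (a linear combination of two different p-norms, which is permutation-invariant but not itself a p-norm) can be checked numerically. Here is a minimal sketch in Python; the particular mixture and test vectors are illustrative choices of mine, not anything from the discussion above:

```python
import numpy as np

def mixed_norm(v):
    # A linear combination of the 1-norm and the 2-norm: it satisfies all the
    # norm axioms and is invariant under permuting the components.
    v = np.asarray(v, dtype=float)
    return np.sum(np.abs(v)) + np.sqrt(np.sum(v ** 2))

def p_norm(v, p):
    v = np.asarray(v, dtype=float)
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

e1 = np.array([1.0, 0.0, 0.0])
ones = np.array([1.0, 1.0, 1.0])
pair = np.array([1.0, 1.0, 0.0])

# Norms are only determined up to overall scale, so compare ratios against e1.
# Solve for the unique p that matches mixed_norm on `ones`...
target = mixed_norm(ones) / mixed_norm(e1)  # = (3 + sqrt(3)) / 2
p = np.log(3) / np.log(target)              # since p_norm(ones, p) / p_norm(e1, p) = 3**(1/p)
assert np.isclose(p_norm(ones, p) / p_norm(e1, p), target)

# ...and observe that this p fails to match on a third vector, so no single
# p-norm agrees with the mixture everywhere.
assert not np.isclose(p_norm(pair, p) / p_norm(e1, p),
                      mixed_norm(pair) / mixed_norm(e1))
```

So permutation invariance alone does not narrow the candidates down to p-norms, exactly as noted above.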
Weren't the Born probabilities successfully derived from decision theory for the MWI in 2007 by Deutsch: "Probabilities used to be regarded as the biggest problem for Everett, but ironically, they are now its most powerful success" - http://forum.astroversum.nl/viewtopic.php?p=1649
If anyone can produce a cellular automaton model that can create circles like those which relate to the inverse square of distance or the stuff of early wave mechanics, I think I can bridge the MWI view and the one universe of many fidgetings view that I cling to. I know of one other person who has a similar idea; unfortunately his idea has a bizarre quantity which is the square root of a meter.
Consider for example what "scattering experiments" show, in a context of imagining that the universe is made of fields and that only "observation" makes a manifestation in a small region of space? I mean, suppose we think of the "observations" as being our detecting the impacts of the "scattered" electrons rather than the scatterings themselves. (IOW, we don't consider "mere" interactions to be observations - whatever that means.) But then why and how did the waves representing the electrons scatter as if o...
My guess is that the Born rule is related to the Solomonoff prior. Consider a program P that takes 4 inputs:
What P does is take the boundary conditions, use Schrödinger's equation to compute the wavefunction at time T, then sample the wavefunction using the Born probabilities and the random input string, and finally output the particles in the region R and their relative positions.
Suppose this program, along with the inputs that cause it to output the descrip...
The Transactional Interpretation of QM resolves the mystery of where this nonlinear squared modulus comes from quite neatly. On that basis alone, I'm surprised that Eliezer doesn't even mention it as a serious rival to MWI.
See http://www.npl.washington.edu/npl/int_rep/tiqm/TI_toc.html
First of all - great sequence! I had a lot of 'I see!'-moments reading it. I study physics, but often the clear picture gets lost in the standard approach and one is left with a lot of calculating techniques without any intuitive grasp of the subject. After reading this I became very fond of tutoring the course on quantum mechanics and always tried to give some deeper insight (many of which was taken from here) in addition to just explaining the exercises. If I am correct, the world mangling theory just tries to explain some anomalies, but the rule of squa...
Suppose that the probability of an observer-moment is determined by its complexity, instead of the probability of a universe being determined by its complexity and the probability of an observation within that universe being described by some different anthropic selection.
You can specify a particular human's brain by describing the universal wave function and then pointing to a brain within that wave function. Now the mere "physical existence" of the brain is not relevant to experience; it is necessary to describe precisely how to extract a descr...
Could the flow of amplitude between blobs we normally think of as separated following a measurement possibly explain the quantum field theory prediction/phenomenon of vacuum fluctuations?
I'm a bit puzzled by the problem here. What's wrong with the interpretation that the Born probabilities just are the limiting frequencies in infinite independent repetitions of the same experiment? Further, that these limiting frequencies really are defined because the universe really is spatially infinite, with infinitely many causally isolated regions. There is nothing hypothetical at all about the infinite repetition - it actually happens.
My understanding is that in such a universe model, the Everett-Wheeler version of quantum theory makes a precise pre...
Perhaps I'm being too simplistic, but I see a decent explanation that doesn't get as far into the weeds as some of the others. It's proportional to the square because both the event being observed and the observer need to be in the same universe. If the particle can be in A or B, the odds are:
P(A)&O(A) = A^2
P(B)&O(B) = B^2
P(A)&O(B) = Would be AB, but this is physically impossible.
P(B)&O(A) = Would be AB, but this is physically impossible.
Squares fall out naturally.
Previously in series: Decoherence is Pointless
Followup to: Where Experience Confuses Physicists
One serious mystery of decoherence is where the Born probabilities come from, or even what they are probabilities of. What does the integral over the squared modulus of the amplitude density have to do with anything?
This was discussed by analogy in "Where Experience Confuses Physicists", and I won't repeat arguments already covered there. I will, however, try to convey exactly what the puzzle is, in the real framework of quantum mechanics.
A professor teaching undergraduates might say: "The probability of finding a particle in a particular position is given by the squared modulus of the amplitude at that position."
This is oversimplified in several ways.
First, for continuous variables like position, amplitude is a density, not a point mass. You integrate over it. The integral over a single point is zero.
(Historical note: If "observing a particle's position" invoked a mysterious event that squeezed the amplitude distribution down to a delta point, or flattened it in one subspace, this would give us a different future amplitude distribution from what decoherence would predict. All interpretations of QM that involve quantum systems jumping into a point/flat state, which are both testable and have been tested, have been falsified. The universe does not have a "classical mode" to jump into; it's all amplitudes, all the time.)
Second, a single observed particle doesn't have an amplitude distribution. Rather the system containing yourself, plus the particle, plus the rest of the universe, may approximately factor into the multiplicative product of (1) a sub-distribution over the particle position and (2) a sub-distribution over the rest of the universe. Or rather, the particular blob of amplitude that you happen to be in, can factor that way.
So what could it mean, to associate a "subjective probability" with a component of one factor of a combined amplitude distribution that happens to factorize?
Recall the physics for:

(Human-BLANK) * (Sensor-BLANK) * (Atom-LEFT + Atom-RIGHT)
    => (Human-LEFT) * (Sensor-LEFT) * (Atom-LEFT)
     + (Human-RIGHT) * (Sensor-RIGHT) * (Atom-RIGHT)

Think of the whole process as reflecting the good-old-fashioned distributive rule of algebra. The initial state can be decomposed—note that this is an identity, not an evolution—into:

(Human-BLANK) * (Sensor-BLANK) * (Atom-LEFT)
    + (Human-BLANK) * (Sensor-BLANK) * (Atom-RIGHT)
We assume that the distribution factorizes. It follows that the term on the left, and the term on the right, initially differ only by a multiplicative factor of Atom-LEFT vs. Atom-RIGHT.
If you were to immediately take the multi-dimensional integral over the squared modulus of the amplitude density of that whole system,
Then the ratio of the all-dimensional integral of the squared modulus over the left-side term, to the all-dimensional integral over the squared modulus of the right-side term,
Would equal the ratio of the lower-dimensional integral over the squared modulus of the Atom-LEFT, to the lower-dimensional integral over the squared modulus of Atom-RIGHT,
For essentially the same reason that if you've got (2 * 3) * (5 + 7), the ratio of (2 * 3 * 5) to (2 * 3 * 7) is the same as the ratio of 5 to 7.
Doing an integral over the squared modulus of a complex amplitude distribution in N dimensions doesn't change that.
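The distributive-ratio point carries over directly to complex amplitudes: if two terms of a sum share every factor except one, the ratio of their total integrated squared moduli equals the ratio of the squared moduli of the differing factors. A small numeric sketch (the amplitudes and dimensions here are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared factors: amplitude sub-distributions standing in for "Human" and "Sensor".
human = rng.normal(size=8) + 1j * rng.normal(size=8)
sensor = rng.normal(size=8) + 1j * rng.normal(size=8)

# Differing factors: single complex components standing in for Atom-LEFT and Atom-RIGHT.
atom_left = 0.8 + 0.1j
atom_right = -0.3 + 0.4j

# Joint amplitude distributions for the two terms (outer products of the factors).
term_left = np.outer(human, sensor) * atom_left
term_right = np.outer(human, sensor) * atom_right

# "Integrals" (sums, in this discrete toy) over the squared modulus of each whole term.
sq_left = np.sum(np.abs(term_left) ** 2)
sq_right = np.sum(np.abs(term_right) ** 2)

# The ratio of whole-term integrals equals the ratio of the atom components' squared moduli.
assert np.isclose(sq_left / sq_right, abs(atom_left) ** 2 / abs(atom_right) ** 2)
```

The shared factors sum out of the ratio, just as the 2 * 3 summed out of the ratio of (2 * 3 * 5) to (2 * 3 * 7).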
There's also a rule called "unitary evolution" in quantum mechanics, which says that quantum evolution never changes the total integral over the squared modulus of the amplitude density.
So if you assume that the initial left term and the initial right term evolve, without overlapping each other, into the final LEFT term and the final RIGHT term, they'll have the same ratio of integrals over etcetera as before.
What all this says is that,
If some roughly independent Atom has got a blob of amplitude on the left of its factor, and a blob of amplitude on the right,
Then, after the Sensor senses the atom, and you look at the Sensor,
The integrated squared modulus of the whole LEFT blob, and the integrated squared modulus of the whole RIGHT blob,
Will have the same ratio,
As the ratio of the squared moduli of the original Atom-LEFT and Atom-RIGHT components.
This is why it's important to remember that apparently individual particles have amplitude distributions that are multiplicative factors within the total joint distribution over all the particles.
If a whole gigantic human experimenter made up of quintillions of particles,
Interacts with one teensy little atom whose amplitude factor has a big bulge on the left and a small bulge on the right,
Then the resulting amplitude distribution, in the joint configuration space,
Has a big amplitude blob for "human sees atom on the left", and a small amplitude blob of "human sees atom on the right".
And what that means, is that the Born probabilities seem to be about finding yourself in a particular blob, not the particle being in a particular place.
But what does the integral over squared moduli have to do with anything? On a straight reading of the data, you would always find yourself in both blobs, every time. How can you find yourself in one blob with greater probability? What are the Born probabilities, probabilities of? Here's the map—where's the territory?
I don't know. It's an open problem. Try not to go funny in the head about it.
This problem is even worse than it looks, because the squared-modulus business is the only non-linear rule in all of quantum mechanics. Everything else—everything else—obeys the linear rule that the evolution of amplitude distribution A, plus the evolution of the amplitude distribution B, equals the evolution of the amplitude distribution A + B.
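That linearity claim is concrete enough to check on a toy system: any fixed linear evolution operator U satisfies U(A) + U(B) = U(A + B), while the squared modulus does not. A sketch, with a random unitary standing in for "the evolution":

```python
import numpy as np

rng = np.random.default_rng(2)

# A fixed evolution operator: Q from a QR decomposition of a random complex
# matrix is unitary, but linearity alone is all the first assertion needs.
n = 8
u, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Two arbitrary amplitude distributions over an n-point configuration space.
a = rng.normal(size=n) + 1j * rng.normal(size=n)
b = rng.normal(size=n) + 1j * rng.normal(size=n)

# Evolving A and B separately and adding gives the same result as evolving A + B.
assert np.allclose(u @ a + u @ b, u @ (a + b))

# The squared modulus, by contrast, is not linear: the cross term spoils it.
assert not np.allclose(np.abs(a + b) ** 2, np.abs(a) ** 2 + np.abs(b) ** 2)
```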
When you think about the weather in terms of clouds and flapping butterflies, it may not look linear on that higher level. But the amplitude distribution for weather (plus the rest of the universe) is linear on the only level that's fundamentally real.
Does this mean that the squared-modulus business must require additional physics beyond the linear laws we know—that it's necessarily futile to try to derive it on any higher level of organization?
But even this doesn't follow.
Let's say I have a computer program which computes a sequence of positive integers that encode the successive states of a sentient being. For example, the positive integers might describe a Conway's-Game-of-Life universe containing sentient beings (Life is Turing-complete) or some other cellular automaton.
Regardless, this sequence of positive integers represents the time series of a discrete universe containing conscious entities. Call this sequence Sentient(n).
Now consider another computer program, which computes the negative of the first sequence: -Sentient(n). If the computer running Sentient(n) instantiates conscious entities, then so too should a program that computes Sentient(n) and then negates the output.
Now I write a computer program that computes the sequence {0, 0, 0...} in the obvious fashion.
This sequence happens to be equal to the sequence Sentient(n) + -Sentient(n).
So does a program that computes {0, 0, 0...} necessarily instantiate as many conscious beings as both Sentient programs put together?
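The arithmetic behind the thought experiment is trivial to exhibit; the function below is of course a stand-in for a sentient computation, not an actual one:

```python
# Stand-in for a sequence of positive integers encoding a sentient computation.
def sentient(n):
    return (n * n + 3 * n + 7) % 1000  # arbitrary; always a positive integer

def neg_sentient(n):
    return -sentient(n)

def zeros(n):
    return 0  # the "obvious" program for {0, 0, 0...}

# Termwise, zeros(n) == sentient(n) + neg_sentient(n) for every n...
assert all(zeros(n) == sentient(n) + neg_sentient(n) for n in range(100))
# ...yet the program computing zeros performs neither of the two computations.
```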
Admittedly, this isn't an exact analogy for "two universes add linearly and cancel out". For that, you would have to talk about a universe with linear physics, which excludes Conway's Life. And then in this linear universe, two states of the world both containing conscious observers—world-states equal but for their opposite sign—would have to cancel out.
It doesn't work in Conway's Life, but it works in our own universe! Two quantum amplitude distributions can contain components that cancel each other out, and this demonstrates that the number of conscious observers in the sum of two distributions, need not equal the sum of conscious observers in each distribution separately.
So it actually is possible that we could pawn off the only non-linear phenomenon in all of quantum physics onto a better understanding of consciousness. The question "How many conscious observers are contained in an evolving amplitude distribution?" has obvious reasons to be non-linear.
(!)
Robin Hanson has made a suggestion along these lines.
(!!)
Decoherence is a physically continuous process, and the interaction between LEFT and RIGHT blobs may never actually become zero.
So, Robin suggests, any blob of amplitude which gets small enough, becomes dominated by stray flows of amplitude from many larger worlds.
A blob which gets too small, cannot sustain coherent inner interactions—an internally driven chain of cause and effect—because the amplitude flows are dominated from outside. Too-small worlds fail to support computation and consciousness, or are ground up into chaos, or merge into larger worlds.
Hence Robin's cheery phrase, "mangled worlds".
The cutoff point will be a function of the squared modulus, because unitary physics preserves the squared modulus under evolution; if a blob has a certain total squared modulus, future evolution will preserve that integrated squared modulus so long as the blob doesn't split further. You can think of the squared modulus as the amount of amplitude available to internal flows of causality, as opposed to outside impositions.
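The invariance being appealed to here is easy to verify numerically: applying any unitary matrix to an amplitude vector leaves the total squared modulus unchanged. A minimal sketch (random unitary via QR decomposition; the dimension is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a random unitary matrix: QR decomposition of a complex Gaussian matrix.
n = 16
m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
u, _ = np.linalg.qr(m)

# A random (unnormalized) amplitude vector, standing in for a blob of amplitude.
psi = rng.normal(size=n) + 1j * rng.normal(size=n)

# Unitary evolution preserves the total (summed) squared modulus.
before = np.sum(np.abs(psi) ** 2)
after = np.sum(np.abs(u @ psi) ** 2)
assert np.isclose(before, after)
```

So whatever squared modulus a blob starts with, it carries that quantity forward under evolution unless it splits.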
The seductive aspect of Robin's theory is that quantum physics wouldn't need interpreting. You wouldn't have to stand off beside the mathematical structure of the universe, and say, "Okay, now that you're finished computing all the mere numbers, I'm furthermore telling you that the squared modulus is the 'degree of existence'." Instead, when you run any program that computes the mere numbers, the program automatically contains people who experience the same physics we do, with the same probabilities.
A major problem with Robin's theory is that it seems to predict things like, "We should find ourselves in a universe in which very few decoherence events have already taken place," which tendency does not seem especially apparent.

The main thing that would support Robin's theory would be if you could show from first principles that mangling does happen; and that the cutoff point is somewhere around the median amplitude density (the point where half the total amplitude density is in worlds above the point, and half beneath it), which is apparently what it takes to reproduce the Born probabilities in any particular experiment.
What's the probability that Hanson's suggestion is right? I'd put it under fifty percent, which I don't think Hanson would disagree with. It would be much lower if I knew of a single alternative that seemed equally... reductionist.
But even if Hanson is wrong about what causes the Born probabilities, I would guess that the final answer still comes out equally non-mysterious. Which would make me feel very silly, if I'd embraced a more mysterious-seeming "answer" up until then. As a general rule, it is questions that are mysterious, not answers.
When I began reading Hanson's paper, my initial thought was: The math isn't beautiful enough to be true.
By the time I finished processing the paper, I was thinking: I don't know if this is the real answer, but the real answer has got to be at least this normal.
This is still my position today.
Part of The Quantum Physics Sequence
Next post: "Decoherence as Projection"
Previous post: "Decoherent Essences"