## Reflexive self-processing is literally infinitely simpler than the many-worlds interpretation

I recently stumbled upon the concept of "reflexive self-processing", which is Chris Langan's "Reality Theory".

I am not a physicist, so if I'm wrong or someone can better explain this, or if someone wants to break out the math here, that would be great.

The idea of reflexive self-processing is that, in the double-slit experiment for example, which path the photon takes is calculated by taking into account the entire state of the universe when the wave function is resolved.

1. Isn't this already implied by the math of how we know the wave function works? Are there any alternate theories that are even consistent with the evidence?

2. Don't we already know that the entire state of the universe is used to calculate the behavior of particles? For example, doesn't every body produce a gravitational field which acts, with some magnitude of force, at any distance, such that in order to calculate the trajectory of a particle to the nth decimal place, you would need to know about every other body in the universe?
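
To make the gravity point concrete, here is a minimal Newtonian sketch (the masses and distances below are rough illustrative values I've picked, not precision data): even a very distant body contributes a nonzero acceleration, so arbitrarily precise trajectory calculations would in principle need to account for it.

```python
# Newtonian acceleration a = G*M / r^2 exerted by a distant point mass.
# Numbers are rough illustrative values chosen for this sketch.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accel(mass_kg, dist_m):
    """Acceleration toward a point mass at the given distance."""
    return G * mass_kg / dist_m**2

# Jupiter (~1.9e27 kg) at roughly its closest approach to Earth (~5.9e11 m):
a_jupiter = accel(1.9e27, 5.9e11)

# A Sun-like star (~2e30 kg) about 10 light-years away (~9.5e16 m):
a_star = accel(2e30, 9.5e16)

print(f"Jupiter's pull here: {a_jupiter:.2e} m/s^2")  # tiny but nonzero
print(f"Distant star's pull: {a_star:.2e} m/s^2")     # far smaller, still nonzero
```

Both contributions are minuscule compared to Earth's ~9.8 m/s², but neither is exactly zero, which is all the question above requires.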

This is, literally, infinitely more parsimonious than the many-worlds theory, which posits that an infinite number of entire universes' worth of complexity are created at the juncture of every little physical event where multiple paths are possible. Supporting MWI because of its simplicity was always a really horrible argument for this reason, and it seems like we do have a sensible, consistent theory in this reflexive self-processing idea, which is infinitely simpler, and therefore should be infinitely preferred by a rationalist over MWI.

## A new derivation of the Born rule

This post is an explanation of a recent paper coauthored by Sean Carroll and Charles Sebens, in which they propose a derivation of the Born rule in the context of the Many Worlds approach to quantum mechanics. While the attempt itself is not fully successful, it contains interesting ideas and is thus worth knowing about.

A note to the reader: here I will try to clarify the preconditions and give only a very general view of their method, and for this reason you won't find any equations. It is my hope that, if after having read this you're still curious about the real math, you will point your browser to the preceding link and read the paper for yourself.

If you are not totally new to LessWrong, you should know by now that the preferred interpretation of quantum mechanics (QM) around here is the Many Worlds Interpretation (MWI), which denies the collapse of the wave-function and postulates a distinct reality (that is, a branch) for every basis state composing a quantum superposition.

MWI historically suffered from three problems: the apparent absence of macroscopic superpositions, the preferred-basis problem, and the derivation of the Born rule. The development of decoherence famously solved the first and, to a lesser degree, the second problem, but the third remains one of the most poorly understood aspects of the theory.

Quantum mechanics assigns an amplitude, a complex number, to each branch of a superposition, and postulates that the probability of an observer finding the system in that branch is the squared modulus of the amplitude. This, very briefly, is the content of the Born rule (for pure states).

Quantum mechanics remains agnostic about the ontological status of both amplitudes and probabilities, but MWI, assigning a reality status to every branch, demotes ontological uncertainty (which branch will become real after observation) to indexical uncertainty (which branch the observer will find itself correlated to after observation).

Simple indexical uncertainty, though, cannot reproduce the exact predictions of QM: by the indifference principle, if you have no information privileging any member of a set of hypotheses, you should assign equal probability to each one. This leads to forming a probability distribution by counting the branches, which coincides with amplitude-derived probabilities only in special circumstances. This discrepancy, and how to account for it, constitutes the Born rule problem in MWI.
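
To see the discrepancy concretely, here is a small sketch with hypothetical amplitudes of my own choosing, comparing naive branch counting with amplitude-derived probabilities:

```python
# Compare naive branch counting with Born-rule probabilities
# for a two-branch superposition with unequal amplitudes.
import math

# Hypothetical state a|0> + b|1> with |a|^2 = 2/3 and |b|^2 = 1/3.
amps = [math.sqrt(2 / 3), math.sqrt(1 / 3)]

# Born rule: probability = squared modulus of the amplitude.
born = [a**2 for a in amps]

# Naive indifference over branches: one branch, one equal vote.
counting = [1 / len(amps)] * len(amps)

print("Born:    ", born)      # ~[0.667, 0.333]
print("Counting:", counting)  # [0.5, 0.5]
```

The two distributions agree only when all branches happen to carry equal amplitude, which is exactly the special circumstance mentioned above.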

There have been of course many attempts at solving it; for a summary I quote directly from the article:

One approach is to show that, in the limit of many observations, branches that do not obey the Born Rule have vanishing measure. A more recent twist is to use decision theory to argue that a rational agent should act as if the Born Rule is true. Another approach is to argue that the Born Rule is the only well-defined probability measure consistent with the symmetries of quantum mechanics.

These proposals have failed to uniformly convince physicists that the Born rule problem is solved, and the paper by Carroll and Sebens is another attempt to reach a solution.

Before describing their approach, there are some assumptions that have to be clarified.

The first, and this is good news, is that they treat probabilities as rational degrees of belief about a state of the world. They are thus using a Bayesian approach, although they never call it that.

The second is that they’re using self-locating indifference, again from a Bayesian perspective.

Self-locating indifference is the principle that you should assign equal probabilities to finding yourself in different places in the universe, if you have no information that distinguishes the alternatives. For a Bayesian, this is almost trivial: self-locating propositions are propositions like any other, so the principle of indifference should be applied to them as to any other prior information. This is valid for quantum branches too.

The third assumption is where they start to deviate from pure Bayesianism: it's what they call the Epistemic Separability Principle, or ESP. In their words:

the outcome of experiments performed by an observer on a specific system shouldn't depend on the physical state of other parts of the universe.

This is a kind of Markov condition: the requirement that the system screen the interaction between the observer and the observed system from every possible influence of the environment.

It is obviously false for many partitions of a system into experiment and environment, but rather than taking it as a principle, we can make it an assumption: an experiment counts as such only if it obeys the condition.

In the context of QM, this condition amounts to splitting the universal wave-function into two components, the experiment and the environment, so that there's no entanglement between the two, and to considering only interactions that factor as a product of an evolution for the environment and an evolution for the experiment. In this case, the environment's evolution acts as the identity operator on the experiment, and does not affect the behavior of the experiment's wave-function.

Thus, their formulation requires that the probability that an observer finds itself in a certain branch after a measurement be independent of the operations performed on the environment.

Note, though, an unspoken but very important point: probabilities of this kind depend uniquely on the superposition structure of the experiment.

A probability, being an abstract degree of belief, can depend on all sorts of prior information. With their quantum version of ESP, Carroll and Sebens are declaring that, in a factored environment, the probabilities of a subsystem do not depend on the information one has about the environment. Indeed, in this treatment, they are equating factorization with lack of logical connection.

This is of course true in quantum mechanics, but is a significant burden in a pure Bayesian treatment.

That said, let’s turn to their setup.

They imagine a system in a superposition of basis states, which first interacts with and decoheres into an environment, then gets perceived by an observer. This sequence is crucial: the Carroll-Sebens move can only be applied when the system has already decohered with a sufficiently large environment.

I say “sufficiently large” because the next step is to consider a unitary transformation on the “system+environment” block. This transformation needs to be of this kind:

- it respects ESP, in that it has to factor as an identity transformation on the “observer+system” block;

- it needs to equally distribute the probability of each branch in the original superposition over different branches in the decohered block, according to their original relative measures.

Then, by a simple method of rearranging labels of the decohered basis, one can show that the correct probabilities come out by the indifference principle, in the very same way that the principle is used to derive the uniform probability distribution in the second chapter of Jaynes' Probability Theory.

As an example, consider a quantum bit in superposition, and say that one branch has an amplitude larger than the other by a factor of the square root of 2. The environment in this case needs at least 8 different basis states, to be relabeled in such a way as to make the indifference principle work.
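
The relabeling move can be illustrated with a stripped-down toy of my own, not the paper's actual construction: split each branch into equal-measure sub-branches in proportion to its Born weight, then apply indifference over the sub-branches.

```python
# Toy version of the relabeling idea: branch weights 2/3 and 1/3
# (an amplitude ratio of sqrt(2)) are spread over equal-measure sub-branches.
import math
from fractions import Fraction

weights = [Fraction(2, 3), Fraction(1, 3)]

# Smallest grain: each sub-branch carries measure 1/lcm of the denominators.
grain = math.lcm(*(w.denominator for w in weights))  # here, 3

# Branch i is relabeled into weights[i] * grain equal sub-branches.
sub_counts = [int(w * grain) for w in weights]       # [2, 1]
total = sum(sub_counts)

# Indifference over the equal sub-branches recovers the Born weights.
recovered = [Fraction(c, total) for c in sub_counts]
print(recovered)  # [Fraction(2, 3), Fraction(1, 3)]
```

The real construction additionally requires a unitary on the environment that realizes this splitting while acting as the identity on the observer+system block, which is where the size of the environment comes in.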

In theory, this method can only show that the Born rule is valid for amplitudes that differ from one another by the square root of a rational number. Again I quote the paper for their conclusion:

however, since this is a dense set, it seems reasonable to conclude that the Born Rule is established.

Evidently, this approach suffers from a number of limitations: the first and most evident is that it works only in situations where the system to be observed has already decohered with an environment. It is not applicable to, say, a situation where a detector reads a quantum superposition directly, e.g. in a Stern-Gerlach experiment.

The second limitation, although less serious, is that it can only work when the system to be observed decoheres with an environment that has sufficiently many basis states to distribute the relative measure into different branches. This number, for a transcendental amplitude, is bound to be infinite.

The third limitation is that it can only work if we are allowed to interact with the environment in such a way as to leave the amplitudes of the interaction between the system and the observer untouched.

All of these, understood here as limitations, can naturally be reversed and taken as defining conditions, saying: the Born rule is valid only within those limits.

I'll leave it to you to determine whether this constitutes a sufficient answer to the Born rule problem in MWI.

## Lotteries & MWI

I haven't been able to find the source of the idea, but I've recently been reminded of:

Lotteries are a way to funnel some money from many of you to a few of you.

This is, of course, based on the Many Worlds Interpretation: if the lottery has one-in-a-million odds, then for every million timelines in which you buy a lottery ticket, there's one timeline in which you win. There's a certain amount of friction - it's not a perfect wealth transfer - based on the lottery's odds. But, looked at from this perspective, the question of "should I buy a lottery ticket?" seems like it might be slightly more complicated than "it's a tax on idiots".
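
As a back-of-the-envelope sketch with hypothetical numbers of my own choosing, the "friction" of the transfer is just one minus the expected payout per dollar spent:

```python
# Hypothetical lottery: one-in-a-million odds, $1 ticket, $400,000 jackpot.
odds = 1e-6
ticket_price = 1.00
jackpot = 400_000

# Across a million branches each buying one ticket, one branch wins.
expected_return = odds * jackpot  # dollars returned per ticket, on average
friction = 1 - expected_return / ticket_price

print(f"expected return per dollar: {expected_return:.2f}")  # 0.40
print(f"friction (share lost):      {friction:.2f}")         # 0.60
```

With these numbers, 60 cents of every dollar are lost to the lottery operator; the MWI reframing changes who "you" are across branches, not this arithmetic.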

But I'm reminded of my current .sig: "Then again, I could be wrong." And even if this is, in fact, a valid viewpoint, it brings up further questions, such as: how can the friction be minimized, and the efficiency of the transfer be maximized? Does deliberately introducing randomness at any point in the process ensure that at least some of your MWI-selves gain a benefit, as opposed to buying a ticket after the numbers have been chosen but before they've been revealed?

How interesting can this idea be made to be?

## Destructive mathematics

**Follow-up to**: Constructive mathematics and its dual

In the last post, I introduced constructive mathematics, intuitionistic logic (JL) and its dual, uninspiringly called dual-intuitionistic logic (DL).

I said that JL differs from classical logic over the status of the *law of excluded middle*, a principle valid in the latter which states that a formula can only be meaningfully asserted or negated. In the meta-theory, this means you can prove that something is true by showing that its negation is false.

Constructivists, coming from a philosophical platform that regards mathematics as a construction of the human mind, refuse this principle: their idea is that a formula can be said to be true if and only if there is a direct proof of it. Similarly, a formula can be said to be false if and only if there's a direct proof of its negation. If no proof or refutation of a formula A exists yet (as is the case today, for example, for the Goldbach conjecture), then *nothing* can be said about A.

Thus A ∨ ¬A is no longer a tautology (although it can still be true for some formulas, precisely those that already have a proof or a refutation).

Intuitionism (the most prominent subset of the constructivist program) nonetheless holds that A ∧ ¬A is still always false, and so JL incorporates ¬(A ∧ ¬A), a principle called *the law of non-contradiction*.

Intuitionistic logic has no built-in model of time, but you can picture the mental activity of an adherent this way: he starts with no (or very few) truths, and incorporates into his theory only those theorems he can build a proof of, and the negations of those theorems he can produce a refutation of.

Mathematics, as an endeavour, is seen as an accumulation of truths from an empty base.

I also indicated that there's a direct dual of JL, which is part of a wider class of systems collectively known as paraconsistent logics. Compared to the amount of study dedicated to intuitionistic logic, DL is basically unknown, but you can consult for example this paper and this one.

In this second article, a model is presented for which DL is valid, and we can read the following quote: "[These semantics] reflect the notion that our current knowledge about the falsity of statements can increase. Some statements whose falsity status was previously indeterminate can down the track be established as false. The value false corresponds to firmly established falsity that is preserved with the advancement of knowledge whilst the value true corresponds to 'not false yet'".

My suggestion is to be a lot braver in our epistemology: let's suppose that the natural cognitive state is not one of utter ignorance, but of triviality. Let's then just assume that **in the beginning, everything is true**.

Our job then, as mathematicians, is to discover refutations: the refutation of A will expunge A from the set of truths, and the refutation of ¬A will remove ¬A.

This dual of constructive mathematics just begs to be called destructive mathematics (or destructivism): as a program, it means starting with the maximal possibility and developing a careful collection of falsities.
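
The duality can be sketched as two tiny "provers" over a finite universe of candidate formulas (a toy model of my own, not a real proof system): the constructivist starts empty and adds what gets proved, while the destructivist starts with everything and removes what gets refuted.

```python
# Toy duality: constructive vs destructive accumulation of knowledge
# over a finite universe of candidate formulas (hypothetical labels).
universe = {"A", "B", "C", "D"}

# Constructivist: start with nothing, add each formula as it is proved.
constructive_truths = set()
for proved in ["A", "C"]:
    constructive_truths.add(proved)

# Destructivist: start with everything, remove each formula as refuted.
destructive_truths = set(universe)
for refuted in ["B", "D"]:
    destructive_truths.discard(refuted)

# When proofs and refutations jointly exhaust the universe, they agree.
print(sorted(constructive_truths))  # ['A', 'C']
print(sorted(destructive_truths))   # ['A', 'C']
```

In the finite case the two directions trivially meet in the middle; the asymmetry the post discusses next only bites when the sets involved are infinite.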

Be careful though: it doesn't necessarily mean that we accept the existence of actual contradictions. It might very well be the case that in our world (or model of interest) there are no contradictions; we 'just' need to expunge the relevant assertions.

As the dual of constructive mathematics, destructivism regards mathematics as a mental construction, though one that proceeds from triviality through confutations.

One major difficulty with destructive mathematics is that, to arrive at a finite set of truths, you need to destroy an infinite amount of falsities (but, on the other hand, to arrive at a finite set of falsities in constructive mathematics you need to assert an infinite number of truths).

Usually we are more interested in truths, so why should we embark on such an effort?

I can see at least two weak and two strong reasons, plus another that counts as entertainment, which I'll discuss more extensively in the last post.

The first weak reason is that sometimes we *are* more interested in falsity than in truth. Destructivism seems to be a more natural background for the resolution calculus, although, to my knowledge, this has only been developed in a classical setting.

The second weak reason is that destructivism is an interesting choice for coalgebraic methods in computer science: there, co-induction and co-recursion are a method for 'observing' or 'destroying' (potentially) infinite objects. From the Wikipedia entry on coinduction: "As a definition or specification, coinduction describes how an object may be "observed", "broken down" or "destructed" into simpler objects. As a proof technique, it may be used to show that an equation is satisfied by all possible implementations of such a specification."

I wish I could say more, but I don't know much myself: the parallels are tempting, but I have to leave the discovery of any low-hanging fruit to later times or to someone else entirely.

Two much more promising fields of application, instead, are Tegmark universes and Many Worlds quantum mechanics.

It's difficult to give a cogent account of why all mathematical structures should exist, but Tegmark's position simply amounts to a Platonist point of view on destructivism.

If all formulas are true, then this means that "somewhere" every model is realized; on the other side, if all structures are realized, then "on the whole" every formula is true (somewhere).

But the most important reason why one should adopt this framework is that it gives a natural account of quantum mechanics in the Many Worlds flavour (MWI).

Usually, physical laws are seen as the correspondence between physically realizable states, and time is the "adjunction" of new states from older ones. Do you recognize anything?

What if, instead, physical laws dictate only those states that ought to be excluded, and time is simply the 'destruction' or 'localization' of all those possible states? Well, then you have (almost for free) MWI: every state is realized, but in time you are constrained to just one.
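
A toy sketch (entirely my own illustration, with an invented "law") of the idea that a physical law can be read either as generating the allowed states or as excluding the forbidden ones:

```python
# Toy 'physics': states are the integers 0..9; the hypothetical 'law'
# forbids the odd ones.
all_states = set(range(10))

def law_allows(state):
    """Invented law, read positively: which states are realizable."""
    return state % 2 == 0

# Generative reading: build the allowed states up from nothing.
generated = {s for s in all_states if law_allows(s)}

# Destructive reading: start with every state, exclude the forbidden ones.
excluded = all_states - {s for s in all_states if not law_allows(s)}

print(generated == excluded)  # True: the two readings coincide
```

Over a finite state space the two readings trivially pick out the same set; the post's suggestion is that the destructive reading is the more natural one for MWI.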

I'm extremely tempted to say that MWI is the dual of the wave-function collapse, but of course I cannot (yet) prove it. Or should I just say that I cannot yet disprove that it isn't?

If that's the case, the mystery of why subjective probability follows the Born rule will be 'just' the dual of the non-linear mechanism of collapse. One mystery for a mystery.

I also suspect that destructive mathematics might have implications even for probability theory, but... this framework is still in its infancy, so who knows?

The last interesting motivation for taking destructive mathematics seriously is that it offers a possible coherent account of the Cthulhu mythos (!!): what if God, instead of having created only this world from nothing out of pure love, has destroyed every world but this one out of pure hate? If you accept the first scenario, then the second is equally plausible / conceivable. I'll explore the theme in the last post: Azathoth hates us all!

## If MWI is correct, should we expect to experience Quantum Torment?

If the many worlds of the Many Worlds Interpretation of quantum mechanics are real, there's at least a good chance that Quantum Immortality is real as well: All conscious beings should expect to experience the next moment in at least one Everett branch even if they stop existing in all other branches, and the moment after that in at least one other branch, and so on forever.

However, the transition from life to death isn't usually a binary change. For most people it happens slowly, often painfully, as the brain and the rest of the body deteriorate.

Doesn't it follow that each of us should expect to keep living in this state of constant degradation and suffering for a very, very long time, perhaps forever?

I don't know much about quantum mechanics, so I don't have anything to contribute to this discussion. I'm just terrified, and I'd like, not to be reassured by well-meaning lies, but to know the truth. How likely is it that Quantum Torment is real?

## Hacking Quantum Immortality

Quantum immortality sounds exactly like the mythical hell: living forever in perpetual agony, unable to die and in your own branch of existence separate from everyone else you ever knew.

What if we can hack quantum immortality to force continued good health, and the mutual survival of our loved ones in the same branch of the universe as us?

It seems like one would "simply" need a device which monitors your health with biosensors, and if anything goes out of range, it instantly kills you in a manner with an extremely low probability of failure. All of your friends and family would wear a similar device, and they would be coupled such that if one person becomes "slightly unhealthy" you all die instantly, keeping you all alive and healthy together.

We nearly have the technology to build such a thing now. Would you install one in your own body? If not, why not?

Who wants to invest in my new biotech startup which promises to stop all disease and human suffering within the next decade? Just joking; there is a serious technical problem here that makes it considerably more difficult than it sounds: for such a device to work, the probability of its failure must be much, much less than the probability of your continued healthy survival. You also never get to test the design before you use it.

## Help: Is there a quick and dirty way to explain quantum immortality?

I had an incredibly frustrating conversation this morning trying to explain the idea of quantum immortality to someone whose understanding of MWI begins and ends at pop sci-fi movies. I think I've identified the main issue I wasn't covering in enough depth (continuity of identity between near-identical realities), but I was wondering whether anyone has faced this problem before, and whether anyone has (or knows where to find) a canned 5-minute explanation of it.