These are extracts from some Facebook comments I made recently. I don't think they're actually understandable as is—they're definitely not formal and there isn't an actual underlying formalism I'm referring to, just commonly held intuitions. Or at least intuitions commonly held by me. Ahem. But anyway I figure it's worth a shot.
The following’s a brief, over-abstract hint at the theoretical reasoning style that replaced naive anthropics a few years ago, ushered in by the development of updateless-like decision theories, which are incredibly inchoate but widely considered a step in a direction that is at the very least rather appealing. “Anthropic”-like “explanations”, as exemplified in my original status update, can be generalized to give the now-fundamental ‘relative significances’ for all decision-theoretically relevant self-dependent processes—but for humans only as determined by some self-unknown decision policy, or in other words derived from some provably imperfectly self-known “true” “preferences”/”values”/”morality”. This preference-centered decision policy can easily refer to some superficially-"objective" relative existence measure like probability, and thus humans can use Bayes and expected utility maximization as an approximation that is sorta mathematically sound—that is, until the humans start bothering to pay much attention to their infinitely many different contexts and correlated algorithms, each of which might split differently and mess up the calculations. If even as a human you insist on thinking about these “copies”, then Bayes quickly becomes meaningless and you’ll need to start thinking in terms of decision theory.
([I suspect that an attempt at sane moral reasoning for mere humans must indeed take into account logical correlations with the infinity of Platonic algorithms—but trying to guess at which Platonic algorithm one is “truly” using to make a decision can be immediately recognized as futile by anyone who takes a moment to reflect on any non-trivial decision of their own. Often it’s not clear where to go from there besides looking for different ways of abstracting your decision rule to see if all abstractions exhibit certain stable features—you can thus say you’re deciding to at least some extent for some algorithms that exhibit those features, but that amount of correlation seems awfully weak. (But if the features are rare, does that matter? I’ve yet to think carefully on these matters.) In general humans aren't even close to being able to reason straightforward-causally, let alone timelessly.
Interestingly, Eliezer Yudkowsky has talked about this discorrelation as being a limited resource, specifically for agents that aren’t as self-opaque as humans: at some point your Platonic decision algorithm and other agents’ Platonic decision algorithms will start to overlap a little, and insofar as they overlap they logically just can’t not make a correlated decision. This makes it easier for each algorithm to predict the other, which can be good, bad, or hilarious, depending on the unspecified aspects of the agents. Perhaps we humans should value our unique utter inability to introspect?])
You might wonder: what if “your” decisions are relevant (“significant”) for some “other” agent that doesn’t share “your” “actual” “values”? The answer can only be that the same copies-ambivalent reasoning applies here too. Without the scare quotes: you can technically be a part of some other agent’s decision algorithm just as much as you are a part of your naive decision algorithm; you can even switch perspectives right now and think of yourself as some fragment of a larger agent, much like many normal folk identify as Soldiers, Buddhists, Humans, or all of those at once, as well as thinking of themselves as themselves. More intuitively, you can think of yourself not just as yourself but as the source of inspiration for everyone else's models of you—models that tend to be very lossy for economic reasons. Even a superintelligence might not bother to model you very carefully if it has other shit to do (e.g. modeling all possible variations on Babyeaters given increasingly improbable assumed evolutionary lineages). Thus the rules say you can be a pawn in many games at once, as long as multiple processes have a vested interest in you, which they assuredly do. And as previously shown, those processes are obviously allowed to be overlapping and at varying levels of abstraction and organization: a forgiving ontology to be sure, but a nightmare for finding Schelling focal points!

One neat way this all adds up to normality is that by the time you find any such outlandish decision theoretic “explanations” for finding yourself as you somewhat compelling, you already have a lot of reason to think your life is unusually influential, and that evidence screens off any additional “anthropic” update. You’re about as important to you-dependent structures as you think you are, given straightforward causal evidence, and finding yourself as a you-like structure shouldn’t cause you to update on top of that (see the toy sketch below). This same reasoning applies to the original motivator for all this madness, anthropic probabilities: I have yet to see any anthropics problem that involves actually necessary updates and not simply counterfactually necessary belief updates—updates made in counterfactual worlds which would have been accompanied by their own non-anthropic evidence. [I think this is important and deserves fleshing out, but I won't feel justified in doing so until I have a formalism to back me up.] And these days I tend to just facepalm when someone suggests I use “human beings” as my "reference class".
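A toy numerical illustration of that screening-off claim, with made-up numbers and nothing more (the hypothesis, evidence labels, and likelihoods below are all hypothetical):

```python
# H = "my life is unusually influential"; E_c = ordinary causal evidence;
# E_a = the "anthropic" observation of finding yourself as a you-like
# structure. If P(E_a | H, E_c) == P(E_a | not-H, E_c), then E_a is
# screened off by E_c and the anthropic update is a no-op.

def posterior(prior, likelihood_h, likelihood_not_h):
    """One Bayes update: P(H|E) from P(H), P(E|H), and P(E|not-H)."""
    joint_h = prior * likelihood_h
    joint_not_h = (1 - prior) * likelihood_not_h
    return joint_h / (joint_h + joint_not_h)

p_h = 0.01                            # hypothetical prior on H
p_h = posterior(p_h, 0.5, 0.05)       # update on causal evidence E_c
print(p_h)                            # ~0.092

# Given E_c, the anthropic observation is equally likely either way,
# so updating on it changes nothing.
p_h = posterior(p_h, 0.3, 0.3)
print(p_h)                            # still ~0.092
```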
A proposal to derive magick and miracles from updateless-like decision-theoretic assumptions:
The Born rule should be derived from timelessness-cognizant game theory for multipartite systems, in order to find equilibria from first principles. Deriving it from causal decision theory is clearly circular, and Platonic-truth-ignoring “rationality” isn’t rationality anyway. I suspect that the Born rule is generally self-similar across levels of organization, but is mostly an approximation/baseline/Schelling focal point for influence-influencing systems with gradiently influential decision policies in something like a not-necessarily-timeful Bayesian game.
You can intuitively model this with an ontology of simulations: simulators agree or at least are incentivized to leave certain parts of their simulations constant, and the extent to which different things are left constant falls out of economic and ecological selection effects.
The Born rule is a lot less physics-y than the rest of quantum mechanics, akin to how there is a deep sense in which thermodynamics is more about subjective Bayesianism than about universal laws. Thus it's not too bad a sin to instead think of simulators computing different branches of automata. If they sample uniformly from some boring set of possible automata (given some background laws of physics like Conway’s Game of Life) then the end result might look like a Born rule, whereas if they differentially sample based on game-theoretic equilibria (formalizable by, like, saying their utility functions are over relative entropies of predictions of automata evolution patterns, some of which are more interesting in a Schmidhuberian compressibility sense [assuming the simulators agree on a reference Turing machine], for example) then there can be interesting irregularities.
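A toy sketch of that last picture, entirely my own construction rather than anything formal: rule 110, width 8, and zlib-length-as-compressibility below are arbitrary stand-ins, but they show how uniform sampling over automaton branches and compressibility-weighted sampling induce different statistics over the same set of histories.

```python
# Hypothetical toy model: "simulators" sample initial rows of an elementary
# cellular automaton either uniformly or re-weighted by a crude
# compressibility proxy, and we compare the induced statistics.
import itertools
import zlib

RULE = 110  # arbitrary choice of elementary CA rule

def step(row, rule=RULE):
    """One synchronous update of an elementary CA with wraparound."""
    n = len(row)
    return tuple(
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    )

def run(row, steps=16):
    """Full history of a branch: the initial row plus `steps` updates."""
    history = [row]
    for _ in range(steps):
        history.append(step(history[-1]))
    return history

def compress_len(history):
    """Crude stand-in for Schmidhuberian compressibility: zlib length."""
    raw = bytes(cell for row in history for cell in row)
    return len(zlib.compress(raw))

width = 8
branches = list(itertools.product((0, 1), repeat=width))
histories = {b: run(b) for b in branches}

def density(b):
    """Observable: fraction of live cells on the final row of branch b."""
    final = histories[b][-1]
    return sum(final) / len(final)

# Uniform sampling vs. weighting toward more compressible histories.
uniform_mean = sum(density(b) for b in branches) / len(branches)
weights = {b: 1.0 / compress_len(histories[b]) for b in branches}
weighted_mean = sum(w * density(b) for b, w in weights.items()) / sum(weights.values())

print(f"uniform sampling, mean final density:         {uniform_mean:.4f}")
print(f"compressibility-weighted, mean final density: {weighted_mean:.4f}")
```

The point isn’t the particular numbers, just that differential sampling over branches is a knob that changes observed statistics, which is the kind of “interesting irregularity” gestured at above.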
(On Google+ I list my occupation as "Theoretical Thaumaturgist". ;P )
No, I just meant that your Hilbert space is associated with a preferred foliation. The states in the Hilbert space are superpositions of configurations on the slices of that foliation. If you follow Copenhagen, observables are real, wavefunctions are not, and this foliation-dependence of the wavefunctions doesn't matter. It's like fixing a gauge, doing your calculation, and then getting gauge-invariant results back for the observables. These results - expectation values, correlation functions... - don't require any preferred foliation for their definition. The wavefunctions do, but they are just regarded as constructs.
So Copenhagen gets to be consistent with special relativity at the price of being incomplete. Now according to Many Worlds, we can obtain a complete description of physical reality by saying that wavefunctions are real. What I am pointing out is that wavefunctions are defined with respect to a reference frame. Time is not an operator and you need surfaces of simultaneity for Schrodinger evolution. The surface of simultaneity that it lives on is one of the necessary ingredients for defining a wavefunction. If the wavefunction is real, then so is the surface of simultaneity, but the whole point of special relativity is that there is no absolute simultaneity. So how do you, a wavefunction realist, get around this?
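For concreteness, the dependence being pointed at, in standard textbook form (my paraphrase, not part of the exchange):

```latex
% Schrodinger evolution is generated along a time coordinate:
\[
  i\hbar \,\partial_t \Psi(x_1, \dots, x_N;\, t) = \hat{H}\, \Psi(x_1, \dots, x_N;\, t)
\]
% \Psi assigns amplitudes to configurations on a slice t = const of some
% frame, while a Lorentz boost
\[
  t' = \gamma \left( t - \frac{v x}{c^2} \right)
\]
% tilts those slices into a different family of surfaces, so "the
% wavefunction at time t" presupposes a choice of simultaneity surfaces.
```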
So, let's return to your example. The wavefunction of the universe is "|U> x ( 3/5 |1> + 4/5 |0> )". Well, this isn't a great example because the wavefunction factorizes. But anyway, let's suppose that the reduced density matrix of your two-state system is c_00 |0><0| + c_01 |0><1| + c_10 |1><0| + c_11 |1><1|. You still need to explain how the Born rule makes sense in terms of a multiverse.
Perhaps an analogy will make this clearer. Suppose I'm a car dealer, and you place an order with me for 9 BMWs and 16 Rolls-Royces. Then you come to collect your order, and what you find is one BMW with a "3" painted on it, and one Rolls-Royce with a "4" painted on it. You complain that I haven't filled the order, and I say, just square the number painted on each car, and you'll get what you want. So far as I can see, that's how MWI works. You work with the same wavefunctions that Copenhagen uses, but you want to do without the Born rule. So instead, you pull out a reduced density matrix, point at the coefficients, and say "you can get your probabilities from those".
That's not good enough. If quantum mechanics is to be explained by Many Worlds, I need to get the Born rule frequencies of events from the frequencies with which those events occur in the multiverse. Otherwise I'm just painting a number on a state vector and saying "square it". If you don't have some way to decompose that density matrix into parts, so that I actually have 9 instances of |1> and 16 instances of |0>, or some other way to obtain Born frequencies by counting branches, then how can you say that Many Worlds makes the right predictions?
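To pin down the arithmetic being demanded here (the 3/5 and 4/5 amplitudes are from the example above; the 25-branch decomposition is the hypothetical being asked for, not something either side has exhibited):

```python
# Born weights from the state 3/5 |1> + 4/5 |0>, and the kind of
# equal-weight branch decomposition that would turn them into counts.
from fractions import Fraction

amp_1 = Fraction(3, 5)
amp_0 = Fraction(4, 5)

p_1 = amp_1 ** 2   # Born rule: |amplitude|^2 = 9/25
p_0 = amp_0 ** 2   # 16/25
assert p_1 + p_0 == 1

# The demand: 25 equal branches, 9 showing |1> and 16 showing |0>,
# so that relative frequency equals Born probability by counting.
branches = ["|1>"] * 9 + ["|0>"] * 16
assert Fraction(branches.count("|1>"), len(branches)) == p_1
assert Fraction(branches.count("|0>"), len(branches)) == p_0
```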
Once you get into field theory you have x, y, z and t all treated as coordinates, not operators. The universe realio trulio starts to look like a 4-dimensional object, and reference frames are just slices of this 4-dimensional object. And I guess you're right, if you don't use relativistic quantum mechanics, you won't have all the nice relativistic properties.
If you want your probabilities to be frequencies, I suppose you could work out the results if you wanted. The run-of-identical-experiment frequencies should actually be pretty easy to calculate, and ...
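For what it's worth, a minimal sketch of that calculation under the standard assumption of independent identically-prepared runs, reusing the 3/5 |1> + 4/5 |0> example from earlier (the run counts and seed are my own choices):

```python
# Empirical frequency of outcome |1> over repeated identical experiments,
# sampled with its Born weight; the frequency concentrates at 9/25 = 0.36.
import random

p_one = (3 / 5) ** 2  # Born weight of |1>
random.seed(0)
for n in (25, 2_500, 250_000):
    hits = sum(random.random() < p_one for _ in range(n))
    print(f"{n:>7} runs: frequency of |1> = {hits / n:.4f}")
```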