Followup to: The prior of a hypothesis does not depend on its complexity
Eliezer wrote:
In physics, you can get absolutely clear-cut issues. Not in the sense that the issues are trivial to explain. [...] But when I say "macroscopic decoherence is simpler than collapse" it is actually strict simplicity; you could write the two hypotheses out as computer programs and count the lines of code.
Every once in a while I come across some belief in my mind that clearly originated from someone smart, like Eliezer, and stayed unexamined because after you hear and check 100 correct statements from someone, you're not about to check the 101st quite as thoroughly. The above quote is one of those beliefs. In this post I'll try to look at it more closely and see what it really means.
Imagine you have a physical theory, expressed as a computer program that generates predictions. A natural way to define the Kolmogorov complexity of that theory is to find the length of the shortest computer program that outputs your program, viewed as a string of bits. Under this very natural definition, the many-worlds interpretation of quantum mechanics is almost certainly simpler than the Copenhagen interpretation.
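For concreteness, this is just the ordinary Kolmogorov complexity of the program text relative to some fixed universal machine $U$ (the choice of $U$ is a standard convention I'm supplying here, not something stated above):

$$K(p) = \min \{\, |q| \;:\; U(q) = p \,\},$$

where $p$ is the prediction-generating program encoded as a bit string and $|q|$ is the length of the candidate program $q$ in bits.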
But imagine you refactor your prediction-generating program and make it shorter; does this mean the physical theory has become simpler? Note that after some innocuous refactorings of a program expressing some physical theory in a recognizable form, you may end up with a program that expresses a different set of physical concepts. For example, if you take a program that calculates classical mechanics in the Lagrangian formalism, and apply multiple behavior-preserving changes, you may end up with a program whose internal structures look distinctly Hamiltonian.
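As a toy illustration of that point (my own sketch, not an example from the post): here are two programs for the same theory, a one-dimensional harmonic oscillator, one organized around Lagrangian concepts and one around Hamiltonian ones, which nevertheless produce identical predictions.

```python
def predict_lagrangian(x0, v0, m=1.0, k=1.0, dt=1e-3, steps=1000):
    """Integrate m*x'' = -k*x, read off the Euler-Lagrange equation
    for L = (1/2)*m*v**2 - (1/2)*k*x**2."""
    x, v = x0, v0
    xs = []
    for _ in range(steps):
        a = -k * x / m                   # acceleration from the E-L equation
        x, v = x + v * dt, v + a * dt    # explicit Euler step
        xs.append(x)
    return xs

def predict_hamiltonian(x0, v0, m=1.0, k=1.0, dt=1e-3, steps=1000):
    """Integrate Hamilton's equations q' = p/m, p' = -k*q
    for H = p**2/(2*m) + (1/2)*k*q**2."""
    q, p = x0, m * v0
    qs = []
    for _ in range(steps):
        dq, dp = p / m, -k * q           # Hamilton's equations
        q, p = q + dq * dt, p + dp * dt  # explicit Euler step
        qs.append(q)
    return qs

# Different internal concepts (positions and velocities vs. positions and
# momenta), identical predictions for the same inputs.
assert all(abs(a - b) < 1e-12 for a, b in
           zip(predict_lagrangian(1.0, 0.0), predict_hamiltonian(1.0, 0.0)))
```

The two functions differ in which quantities they track internally, but any measure defined on their outputs alone cannot tell them apart.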
Therein lies the rub. Do we really want a definition of "complexity of physical theories" that tells apart theories making the same predictions? If our formalism says Hamiltonian mechanics has a higher prior probability than Lagrangian mechanics, which is demonstrably mathematically equivalent to it, something's gone horribly wrong somewhere. And do we even want to define "complexity" for physical theories that don't make any predictions at all, like "glarble flargle" or "there's a cake just outside the universe"?
At this point, the required fix to our original definition should be obvious: cut out the middleman! Instead of finding the shortest algorithm that writes your algorithm for you, find the shortest algorithm that outputs the same predictions. This new definition has many desirable properties: it's invariant under refactorings, doesn't discriminate between equivalent formulations of classical mechanics, and refuses to specify a prior for something you can never ever test by observation. Clearly we're on the right track here, and the original definition was just an easily fixable mistake.
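In the same notation as before (again with a fixed universal machine $U$, my notation rather than the post's), the amended definition measures the output rather than the program text:

$$K_{\mathrm{pred}}(T) = \min \{\, |q| \;:\; U(q) \text{ produces the same predictions as } T \,\},$$

which immediately gives the invariance properties above: any two programs with identical prediction streams, refactorings of each other or not, receive the same complexity.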
But this easily fixable mistake... was the entire reason for Eliezer "choosing Bayes over Science" and urging us to do the same. The many-worlds interpretation makes the same testable predictions as the Copenhagen interpretation right now. Therefore by the amended definition of "complexity", by the right and proper definition, they are equally complex. The truth of the matter is not that they express different hypotheses with equal prior probability - it's that they express the same hypothesis. I'll be the first to agree that there are very good reasons to prefer the MWI formulation, like its pedagogical simplicity and beauty, but K-complexity is not one of them. And there may even be good reasons to pledge your allegiance to Bayes over the scientific method, but this is not one of them either.
ETA: now I see that, while the post is kinda technically correct, it's horribly confused on some levels. See the comments by Daniel_Burfoot and JGWeissman. I'll write an explanation in the discussion area.
ETA 2: done, look here.
MWI and Copenhagen do not make the same predictions in all cases, just in testable ones. There is a simple program that makes the same predictions as MWI in all cases. There appears to be no comparably simple program that makes the same predictions as Copenhagen in all cases. So, if you gave me some complicated test which could not be carried out today, but on which the predictions of MWI and Copenhagen differed, and asked me to make a prediction about what would happen if the experiment were somehow run (it seems likely that such experiments will be possible at some point in the extremely distant future), I would predict that MWI will be correct with overwhelming probability. I agree that if some other "more complicated" theory made the same predictions as MWI in every case, then K-complexity would not give good grounds to decide between them.
I guess the fundamental disagreement is that you think MWI and Copenhagen are the same theory because discriminating between them is right now far out of reach. But I think the existence of any situation where they make different predictions is precisely sufficient to consider them different theories. I don't know why "testable" (meaning testable in practice, not in theory) was thrown in at the last minute, because it does not seem to appear anywhere in the rest of the post.
If instead you are asserting that MWI and Copenhagen make the same theoretically testable predictions, then I disagree as a matter of fact. MWI asserts that interference should be able to occur on arbitrary scales, in particular on the scale of an entire planet or galaxy (even though such interference is spectacularly difficult to engineer and/or will have a very small effect on probability amplitudes), while Copenhagen seems to imply that it cannot occur on any scale larger than a human observer.
I wouldn't be surprised if I am wrong on that question of fact, and it would certainly be good for me to fix my error now if I am.
Copenhagen doesn't imply that. The collapse happens as a result of interaction between the observer and the observed system, which can be an atom or an entire galaxy.