Followup to: The prior of a hypothesis does not depend on its complexity
Eliezer wrote:
In physics, you can get absolutely clear-cut issues. Not in the sense that the issues are trivial to explain. [...] But when I say "macroscopic decoherence is simpler than collapse" it is actually strict simplicity; you could write the two hypotheses out as computer programs and count the lines of code.
Every once in a while I come across some belief in my mind that clearly originated from someone smart, like Eliezer, and stayed unexamined because after you hear and check 100 correct statements from someone, you're not about to check the 101st quite as thoroughly. The above quote expresses one of those beliefs. In this post I'll take a closer look at it and see what it really means.
Imagine you have a physical theory, expressed as a computer program that generates predictions. A natural way to define the Kolmogorov complexity of that theory is as the length of the shortest computer program that outputs your program's source code, viewed as a string of bits. Under this very natural definition, the many-worlds interpretation of quantum mechanics is almost certainly simpler than the Copenhagen interpretation.
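To pin that definition down in symbols (my notation, not anything from the original post): fix a universal machine U and encode the theory T as a prediction-generating program p_T, a finite string of bits. Then the "naive" complexity is

```latex
\[
  K_{\mathrm{naive}}(T) \;=\; \min \{\, |q| \;:\; U(q) = p_T \,\}
\]
% i.e. the length of the shortest program q that, when run on U,
% prints the source code of p_T bit for bit.
```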
But imagine you refactor your prediction-generating program and make it shorter; does this mean the physical theory has become simpler? Note that after some innocuous refactorings of a program expressing some physical theory in a recognizable form, you may end up with a program that expresses a different set of physical concepts. For example, if you take a program that calculates classical mechanics in the Lagrangian formalism, and apply multiple behavior-preserving changes, you may end up with a program whose internal structures look distinctly Hamiltonian.
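Here is a toy sketch of that kind of refactoring (my own example, not from the post): a unit-mass harmonic oscillator written first in a Lagrangian idiom (integrate the Euler-Lagrange equation for the coordinate and its velocity) and then in a Hamiltonian idiom (integrate Hamilton's equations for the coordinate and its momentum). With unit mass the momentum equals the velocity, so the two programs are step-for-step the same computation under different names, and they emit identical predictions.

```python
# Toy illustration: one physical theory, two "refactorings" of the
# prediction-generating program. (Hypothetical example, not from the post.)

def predict_lagrangian(q0, v0, omega, dt, steps):
    """Euler-Lagrange form: integrate q'' = -omega**2 * q via (q, v)."""
    q, v = q0, v0
    out = []
    for _ in range(steps):
        v += -omega**2 * q * dt   # acceleration from L = v**2/2 - omega**2 * q**2/2
        q += v * dt
        out.append(q)
    return out

def predict_hamiltonian(q0, p0, omega, dt, steps):
    """Hamiltonian form: integrate q' = p, p' = -omega**2 * q via (q, p)."""
    q, p = q0, p0
    out = []
    for _ in range(steps):
        p += -omega**2 * q * dt   # dp/dt = -dH/dq, with H = p**2/2 + omega**2 * q**2/2
        q += p * dt               # dq/dt =  dH/dp  (unit mass, so p = v)
        out.append(q)
    return out

# Same inputs, same predictions, different internal concepts.
a = predict_lagrangian(1.0, 0.0, 2.0, 1e-3, 5000)
b = predict_hamiltonian(1.0, 0.0, 2.0, 1e-3, 5000)
assert a == b
```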
Therein lies the rub. Do we really want a definition of "complexity of physical theories" that distinguishes between theories making the same predictions? If our formalism says Hamiltonian mechanics has a higher prior probability than Lagrangian mechanics, which is demonstrably mathematically equivalent to it, something has gone horribly wrong somewhere. And do we even want to define "complexity" for physical theories that don't make any predictions at all, like "glarble flargle" or "there's a cake just outside the universe"?
At this point, the required fix to our original definition should be obvious: cut out the middleman! Instead of finding the shortest algorithm that writes your algorithm for you, find the shortest algorithm that outputs the same predictions. This new definition has many desirable properties: it's invariant to refactorings, doesn't discriminate between equivalent formulations of classical mechanics, and refuses to specify a prior for something you can never ever test by observation. Clearly we're on the right track here, and the original definition was just an easily fixable mistake.
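In the same notation as before (again mine, not the post's), write Pred_U(p) for the stream of predictions produced by running a program p on U. The amended definition then quantifies over programs that reproduce the predictions rather than the source code:

```latex
\[
  K_{\mathrm{pred}}(T) \;=\; \min \{\, |q| \;:\; \mathrm{Pred}_U(q) = \mathrm{Pred}_U(p_T) \,\}
\]
% Any two programs with the same prediction stream -- however differently
% they are written internally -- are assigned the same complexity.
```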
But this easily fixable mistake... was the entire reason for Eliezer "choosing Bayes over Science" and urging us to do the same. The many-worlds interpretation makes the same testable predictions as the Copenhagen interpretation right now. Therefore by the amended definition of "complexity", by the right and proper definition, they are equally complex. The truth of the matter is not that they express different hypotheses with equal prior probability - it's that they express the same hypothesis. I'll be the first to agree that there are very good reasons to prefer the MWI formulation, like its pedagogical simplicity and beauty, but K-complexity is not one of them. And there may even be good reasons to pledge your allegiance to Bayes over the scientific method, but this is not one of them either.
ETA: now I see that, while the post is kinda technically correct, it's horribly confused on some levels. See the comments by Daniel_Burfoot and JGWeissman. I'll write an explanation in the discussion area.
ETA 2: done, look here.
I think that while a sleek decoding algorithm and a massive look-up table might be mathematically equivalent, they differ markedly in what sort of process actually carries them out, at least from the POV of an observer on the same 'metaphysical level' as the process. In this case, the look-up table is essentially the program-that-lists-the-results, and the algorithm is the shortest description of how to get them. The equivalence holds because, in some sense, process and results imply each other. In my mind, this is a bit like the equivalence between space-like information and time-like information, or between a hologram and the surface it's projected from.
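To make that contrast concrete, here is a toy sketch (my own example, with bit-parity standing in for the "decoding algorithm"): the short procedure and the exhaustive table compute the same function, but one describes the process and the other essentially lists the results.

```python
# Toy sketch (hypothetical example): a sleek algorithm vs. a massive
# look-up table that are extensionally equivalent on a finite domain.

def parity_algorithm(n: int) -> int:
    """Short description of *how* to get the answer."""
    return bin(n).count("1") % 2

DOMAIN = range(256)

# The look-up table is essentially the program-that-lists-the-results.
PARITY_TABLE = {n: bin(n).count("1") % 2 for n in DOMAIN}

def parity_lookup(n: int) -> int:
    return PARITY_TABLE[n]

# Mathematically the same function on this domain, carried out by
# very different processes.
assert all(parity_algorithm(n) == parity_lookup(n) for n in DOMAIN)
```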
In the end, how are we ever to prefer one kind of description over the other? I can only think that it either comes down to some arbitrary aesthetic appreciation of elegance, or to some kind of match between the form of the description and how it fits in with our POV; our minds can be described in many ways, but only one corresponds directly with how we observe ourselves and reality, and we want any model to describe our minds with as little re-framing as possible.
Now, could someone please tell me if what I have just said makes any kind of sense?!
The minimum size of an algorithm will depend on the context in which it is represented. To meaningfully compare minimum algorithm sizes we must choose a context that represents the essential entities and relationships of the domain under consideration.
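For what it's worth, the standard invariance theorem quantifies how much that choice of context can matter: for any two universal reference machines U and V there is a constant c_{U,V}, independent of the string x being described, such that

```latex
\[
  \lvert K_U(x) - K_V(x) \rvert \;\le\; c_{U,V} \quad \text{for all } x
\]
```

So the choice of representation shifts every complexity by at most a bounded amount.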