First, remember that Kolmogorov complexity is only well-defined up to a constant, which is determined by your model of computation. So saying that something has a Kolmogorov complexity of 500 bits only makes sense if the model of computation has been defined. In this case, it has - the Universal Turing Machine model. The reason I'm mentioning this is that a particular simulation might have a wildly different complexity when specified on a Turing machine as opposed to, say, x86 machine code (which is typically very verbose and contains a lot of redundancies, greatly inflating bit counts).
Second, the article addresses why you can have the paradoxical situation of the Universe being low-complexity while specific things in the Universe are of high complexity (using the Earth as an example):
If you want to print out the entire universe from the beginning of time to the end, you only need to specify the laws of physics.
If you want to print out just Earth by itself, then it's not enough to specify the laws of physics. You also have to point to just Earth within the universe.
This is why you can have computer programs, like from the demo scene, that seemingly can't be compressed smaller than thousands of bits, despite existing in a low-complexity Universe. To specify a program, you have to give enough information to pick it out from the sea of other allowed programs.
To specify the Universe, you only have to specify enough information to pick it out from the landscape of all possible Universes. According to string theory (which is a Universal theory in the sense that it is Turing-complete) the landscape of possible Universes is 2^500 or so, which leads to 500 bits of information. Perhaps this is where Eliezer got the figure from (though I admit that I don't exactly know where he got it from either). Note that this would be an upper bound. It's possible that the Universe is much simpler than what string theory suggests.
That said, that's the complexity under the string theory model, not the Turing machine model. So the Kolmogorov complexity under the Turing machine model would be less than 500 + (the amount of information needed to specify string theory itself under the Turing machine model). The latter would also probably be a few hundred bits (string theory's theoretical core is quite simple once you strip away all the maths that is needed for doing calculations).
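Restating that as a rough inequality (my paraphrase, not a rigorous bound; K_TM denotes Kolmogorov complexity relative to some fixed universal Turing machine): K_TM(universe) ≤ K_TM(string theory) + 500 + O(1), where the 500 bits pick out the vacuum and the first term is the cost of specifying the theory itself.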
So Eliezer might be wrong about that particular figure but it would surprise me if it were many orders of magnitude off the mark.
To specify the Universe, you only have to specify enough information to pick it out from the landscape of all possible Universes
Of course not. You have to specify the landscape itself, otherwise it's like saying "page 273 of [unspecified book]".
According to string theory (which is a Universal theory in the sense that it is Turing-complete)
As far as I can see, that is only true in that ST allows Turing machines to exist physically. That's not the kind of Turing completeness you want. You want to know that string theory is itself Turing computable, not requiring hypercomputation. Or whatever is actually the ultimate physical theory. Because K-complexity doesn't work otherwise. And the computability of physics is far from a given:
https://en.m.wikipedia.org/wiki/Computability_in_Analysis_and_Physics
Note that the fact that a theory might consist of a small number of differential equations is quite irrelevant, because any one equation could be uncomputable.
First, remember that Kolmogorov complexity is only well-defined up to a constant, which is determined by your model of computation.
And that is why it makes little sense to talk about the complexity of one object without comparing it to, or using it in relation to, other objects, as the OP asked. If you implement the laws of physics in Brainfuck, in an x86 instruction set, or on a machine specifically dedicated to having the laws of physics as a built-in command, you can get wildly different complexities, even down to a few bits (think of the UTM that has a specific instruction that emulates the laws of physics).
Does it even make sense to talk of relative Kolmogorov complexity when we're dealing with finite strings? For any two finite strings A and B, it is always possible to find some model of computation in which A is less complex than B (and vice versa, of course).
It does make sense to talk of relative KC, actually, but it's not trivial to see why: http://en.wikipedia.org/wiki/Kolmogorov_complexity#Invariance_theorem
In particular, "If K1 and K2 are the complexity functions relative to description languages L1 and L2, then there is a constant c – which depends only on the languages L1 and L2 chosen – such that
for all strings s, |K1(s) - K2(s)| ≤ c."
Yes, I'm aware of this. It's still true, I believe, that for any two finite strings s1 and s2 one can find description languages L1 and L2 (with complexity functions K1 and K2) such that
K1(s1) > K1(s2)
and
K2(s1) < K2(s2).
So there is no language-independent sense in which s1 is more complex than s2 (or vice versa). To make the claim more concrete, consider the fact that for any finite string, one could construct a Universal Turing Machine that outputs that string when given the input 0 (the string is essentially hard-coded into the structure of the machine). This corresponds to a description language in which that string has minimal K-complexity.
This is all compatible with the invariance theorem. As a simple illustration, let the constant c associated with L1 and L2 be 5, let K1(s1) be 10, K1(s2) be 9, K2(s1) be 6 and K2(s2) be 8. In this example, both the inequalities I've given above are true, and the invariance theorem isn't violated.
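A toy illustration of how the ordering can flip, written just for this comment (the two "description languages" here are made-up Python functions from programs to outputs, far too weak to be universal, but they show the hard-coding trick described above):

```python
# Two made-up "description languages", each just a Python function from
# programs (byte strings) to outputs. They are nowhere near universal, but
# they illustrate hard-coding a string into the machine.

s1 = b"the first finite string"
s2 = b"a completely different finite string"

def run_L1(program: bytes) -> bytes:
    # In L1 the one-byte program b"0" is defined to output s1;
    # any other program is treated as a literal and echoed back.
    return s1 if program == b"0" else program

def run_L2(program: bytes) -> bytes:
    # L2 plays the same trick with the roles of s1 and s2 reversed.
    return s2 if program == b"0" else program

def K(run, s: bytes) -> int:
    # Shortest-program length for s under `run`. The shortcut below is valid
    # for these two toy languages: either the hard-coded program b"0" works,
    # or the only remaining option is the literal program s itself.
    return 1 if run(b"0") == s else len(s)

assert K(run_L1, s1) < K(run_L1, s2)   # s1 is "simpler" under L1...
assert K(run_L2, s1) > K(run_L2, s2)   # ...and s2 is "simpler" under L2
```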
First, remember that Kolmogorov complexity is only well-defined up to a constant, which is determined by your model of computation. [...]
Right, if you do something like (say) take away the addition instruction, then the shortest program might get longer because it has to emulate addition using subtraction and negation (or use a totally different approach, or include a translation pass that expands the additions into negate+subtract, or include a universal Turing machine that runs the original machine, or... yeah).
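For concreteness, the cheapest version of that emulation is a one-liner (a trivial sketch in Python rather than machine code):

```python
# Rebuilding addition on a machine that only has subtraction:
# a + b is the same as a - (0 - b), at the cost of a slightly longer program.
def add(a, b):
    return a - (0 - b)

assert add(2, 3) == 5
```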
Second, the article addresses why you can have the paradoxical situation of the Universe being low-complexity while specific things in the Universe are of high complexity.
In this case I care about the complexity of the laws that govern the change in state (positions of atoms or values of the wave functions) with respect to time. I will not make you pay for the presumably absurd amount of data required for the initial state of >10^(82) atoms. That would make the complexity of the laws a laughably negligible part of the total. I realize that this is, in a sense, an arbitrary distinction but... that's the question I want to know the answer to.
According to string theory (which is a Universal theory in the sense that it is Turing-complete) the landscape of possible Universes is 2^500 or so, which leads to 500 bits of information. Perhaps this is where Eliezer got the figure from (though I admit that I don't exactly know where he got it from either).
Interesting guess at where the number could have come from. I just assumed he tallied up various aspects of the standard model somehow and got an answer between 450 and 510.
The standard model is over here, see page 36: http://arxiv.org/pdf/hep-th/0610241v1.pdf
I will not make you pay for the presumably absurd amount of data required for the initial state of >10^(82) atoms.
You don't have any atoms in the initial state - nor anything that can reasonably be called matter. My (very ignorant) guess is that we won't even know what it takes to specify the initial state before we have a unified GR+QFT theory.
According to string theory (which is a Universal theory in the sense that it is Turing-complete) the landscape of possible Universes is 2^500 or so, which leads to 500 bits of information.
If one wishes to describe oneself in a particular universe, then, assuming MWI, fixing the universe is peanuts, complexity-wise, compared to fixing the appropriate Everett branch. The number of bits there is just astounding, it would seem to me.
The standard model is over here, see page 36: http://arxiv.org/pdf/hep-th/0610241v1.pdf
Uh, wow, that's a somewhat large equation. It has like 500 terms. Seems... inconsistent with physicists seeing beauty in physics.
You can see it in just five lines here, page 1. And an even more compact formulation would just list the symmetry groups, the various fields and how they transform under each group, and would then stipulate that the Lagrangian contains every possible renormalizable term (which is a principle in the construction of such theories, since renormalizable terms that aren't included get generated anyway).
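For concreteness, here is roughly what "list the symmetry groups and the fields" amounts to, as a hedged sketch in standard textbook conventions (hypercharges in the usual Q = T3 + Y normalization; one generation shown):

```latex
% Gauge group of the Standard Model
G_{\mathrm{SM}} = SU(3)_C \times SU(2)_L \times U(1)_Y

% Matter and Higgs fields for one generation: (SU(3), SU(2), hypercharge)
Q_L \sim (\mathbf{3},\mathbf{2},+\tfrac{1}{6}), \quad
u_R \sim (\mathbf{3},\mathbf{1},+\tfrac{2}{3}), \quad
d_R \sim (\mathbf{3},\mathbf{1},-\tfrac{1}{3}),

L_L \sim (\mathbf{1},\mathbf{2},-\tfrac{1}{2}), \quad
e_R \sim (\mathbf{1},\mathbf{1},-1), \quad
H   \sim (\mathbf{1},\mathbf{2},+\tfrac{1}{2})
```

Copy that three times for the three generations, add the gauge fields for each group factor, write down every renormalizable gauge-invariant term, and the only remaining freedom is the values of the couplings, which is where the constants come in.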
The fundamental constants alone are going to consume ~20-30 bits each.
Not if those constants are free variables, or can be computed in some way we don't understand. Frankly, I'd find it a bit ugly if those constants were hard-coded into the fabric of the universe.
Well, several of the universal constants arguably define our units. For every base type of physical quantity (things like distance, time, temperature, and mass, but not, for example, speed, which can be constructed out of distance and time), you can set a physical constant to 1 if you're willing to change how you measure that quantity. For example, you can express distance in terms of time (measuring distance in light-seconds or light-years). By doing so, you can discard the speed of light: set it to 1. Speeds are now ratios of time to time: something moving at 30% of the speed of light would move 0.3 (light) seconds per second, so its speed would be the dimensionless quantity 0.3. You can drop many other physical constants in this fashion: offhand, the speed of light, the gravitational constant, Planck's constant, the Coulomb constant, and the Boltzmann constant can all be set to 1 without any trouble, and therefore don't count against your complexity budget.
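As a concrete illustration (the standard Planck-unit construction, stated from memory rather than from anything in this thread), setting c = G = ħ = k_B = 1 just amounts to measuring everything in the derived units:

```latex
\ell_P = \sqrt{\hbar G / c^{3}} \approx 1.6 \times 10^{-35}\,\mathrm{m}, \qquad
t_P = \sqrt{\hbar G / c^{5}} \approx 5.4 \times 10^{-44}\,\mathrm{s},

m_P = \sqrt{\hbar c / G} \approx 2.2 \times 10^{-8}\,\mathrm{kg}, \qquad
T_P = m_P c^{2} / k_B \approx 1.4 \times 10^{32}\,\mathrm{K}
```

In those units every measured quantity is a pure number, and only the dimensionless combinations of constants are left to encode.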
First note: I'm not disagreeing with you so much as just giving more information.
This might buy you a few bits (and lots of high-energy physics is done this way, with powers of electronvolts as the only units). But there will still be free variables that need to be set. Wikipedia claims (with a citation to this John Baez post) that there are 26 fundamental dimensionless physical constants. These, as far as we know right now, have to be hard-coded in somewhere, maybe in units, maybe in equations, but somewhere.
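A back-of-the-envelope consequence, taking the original post's estimate of ~20-30 bits per constant at face value: 26 constants × 20-30 bits ≈ 520-780 bits just for the constants, which is already at or above the quoted 500 bits before any equations are encoded, unless the constants turn out to be compressible.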
As a reference for anyone encountering this discussion, I thought I'd mention Natural Units explicitly. Basically, they are the systems of units that particle physicists use. They are attempts to normalize out as many fundamental constants as possible, exactly as you discuss.
Unfortunately, you can't build a system that gets them all. You are always left with some fraction over pi, or the square root of the fine-structure constant, or something.
I agree that the constants might be related in ways we don't know, which would allow compression. I'm more interested in an upper bound on the complexity than an exact value (which is likely incomputable for halting problem reasons), so I'm willing to be over by 100 bits because we fail to see a pattern.
As for variable constants: sure, we can estimate the Kolmogorov complexity where the "constants" are inputs. Or we can estimate the Kolmogorov complexity of the current laws plus the state 13.7 billion years ago at the Big Bang. Or we can estimate the complexity of a program that runs all programs. All of these questions are interesting. But right now I want the answer to the one I asked, not the others.
Edit: clarified response
Technically speaking, at least for certain constants, our current model of the universe says they are infinitely complicated: we aren't promised that all of the constants are computable real numbers (though they are almost certainly not constants we could use to perform hypercomputation). In that sense the universe would be infinitely complex in terms of Kolmogorov complexity.
That said, I'm not sure whether physicists have come to a consensus that any future theory of quantum gravity must involve only computable real numbers.
This is assuming that no expression converging to the constants exists, which I think is an open question. (Of course, the complexity would only be finite if such expressions exist for all the constants. But even so, I think it's an open question.)
Computer simulation of the strong interaction part of the Standard Model is a big research area: you may want to read about lattice QCD. I've written a simple lattice QCD simulation in a few hundred lines of code. If you Google a bit you can probably find some example code. The rest of the Standard Model has essentially the same structure and would only be a few more lines of code.
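To give a feel for how little code such a simulation needs, here is a hedged sketch of a toy lattice gauge theory, a 2D compact U(1) model rather than QCD, written for this discussion rather than taken from the parent's program, but with the same basic structure (link variables, a plaquette action, Metropolis updates):

```python
import numpy as np

# Toy 2D compact U(1) lattice gauge theory: link angles, Wilson plaquette
# action, Metropolis updates. Structurally a (much simpler) cousin of the
# lattice QCD simulation described above, not a reproduction of it.

L = 8          # lattice extent in each direction
beta = 2.0     # inverse coupling
rng = np.random.default_rng(0)

# one angle per site and direction (mu = 0 or 1)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(L, L, 2))

def plaquette(th, x, y):
    """Oriented sum of link angles around the elementary square at (x, y)."""
    return (th[x, y, 0] + th[(x + 1) % L, y, 1]
            - th[x, (y + 1) % L, 0] - th[x, y, 1])

def local_action(th, x, y, mu):
    """Action contribution from the two plaquettes containing link (x, y, mu)."""
    if mu == 0:
        plaqs = (plaquette(th, x, y), plaquette(th, x, (y - 1) % L))
    else:
        plaqs = (plaquette(th, x, y), plaquette(th, (x - 1) % L, y))
    return -beta * sum(np.cos(p) for p in plaqs)

def metropolis_sweep(th):
    """One Metropolis update of every link on the lattice."""
    for x in range(L):
        for y in range(L):
            for mu in range(2):
                old = th[x, y, mu]
                s_old = local_action(th, x, y, mu)
                th[x, y, mu] = old + rng.uniform(-0.5, 0.5)
                s_new = local_action(th, x, y, mu)
                if rng.random() >= np.exp(-(s_new - s_old)):
                    th[x, y, mu] = old  # reject and restore the old link

for _ in range(100):
    metropolis_sweep(theta)

# average plaquette, the standard first observable to check
avg = np.mean([np.cos(plaquette(theta, x, y)) for x in range(L) for y in range(L)])
print("average plaquette:", avg)
```

Real lattice QCD replaces the angles with SU(3) matrices and adds fermion fields, which is roughly where the extra few hundred lines go.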
Assuming you're right about it only being a few more lines of code, and that you didn't use a lot of external libraries, that puts an upper bound at... a few dozen kilobits? I'm guessing that could be made a lot smaller, since you were likely not focused on minimizing the lines of code at all costs.
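As a rough sanity check on that bound (my arithmetic, with assumed but plausible numbers): a few hundred lines at ~50 characters per line is on the order of 15,000 characters, i.e. roughly 120 kilobits of raw source; since source text typically compresses by a factor of several, that lands in the few-dozen-kilobit range suggested above.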
Attacking physics head on is going to be a bit hand-wavy, since we don't have the complete picture of a formal system that encompasses all physics yet. What do we know about estimating actual, stated-as-an-approximate-number-of-bits Kolmogorov complexities for formal systems which we can describe completely? Can we give an informed estimate about the number of bits needed for the rules of chess, or Newtonian-only toy physics?
Attacking physics head on is going to be a bit hand-wavy, since we don't have the complete picture of a formal system that encompasses all physics yet.
How about for "the standard model" instead of "known physics"? That's a much better defined problem.
What do we know about estimating actual, stated-as-an-approximate-number-of-bits Kolmogorov complexities for formal systems which we can describe completely? Can we give an informed estimate about the number of bits needed for the rules of chess, or Newtonian-only toy physics?
Yes, we can (as long as you assume a particular instruction set). I don't know if anyone's done it, though. It's also important to note that we can give upper bounds, but lower bounds are extremely difficult because of the inability to prove, in general, properties of programs like "the output matches physics".
or whatever the size of the true laws of physics when written out as equations on a sheet of paper.
500 bits was a guess. It was not intended to be accurate. It just looks better if he uses an actual number.
I'm fine with a guess being included in the text, even one that isn't elaborated on. The post wasn't about making that particular estimate, and elaborating on it there would have been counter-productive.
However, that doesn't really answer my question. What factors need to be accounted for when estimating the Kolmogorov complexity of physics? Which parts cost the most? Which parts cost almost nothing?
Start small. Think F=ma first. Then build up.
EDIT: what's with the downvotes, people? "Start small" is the first rule of any complex undertaking. Once you master the Newtonian physics estimates (not as straightforward a task as it might seem), you can see how to work up from there.
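In that spirit, here is a minimal sketch of the "F = ma plus Newtonian gravity" starting point (illustrative Python written for this comment; counting the bytes of something like this, after aggressive minification, gives a crude upper bound on the complexity of Newtonian point-particle gravity, relative to a Python interpreter as the reference machine):

```python
# Minimal Newtonian point-mass gravity with naive Euler steps.
# Units are arbitrary (G = 1).

G = 1.0

def step(masses, pos, vel, dt):
    """Advance all particles by one time step under pairwise gravity (F = ma)."""
    n = len(masses)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = [pos[j][k] - pos[i][k] for k in range(3)]
            r = sum(x * x for x in d) ** 0.5
            for k in range(3):
                # acceleration of i toward j: G * m_j / r^2, along the unit vector d/r
                acc[i][k] += G * masses[j] * d[k] / r ** 3
    for i in range(n):
        for k in range(3):
            vel[i][k] += acc[i][k] * dt
            pos[i][k] += vel[i][k] * dt

# two bodies in a roughly circular orbit
masses = [1.0, 0.001]
pos = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
vel = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
for _ in range(1000):
    step(masses, pos, vel, 0.001)
print(pos[1])
```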
If we're talking about the size of a program to simulate the universe, isn't there good evidence that it's not Turing computable? My understanding is that solving Newton's equations for more than two bodies is not computable, to take just one example.
You're confusing chaos with incomputability. Chaos has to do with mixing, error amplification, and imperfect initial conditions. Incomputability has to do with abstract problems and whether or not they can be solved by computers.
In any case, even if the standard model was an approximation of truly Turing-incomputable physics laws... I still want to know what its Kolmogorov complexity is.
My understanding is that solving Newton's equations for more than two bodies is not computable, to take just one example.
What do you mean?
The demo scene does some pretty amazing things with 4096 bytes.
Clicked a couple links and got to this presentation, but I still wasn't able to figure out -- what exactly are they producing that's under 4k? Is it a media file with video and audio (and if so, what format is it in), or is it a small program that in turn produces a larger media file?
They typically produce a program that, when run, plays video and audio in real time after a loading period.
It's the program length that matters, not its time or memory performance
That's a common assumption, but does this really make sense?
It seems to me that if you have a program doing two things - simulating two people, say - then, if it has twice the overhead for the second one, the first should in some sense exist twice as much.
The same argument could be applied to universes.
Okay, I wouldn't normally do this, but.. what's with the downvote? I honestly have no idea what the problem is, which makes avoiding it hard. Please explain.
"Use of the Speed Prior has the disadvantage of leading to less optimal predictions"
Unless I'm misunderstanding something, doesn't this imply we've already figured out that that's not the true prior? Which would be very interesting indeed.
Would someone mind explaining what "the true prior" is? Given that probability is in the mind, I don't see how the concept makes sense.
I was going for "Matches what the universe is actually doing", whether that means setting it equal to the apparent laws of physics or to something like a dovetailer.
Sure, there's no way of being sure we've figured out the correct rule; doesn't mean there isn't one.
I'm confused also. I think they may mean something like "empirically not the optimal prior we can use with a small amount of computation" but that doesn't seem consistent with how it is being used.
"empirically not the optimal prior we can use with a small amount of computation"
I'm not even sure that makes sense, since if this is based on empirical observations, presumably there was some prior prior that was updated based on those observations.
Well, they could be using a set of distinct priors (say 5 or 6 of them) and then noting over time which set required less major updating in general, but I don't think this is what is going on either. We may need to just wait for Baughn to clarify what they meant.
has the disadvantage of leading to less optimal predictions
As far as I know, any computable prior will have that disadvantage relative to full uncomputable SI.
You're saying a Solomonoff Inductor would be outperformed by a variant that weighted quick programs more favorably, I think. (At the very least, it makes approximations computable.)
Whether or not penalizing for space/time cost increases the related complexity metric of the standard model is an interesting question, and there's a good chance it's a large penalty since simulating QM seems to require exponential time, but for starters I'm fine with just an estimate of the Kolmogorov Complexity.
Well, I'm saying the possibility is worth considering. I'm hardly going to claim certainty in this area.
As for QM...
The metric I think makes sense is, roughly, observer-moments divided by CPU time. Simulating QM takes exponential time, yes, but there's an equivalent exponential increase in the number of observer-moments. So QM shouldn't have a penalty vs. classical.
On the flip side this type of prior would heavily favor low-fidelity simulations, but I don't know if that's any kind of strike against it.
In the post Complexity and Intelligence, Eliezer says that the Kolmogorov Complexity (length of shortest equivalent computer program) of the laws of physics is about 500 bits:
Where did this 500 come from?
I googled around for estimates on the Kolmogorov Complexity of the laws of physics, but didn't find anything. Certainly nothing as concrete as 500.
I asked about it on the physics stack exchange, but haven't received any answers as of yet.
I considered estimating it myself, but doing that well involves a significant time investment. I'd need to learn the standard model well enough to write a computer program that simulated it (however inefficiently or intractably; it's the program length that matters, not its time or memory performance).
Based on my experience programming, I'm sure it wouldn't take a million bits. Probably less than ten thousand. The demo scene does some pretty amazing things with 4096 bytes. But 500 sounds like a teeny tiny amount to mention offhand for fitting the constants, the forces, the particles, and the mathematical framework for doing things like differential equations. The fundamental constants alone are going to consume ~20-30 bits each.
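(For a rough sense of where that per-constant figure comes from: a dimensionless constant measured to 6 significant figures carries about log2(10^6) ≈ 20 bits, and one measured to 9 figures about log2(10^9) ≈ 30 bits, assuming no shorter way of specifying it is known.)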
Does anyone have a reference, or even a more worked-through example of an estimate?