The Kolmogorov complexity ("K") of a string ("S") is the size of the smallest Turing machine that outputs that string. If a Turing machine (equivalently, by the Church-Turing thesis, any AI) has size smaller than K, then no matter how much it rewrites its own code, it will not be able to output S. To be precise, it can of course output S by enumerating all possible strings, but it cannot single S out and output it exclusively among the available options. Now suppose that S is the source code for an intelligence strictly better than all those with complexity less than K. We are then left with three options:

  1. The space of all maximally intelligent minds has an upper bound on complexity, and we have already reached it. 
  2. The universe contains new information that can be used to build minds of greater complexity; or
  3. There are levels of intelligence that are impossible for us to reach.

Now, this isn't meant to be a practical argument against AI being useful. I have no doubt that just by applying, at the speed of integrated circuits, the principles humanity has already shown it can apply, we can rise many orders of magnitude in intellectual and productive ability. But it is a serious wake-up call for anyone who thinks that a self-improving AI will achieve anything that can possibly be achieved.
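As a concrete handle on what K measures: the exact value is uncomputable, but any off-the-shelf compressor gives an upper bound on it, since a fixed decompressor plus the compressed data is itself a program that outputs S. A minimal sketch in Python:

import zlib

def k_upper_bound(s: bytes) -> int:
    # The length of the compressed data bounds K(s) from above, up to the
    # fixed size of the decompressor; the exact K(s) cannot be computed.
    return len(zlib.compress(s, 9))

print(k_upper_bound(b"a" * 100_000))  # very regular string: the bound is tiny compared to the input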

 

14 comments
gjm:

The relevant value of K here is something like the total number of bits of information embodied in, and available to, the whole human race, including whatever other bits of the universe we may happen to recruit for our use. So K might be extremely large.

Now let me remind you that the size and (in ordinary informal terms) complexity, and indeed the speed, of a hypothetical Thinking Device scarcely influence its Kolmogorov complexity. If it takes some number of bits to specify a machine with 3^^^3 bits of memory or 3^^^3 artificial neurons, it takes only a few bits more to replace 3^^^3 with 3^^^^3.
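As a small illustration of that, assuming Knuth's up-arrow notation: the description of a quantity like 3^^^^3 is just a short recursive function plus one call, and adding another arrow costs a single extra character of source even though the quantity being specified explodes.

def up_arrow(a, n, b):
    # Knuth's up-arrow: a followed by n arrows, then b.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# up_arrow(3, 3, 3) denotes 3^^^3 and up_arrow(3, 4, 3) denotes 3^^^^3: one
# extra character of description, an unimaginably larger machine specified.
# (Don't evaluate these; only the description length matters for K.)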

So I'm going to go with "option 1, for all practical purposes". All it requires is that there be a recipe for How To Think that can be stated in (let's say) not too many orders of magnitude more space than the entire intellectual output of the human race to date, given that we're allowed to equip it with essentially-arbitrary amounts of actual hardware, such that other ways of thinking aren't in principle more powerful in the sense that they do better no matter how much hardware one throws at ours. That seems pretty plausible to me.

Eliezer wrote a blog post about this question!

An important remark: a program that is better at a task is not necessarily more complex than a program that is worse. Case in point, AlphaGo: definitely better than almost every human at Go, but definitely less complex than a human mind.

Anyway, accepting the premise:

1 is demonstrably false for any reasonable definition of intelligence (e.g., one Turing machine counting as more intelligent than another if it can solve a problem the other cannot);

2 is surely true, since a program can increase in complexity when given more memory and a way to do unsupervised learning;

3 is too dependent on implementation details to judge, but it may be trivially true for a sufficiently large gap to reach.

The Kolmogorov complexity of AGI is really low. You just specify a measure of intelligence, like the universal intelligence test. Then you specify a program which runs this test, on every possible program, testing them one at a time for some number of steps. Then it returns the best program found after some huge number of steps.
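A sketch of that construction follows, with enumerate_programs and intelligence_score as hypothetical placeholders (an enumerator of all programs up to some length, and a bounded-step run of whatever computable intelligence test was chosen):

def best_program(max_length, step_budget):
    # Brute-force search: score every program up to max_length with a
    # bounded intelligence test and keep the best one found.
    best, best_score = None, float("-inf")
    for program in enumerate_programs(max_length):        # finitely many candidates
        score = intelligence_score(program, step_budget)  # runs for at most step_budget steps
        if score > best_score:
            best, best_score = program, score
    return best

The searcher itself is short, which is the point: the Kolmogorov complexity of whatever it returns is bounded by the length of this loop plus the length of the test, not by the sophistication of the returned program's behaviour.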

I think Shane Legg's universal intelligence measure itself involves Kolmogorov complexity, so it's not computable and will not work here. (Also, it involves a function V encoding our values; if human values are irreducibly complex, that should add a bunch of bits.)

In general, I think this approach seems too good to be true? An intelligent agent is one which performs well in its environment. But don't the "no free lunch" theorems show that you need to know what the environment is like in order to do that? Intuitively, that's what should cause the Kolmogorov complexity to go up.

He made an actual test of it, which involved generating random Brainfuck programs, and then tested various reinforcement learning algorithms on it to measure their intelligence, and even tested humans.

That is an actual computable test that can be run.
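For concreteness, a toy sketch of the environment-generation step, assuming environments are simply random strings over the eight Brainfuck instructions (the actual test does considerably more, such as weighting sampled programs by length):

import random

BF_OPS = "+-<>[].,"  # the eight Brainfuck instructions

def random_bf_environment(length: int) -> str:
    # Sample one random Brainfuck program to act as an environment; an agent
    # is then scored on how well it does across many such environments.
    return "".join(random.choice(BF_OPS) for _ in range(length))

print(random_bf_environment(30))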

The no free lunch theorems apply to a completely uninformative prior. We have a prior: the Solomonoff prior, where you assume the environment was generated by a computer program, and that simpler programs are more likely than more complex ones. With that, some AI programs will be objectively better than others. You can have a free lunch.
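A toy rendering of that idea, assuming environments are given as programs whose length stands in for their complexity (the real Solomonoff prior sums over all programs consistent with the observations and is not computable):

def simplicity_weight(program: str) -> float:
    # Shorter environment programs get exponentially more prior weight.
    return 2.0 ** (-len(program))

def expected_score(agent_score, environments):
    # agent_score is a hypothetical placeholder: it returns the agent's
    # performance in one environment. The simplicity-weighted average is
    # what lets some agents be objectively better than others.
    total = sum(simplicity_weight(e) for e in environments)
    return sum(simplicity_weight(e) * agent_score(e) for e in environments) / total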

The output of this procedure would be at least as good as the best approximation of AIXI we can make with the same amount of computing power. In fact it basically would be the best approximation of AIXI possible, since it assumes the same prior and task.

Though of course it's totally impractical, since it would require unimaginably huge computers to perform this brute force search.

I hate to say it, but this seems like an empty triviality. People have already mentioned the old post of Eliezer's. People have also touched on the fact that a human brain seems more complex (read: inefficient and arbitrary) than I would expect a good self-improving AGI to be. This at least suggests that the OP does little to illuminate the problem.

If we want to talk about technicalities of dubious relevance, I don't think the definition of Kolmogorov complexity logically requires what you need it to mean. The Turing machine does not need to list "all possible strings" in order to evade the problem; technically it just has to output something else in addition to the string S. This may turn out to matter somehow when it comes to the credibility of option #2, e.g. by allowing empirical tests.

Keep in mind that adding a 'random number' instruction to a Turing machine allows it to create output of unbounded complexity, and that pretty much all compute hardware these days contains a hardware RNG based on quantum randomness.
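The same point from the software side, assuming os.urandom is ultimately fed by the machine's hardware entropy source: genuinely random output has no shorter description, so it does not compress.

import os
import zlib

noise = os.urandom(100_000)          # bytes drawn from the OS entropy pool
print(len(zlib.compress(noise, 9)))  # about 100,000 or slightly more: no regularity to exploit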

You are assuming that the Turing machine needs to halt. In a universe much simpler than ours (?), namely the one where a single Turing machine runs, if you subscribe to Pattern Identity Theory, there's a simple way to host an infinite hierarchy of increasing intelligences: Simply run all Turing machines in parallel. (Using diagonalization from Hilbert's Hotel to give everyone infinite steps to work with.) The machine won't ever halt, but it doesn't need to. If an AGI in our universe can figure out a way to circumvent the heat death, it could do something similar.
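The standard way to "run all Turing machines in parallel" is dovetailing, sketched below with Machine as a hypothetical placeholder for some TM implementation: at stage n, machine number n is started and every machine started so far gets one more step, so each machine receives unboundedly many steps even though the loop never halts.

def run_all_machines():
    started = []   # machines started so far
    n = 0
    while True:    # never halts, by design
        started.append(Machine(n))  # start Turing machine number n
        for m in started:           # give every started machine one more step
            m.step()
        n += 1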

A box that runs all possible Turing machines may contain simulations of every finite intelligence, but in terms of actually interacting with the world it's going to be slightly less effective than a rock. You could probably fix this by doing something like approximate AIXI, but even if it is possible to evade thermodynamics, all of this takes infinite information storage, which seems even less likely.

That box is merely a proof that the intelligence of patterns in a non-halting Turing machine need not be bounded. If we cannot get infinite space/time, we run into other problems sooner than Kolmogorov complexity. (As I understood it, the OP was about how even infinite resources cannot escape the complexity our laws of physics dictate.)

An interesting question.

If we write an N-bit program that runs without interacting with the world, then no matter what programming language we use, there are only about 2^N possible outputs. But if this program has made O bits of observations and has taken A bits of past actions in the environment, there are more like 2^(N+O+A) possible outcomes - and O and A can be huge compared to N. (Although that number is an overestimate - what we should add to N is the entropy of (O,A).)

So in terms of the K-complexity of artifacts that an AI can build, or output it can produce, the complexity can be incredibly (though not infinitely) high thanks to the AI's ability to interact with the environment. The real world's limits on memory and computational time will probably be stricter bounds than the limit on K-complexity.

But there are still only 2^N programs we can write. One might hype this fact up by saying that any AI worth its salt can do things too complex to have been pre-programmed. Or one might be pessimistic about it - if we want the AI to have complicated properties across a broad range of environments, that complexity is probably going to have to come from the code, and even slightly wrong code could throw us into a completely different region of the 2^(N+O+A) possible outcomes.

What sorts of output strings are you missing?

Calculating Kolmogorov complexities is hard because it is hard to differentiate between programs that run a long time and halt and programs that run a long time and never halt.

If God gave you a 1.01 MB text file and told you "This program computes BB(1000000)", then you could easily write a program to find the Kolmogorov complexity of any string less than 1 MB.

from collections import defaultdict

# Map each output string to the length of the shortest program found so far.
kolmogorov_map = defaultdict(lambda: float("inf"))

# all_programs_shorter_than and run are hypothetical helpers; BB_1000000 is
# the value from the God-given file.
for p in all_programs_shorter_than(1_000_000):
    o, halted = run(p, max_steps=BB_1000000)  # run p for at most BB(1000000) steps
    if not halted:
        continue                              # p does not ever halt, so it has no output
    if kolmogorov_map[o] > len(p):
        kolmogorov_map[o] = len(p)            # found a smaller program that outputs o
Replace BB(1000000) with a smaller number, say A(Graham's number, Graham's number), and this calculator works for all programs which halt in less than A(Graham's number, Graham's number) steps. That includes pretty much every program I care about! For instance, it includes every program which could run under known physics in the age of the universe.

It is an interesting way of looking at the maximal potential of AIs. It could be that Oracle Machines are possible in this universe, but an AI built by humans cannot self-improve to that point because of the bound you are describing.

I feel that the phrasing "we have reached the upper bound on complexity" and later "can rise many orders of magnitude" gives a potentially misleading intuition about how limiting this bound is. Do you agree that this bound does not prevent us from building "paperclipping" AIs?