The Kolmogorov complexity ("K") of a string ("S") is the size of the smallest Turing machine that outputs that string. If a Turing machine (equivalently, by the Church-Turing thesis, any AI) has size smaller than K, then no matter how much it rewrites its code, it won't be able to output S. To be precise, it can of course output S by enumerating all possible strings, but it can't single out S and output it exclusively among the options available. Now suppose that S is the source code for an intelligence strictly better than any with complexity < K. We are left with three options:
- The space of all maximally intelligent minds has an upper bound on complexity, and we have already reached it.
- The universe contains new information that can be used to build minds of greater complexity, or:
- There are levels of intelligence that are impossible for us to reach.
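For reference, this is the standard definition being invoked (relative to a fixed universal machine $U$; the comment glosses program length $|p|$ as "Turing machine size"):

$$K(S) = \min\{\, |p| : U(p) = S \,\}$$

By the minimality in this definition, no program $p$ with $|p| < K(S)$ satisfies $U(p) = S$: a shorter program can at best emit S somewhere inside an enumeration of all strings, never as its sole output.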
The Kolmogorov complexity of AGI is really low. You just specify a measure of intelligence, like the universal intelligence test. Then you specify a program which runs this test on every possible program, testing them one at a time for some number of steps. Then it returns the best program found after some huge number of steps.
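As a sanity check on how short that searcher really is, here is a minimal Python sketch of the construction. The program enumeration is standard; `intelligence_score` is a hypothetical placeholder for whatever computable test gets plugged in, not an implementation of any actual intelligence measure:

```python
import itertools

def enumerate_programs():
    """Yield all binary strings in length-lexicographic order; each one is
    read as a candidate program for some fixed universal machine."""
    for length in itertools.count(1):
        for bits in itertools.product("01", repeat=length):
            yield "".join(bits)

def intelligence_score(program, step_budget):
    """Placeholder for the computable intelligence test the comment assumes:
    in the real construction this would run `program` in a battery of test
    environments for at most `step_budget` steps and return its total reward.
    Returning 0.0 here just keeps the sketch runnable."""
    return 0.0

def best_program_found(num_candidates, step_budget):
    """The brute-force construction: test candidate programs one at a time
    and return the highest-scoring one seen within the budget."""
    best, best_score = None, float("-inf")
    for program in itertools.islice(enumerate_programs(), num_candidates):
        score = intelligence_score(program, step_budget)
        if score > best_score:
            best, best_score = program, score
    return best

print(best_program_found(num_candidates=100, step_budget=1000))
```

The searcher itself is only a few lines; all of the description length is hidden in `intelligence_score`, which is exactly where the reply below pushes back.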
I think Shane Legg's universal intelligence measure itself involves Kolmogorov complexity, so it's not computable and will not work here. (Also, it involves a function V encoding our values; if human values are irreducibly complex, that should add a bunch of bits.)
In general, I think this approach seems too good to be true? An intelligent agent is one which performs well in its environment. But don't the "no free lunch" theorems show that you need to know what the environment is like in order to do that? Intuitively, that's what should cause the Kolmogorov complexity to go up.