Does anyone happen to know the Kolmogorov complexity (in some suitable, standard UTM -- or, failing that, in lines of Python or something) of computable approximations of AIXI?
I'm writing a paper on how simple or complicated intelligence is, and on what that implies for AI forecasting. In that context, adopt Shane Legg's measure of intelligence: let "intelligence" be a system's average goal-achievement across the different "universe" programs that might be generating its reward at each time step, each weighted by the universe program's simplicity.
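For reference, Legg and Hutter's universal intelligence measure is usually written (up to the choice of reference UTM) as:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

where $E$ is the class of computable reward-bounded environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected total reward agent $\pi$ earns in $\mu$.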
Let k(x, y) denote the length of the shortest program that attains an intelligence of at least x when allowed an amount y of computation (i.e., y steps on our canonical UTM). Then, granting certain caveats, AIXI and its computable approximations tell us that the limit of k(x, y) as y approaches infinity is pretty small for any computably attainable value of x. (Right?)
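To pin down what I mean by k(x, y), here's a toy sketch: enumerate programs in order of length and return the length of the first one whose budgeted intelligence reaches x. The `intelligence` function below is a hypothetical stand-in for a real Legg-style evaluation (it just scores a "program" by the fraction of named toy environments it mentions), so only the search structure is meant seriously.

```python
import math

def intelligence(program, budget):
    """Hypothetical stand-in: fraction of toy environments the program
    'solves' within the compute budget. A real evaluation would run the
    program in each environment and simplicity-weight its rewards."""
    environments = ["env_a", "env_b", "env_c"]
    solved = sum(1 for env in environments if env in program and budget > 0)
    return solved / len(environments)

def k(x, y, programs):
    """Length of the shortest program attaining intelligence >= x
    given computation budget y (math.inf if none qualifies)."""
    for program in sorted(programs, key=len):
        if intelligence(program, y) >= x:
            return len(program)
    return math.inf

programs = ["env_a", "env_a env_b", "env_a env_b env_c"]
print(k(2/3, 100, programs))  # → 11 (shortest program solving 2 of 3 envs)
```

The point of the definition is that the real k(x, y) is about the shortest program at a given compute budget, so the interesting quantity is its limit as the budget y grows without bound.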
What I'd like is to stick an actual number, or at least an upper bound, on "pretty small".
If someone could help me out, I'd be much obliged.
Hi.
A couple of comments from the author of mc-aixi(fac-ctw).
There is a massive difference in generalisation ability between Solomonoff Induction and the scaled-down Bayes mixture used by mc-aixi(fac-ctw). The mixture used in this approximation benefits from more CPU power, but it will never get anywhere near the full power of Solomonoff Induction. The class of environments the agent supports is described in more detail in the paper Louie references.
Let's be clear though - 100 years of Moore's Law would significantly improve the performance of even the current implementation of the agent. However, it is important not to forget about data efficiency. mc-aixi(fac-ctw) can only exploit very simple structure given reasonable amounts of data. If this structure doesn't exist, the performance will not be much better than learning by rote. Thus Louie's comment about image recognition is spot on. Future versions will be able to generalise more effectively for certain kinds of data, but expect progress to be very incremental.
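To illustrate the "scaled-down Bayes mixture" idea in the simplest possible setting (this is not CTW, just a toy for intuition): a mixture over a small model class can only find structure that some model in the class captures. Below, the class is three Bernoulli bit-predictors, and the posterior weights concentrate on whichever one best fits the data.

```python
class BayesMixture:
    def __init__(self, thetas):
        self.thetas = thetas  # candidate probabilities that the next bit is 1
        self.weights = [1.0 / len(thetas)] * len(thetas)  # uniform prior

    def predict_one(self):
        """Mixture probability that the next bit is 1."""
        return sum(w * t for w, t in zip(self.weights, self.thetas))

    def update(self, bit):
        """Bayesian posterior update after observing `bit`."""
        likelihoods = [t if bit == 1 else 1.0 - t for t in self.thetas]
        self.weights = [w * l for w, l in zip(self.weights, likelihoods)]
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]

mix = BayesMixture([0.1, 0.5, 0.9])
for bit in [1, 1, 1, 0, 1, 1, 1, 1]:  # data mostly 1s
    mix.update(bit)
print(mix.predict_one())  # concentrates toward theta = 0.9
```

Solomonoff Induction is the same mixture construction taken over all computable environments with simplicity-weighted priors; the gap Joel describes is between that class and the restricted class a tractable agent can actually mix over.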
For what it's worth, the next version of mc-aixi will contain about 3x more lines of source code. It will also be restricted to toy domains, but they will be significantly more interesting than what is currently published.
Happy to answer any other questions.
Cheers, Joel