Manfred comments on Does Kolmogorov complexity imply a bound on self-improving AI? - Less Wrong

4 Post author: contravariant 14 February 2016 08:38AM




Comment author: Manfred 14 February 2016 12:59:26PM *  1 point [-]

A box that runs all possible Turing machines may contain simulations of every finite intelligence, but in terms of actually interacting with the world it's going to be slightly less effective than a rock. You could probably fix this by doing something like approximate AIXI, but even if it is possible to evade thermodynamics, all of this requires infinite information storage, which seems even less likely.
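The "box" here is essentially a dovetailer: it interleaves steps of program 0, program 1, program 2, and so on, so that every program eventually gets unbounded run time. A minimal sketch (the `run_step` callback and the phase bound are illustrative assumptions, not anything from the comment) shows the structure, and also why the ensemble has no coordinated behavior — no single program's output is ever selected or acted on:

```python
def dovetail(run_step, max_phase):
    """Dovetail over programs 0, 1, 2, ...: in phase n, advance
    programs 0..n-1 by one step each. Every program eventually
    receives arbitrarily many steps, but the box as a whole never
    privileges any one program's output."""
    states = {}   # program index -> current state
    trace = []    # (program, state-after-step) pairs, in execution order
    for phase in range(1, max_phase + 1):
        for prog in range(phase):
            states[prog] = run_step(prog, states.get(prog, 0))
            trace.append((prog, states[prog]))
    return trace

# Toy step function: "running" program p just advances a step counter.
trace = dovetail(lambda p, s: s + 1, max_phase=4)
# After 4 phases: program 0 has run 4 steps, program 3 only 1,
# and the trace is just an undifferentiated interleaving.
```

This is why the box is "less effective than a rock" at interacting with the world: the useful computations are in there somewhere, but extracting them requires exactly the selection work (e.g. the AIXI-style weighting mentioned above) that the bare enumeration omits.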

Comment author: Gurkenglas 14 February 2016 04:04:26PM *  1 point [-]

That box is merely a proof that the intelligence of patterns in a nonhalting Turing machine need not be bounded. If we cannot get infinite space/time, we run into other problems before Kolmogorov complexity becomes the issue. (As I understood it, the OP was about how even infinite resources cannot escape the complexity bound our laws of physics dictate.)