
Gurkenglas comments on Does Kolmogorov complexity imply a bound on self-improving AI?

Post author: contravariant 14 February 2016 08:38AM



Comment author: Gurkenglas 14 February 2016 11:23:46AM 1 point

You are assuming that the Turing machine needs to halt. Consider a universe much simpler than ours (?): one consisting of a single running Turing machine. If you subscribe to Pattern Identity Theory, there is a simple way for that machine to host an infinite hierarchy of increasingly intelligent minds: run all Turing machines in parallel, dovetailing them Hilbert's-Hotel style so that every machine gets infinitely many steps to work with. The machine never halts, but it doesn't need to. If an AGI in our universe could figure out a way to circumvent the heat death, it could do something similar.
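A minimal sketch of the dovetailing schedule described here, assuming each "Turing machine" is modeled as a Python generator that yields once per simulated step (`toy_machine` and the `rounds` cap are illustrative stand-ins, not part of the original construction):

```python
from itertools import count

def toy_machine(i):
    """Stand-in for the i-th Turing machine: emits one value per simulated step, forever."""
    for step in count():
        yield (i, step)

def dovetail(machine, rounds=None):
    """Round n admits machine n, then advances machines 0..n by one step each,
    so every machine eventually receives infinitely many steps.
    With rounds=None the schedule runs forever -- which is the point."""
    sims = []
    schedule = count() if rounds is None else range(rounds)
    for n in schedule:
        sims.append(machine(n))   # admit machine n in round n
        for sim in sims:
            next(sim, None)       # one step each; a halted machine is simply skipped

dovetail(toy_machine, rounds=4)   # finite cap only so this demo terminates
```

The triangular schedule (round n steps machines 0 through n) is the diagonalization the comment alludes to: no single machine ever blocks the rest, and each one still gets unboundedly many steps.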

Comment author: Manfred 14 February 2016 12:59:26PM 1 point

A box that runs all possible Turing machines may contain simulations of every finite intelligence, but in terms of actually interacting with the world it's going to be slightly less effective than a rock. You could probably fix this by doing something like approximate AIXI, but even if it is possible to evade thermodynamics, all of this takes infinite information storage, which seems even less likely.

Comment author: Gurkenglas 14 February 2016 04:04:26PM 1 point

That box is merely a proof that the intelligence of the patterns inside a non-halting Turing machine need not be bounded. If we cannot get infinite space/time, we run into other problems before Kolmogorov complexity becomes the binding constraint. (As I understood it, the OP was about how even infinite resources cannot escape the complexity bound our laws of physics dictate.)
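For reference, a standard statement of the bound this thread is circling (the framing and the constant are my gloss, not the OP's): on a universal machine U, any program p that halts with output x witnesses

```latex
% If U(p) = x and p halts, the output's Kolmogorov complexity is at most
% the program's length plus a machine-dependent constant:
\[
  U(p) = x \;\text{ and } p \text{ halts} \quad\Longrightarrow\quad K(x) \le |p| + c_U
\]
```

A non-halting computation never fixes a final output x, so there is no single object for this bound to apply to. That is the loophole the dovetailing construction exploits.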