SoerenE comments on Does Kolmogorov complexity imply a bound on self-improving AI? - Less Wrong Discussion

Post author: contravariant 14 February 2016 08:38AM  4 points

Comment author: SoerenE 14 February 2016 07:59:04PM  0 points

This is an interesting way of looking at the maximal potential of AIs. It could be that Oracle Machines are possible in this universe, yet an AI built by humans could never self-improve to that point because of the bound you describe.
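
If I understand the bound correctly (the notation here is mine, not yours): when each successor version A_{t+1} is produced by running the current version A_t deterministically on itself, with no external input, the invariance theorem gives

\[
% A_t denotes the AI after t self-rewrites; this notation is assumed for illustration, not taken from the post
K(A_{t+1}) \le K(A_t) + O(1),
\]

so no chain of pure self-rewrites can push the Kolmogorov complexity more than a constant above that of the seed program.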

I feel that the phrasing "we have reached the upper bound on complexity" and, later, "can rise many orders of magnitude" gives a potentially misleading intuition about how limiting this bound really is. Do you agree that the bound does not prevent us from building "paperclipping" AIs?
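
To make my worry concrete (again a sketch in my own notation): once the AI reads in external data x, subadditivity only gives, up to the usual logarithmic terms,

\[
% x denotes external input; this is the standard subadditivity inequality, not a claim quoted from the post
K(A_{t+1}) \le K(A_t) + K(x) + O(\log(K(A_t) + K(x))),
\]

and K(x) for real-world data can be enormous, which is presumably how the complexity "can rise many orders of magnitude". Neither inequality says anything about optimization power or goal content, so a fairly short program that competently pursues paperclip production does not seem to be ruled out.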