
MrMind comments on Does Kolmogorov complexity imply a bound on self-improving AI? - Less Wrong Discussion

4 Post author: contravariant 14 February 2016 08:38AM


Comment author: MrMind 15 February 2016 11:20:48AM 1 point [-]

An important remark: a program that is better at a task is not necessarily more complex than a program that is worse. Case in point, AlphaGo: definitely better than almost every human at Go, but definitely less complex than a human mind.
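The remark above can be illustrated with a crude but computable proxy: Kolmogorov complexity itself is uncomputable, but the compressed length of a program's source is an upper bound on it (up to an additive constant). The sketch below (my own toy example, not from the comment) compares a short general primality test against a hard-coded lookup table that does the same job on a bounded domain; the bigger program is in no way better at the task.

```python
import zlib

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Source of a short, general primality test.
prog_general = (
    "def is_prime(n):\n"
    "    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))\n"
)

# Source of a lookup table that matches it for n < 1000, written out in full.
# No better at the task on that domain, yet a much longer description.
table = [n for n in range(1000) if is_prime(n)]
prog_table = f"def is_prime(n):\n    return n in {table}\n"

# Compressed length is a computable upper bound on Kolmogorov complexity.
k_general = len(zlib.compress(prog_general.encode()))
k_table = len(zlib.compress(prog_table.encode()))

print(k_general, k_table)  # the general program is far "simpler"
```

Compressed length only bounds Kolmogorov complexity from above, so this is suggestive rather than a proof, but it makes the direction of the inequality concrete.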

Anyway, accepting the premise:

1 is demonstrably false for any reasonable definition of intelligence (e.g., a Turing machine counts as more intelligent if it can solve a problem that another TM cannot);

2 is surely true, since a program can grow in complexity when given more memory and a way to do unsupervised learning;
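Premise 2 can be made concrete with a toy model (my own sketch, not from the comment): a "learner" whose state is just a set of observed strings. As it ingests fresh, incompressible data, the compressed size of its serialized state, again a crude upper bound on its Kolmogorov complexity, keeps growing; memory plus unsupervised accumulation is enough for complexity to increase.

```python
import os
import pickle
import zlib

# Toy learner: its entire "knowledge" is a set of observed byte strings.
state = set()
sizes = []

for _ in range(5):
    # Unsupervised "observation": 64 bytes of incompressible random data.
    state.add(os.urandom(64))
    # Compressed serialized state bounds the learner's description length.
    blob = zlib.compress(pickle.dumps(sorted(state)))
    sizes.append(len(blob))

print(sizes)  # monotonically increasing: complexity grows with experience
```

Because random bytes do not compress, each observation forces the bound up; a learner that only saw redundant data would of course plateau instead.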

3 depends too much on implementation details to judge, but it may be trivially true if the gap to be reached is sufficiently large.