
Miller comments on The Kolmogorov complexity of a superintelligence - Less Wrong Discussion

Post author: Thomas, 26 June 2011 12:11PM, 2 points


Comment author: Miller, 26 June 2011 06:01:46PM, 1 point

Interesting. How does the program determine hard questions (and their answers) without qualifying as generating them itself? That is, the part about enumerating other programs seems superfluous.

[Edit added seconds later] Ok, I see: perhaps it could ask something like "predict what's going to happen 10 seconds from now", then wait and see whether the prediction is correct after the universe runs for that long. In that case, the parent program isn't computing the answer itself.
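[Editor's note: a toy sketch of Miller's "wait and compare" idea, with a rule-110 cellular automaton standing in for the universe. All names here are illustrative, not from the thread. The checker never holds the answer in advance; it lets the "universe" evolve and only compares states at the end.]

```python
def step(state):
    """One rule-110 update of a binary tuple, with wraparound at the edges."""
    n = len(state)
    rule = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    return tuple(rule[(state[i - 1], state[i], state[(i + 1) % n])]
                 for i in range(n))

def check_prediction(initial, predicted, steps=10):
    """Let the 'universe' run forward, then just compare against the claim."""
    state = initial
    for _ in range(steps):
        state = step(state)
    return state == predicted

# A stand-in "superintelligence" whose prediction happens to be correct:
start = (0, 0, 0, 1, 0, 0, 0, 0)
oracle_guess = start
for _ in range(10):
    oracle_guess = step(oracle_guess)

print(check_prediction(start, oracle_guess))  # → True
```

The point of the sketch: the parent program's only job at judgment time is an equality comparison; the hard work of producing the future state is done by the universe (here, the simulation) rather than by the parent.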

Comment author: cousin_it, 26 June 2011 06:20:23PM, 1 point

You don't need to be able to generate the answer to your hard problem yourself, only to check that the superintelligence's offered answer is correct. These two abilities are equivalent if computing resources are unlimited, but you could run the superintelligence for a limited time... This line of thought seems to lead into the jungle of complexity theory, so you should probably take my comments with a grain of salt :-)
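[Editor's note: the checker/generator asymmetry cousin_it gestures at can be sketched with SAT, the standard example from complexity theory. The formula and function names below are illustrative, not from the thread: verifying a proposed assignment is linear in the formula size, while the naive generator searches all 2^n assignments.]

```python
from itertools import product

# Formula in CNF: each clause is a list of (variable_index, negated) literals.
# This encodes (x0 or not x1) and (x1 or x2) and (not x0 or not x2).
formula = [[(0, False), (1, True)],
           [(1, False), (2, False)],
           [(0, True), (2, True)]]

def check(assignment, formula):
    """Cheap verifier: every clause must contain one satisfied literal."""
    return all(any(assignment[v] != neg for v, neg in clause)
               for clause in formula)

def generate(formula, n_vars):
    """Expensive generator: brute-force search over all 2^n assignments."""
    for bits in product([False, True], repeat=n_vars):
        if check(bits, formula):
            return bits
    return None

solution = generate(formula, 3)
print(solution, check(solution, formula))  # → (False, False, True) True
```

Whether checking really stays cheap while generating stays hard is exactly the P vs. NP-flavored question the comment is hedging about, so treat this as an intuition pump rather than a proof.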