cousin_it comments on An inflection point for probability estimates of the AI takeoff? - Less Wrong

11 points | Post author: Prismattic | 29 April 2011 11:37PM




Comment author: cousin_it | 03 May 2011 11:09:12AM | 3 points

Here's an example from the paper that helps illustrate the difference: if the sequence is a gigabyte of random data repeated forever, it can be predicted with finitely many errors by the simple program "memorize the first gigabyte of data and then repeat it forever", though the sequence itself has high K-complexity.
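A toy sketch of that predictor, scaled down from a gigabyte to a short block (hypothetical code, not taken from the paper): once it has memorized the first `period` symbols, it replays them forever and never errs again, so its total number of errors is bounded regardless of how long the sequence runs.

```python
# Toy version of "memorize the first block, then repeat it forever".
# Until `period` symbols have been seen, the predictor just guesses 0;
# afterwards it replays the memorized block, so total errors <= period.

def repeating_predictor(period):
    history = []

    def predict():
        if len(history) < period:
            return 0  # arbitrary guess while still memorizing
        return history[len(history) % period]

    def observe(symbol):
        history.append(symbol)

    return predict, observe

# The sequence: a fixed random-looking block repeated forever.
block = [1, 0, 0, 1, 1, 0, 1, 0]
predict, observe = repeating_predictor(len(block))

errors = 0
for i in range(1000):
    actual = block[i % len(block)]
    if predict() != actual:
        errors += 1
    observe(actual)

print(errors)  # bounded by len(block), no matter how many steps we run
```

Note the asymmetry the example turns on: the predictor's description length is small (the program above plus the constant `period`), even though a literal description of the sequence itself must contain the whole block.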

Comment author: Thomas | 03 May 2011 11:25:45AM | -1 points

No, it doesn't. The algorithm that copies the first GB forever is small, so the Kolmogorov complexity is just over 1 GB.

For the entire sequence.

Comment author: cousin_it | 03 May 2011 11:32:20AM | 3 points

Yes, but the predictor's complexity is much lower than 1GB.

The paper also gives an example of a single predictor that can learn to predict any eventually periodic sequence, no matter how long the period.
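One way such a predictor can work is by hypothesis elimination (a hypothetical reconstruction for illustration, not the paper's actual construction): enumerate pairs "(preperiod n, period p)" in order of n + p, and always predict with the first pair still consistent with the history. A wrong hypothesis is discarded after at most one bad prediction once it starts predicting, so on any eventually periodic sequence the total number of errors is finite.

```python
# Sketch of a single predictor that learns any eventually periodic
# sequence by enumerating hypotheses (preperiod n, period p) and
# predicting with the first hypothesis consistent with the history.

from itertools import count

def consistent(history, n, p):
    # Hypothesis: history[i] == history[i + p] for all i >= n.
    return all(history[i] == history[i + p]
               for i in range(n, len(history) - p))

def predict(history):
    # Enumerate hypotheses in order of n + p; always halts, because
    # (n = len(history), p = 1) is vacuously consistent.
    for total in count(1):
        for p in range(1, total + 1):
            n = total - p
            if consistent(history, n, p):
                t = len(history)
                if t - p >= n:
                    return history[t - p]
                return 0  # too early for this hypothesis; guess

# Preperiod [0, 1, 1], then period-2 tail repeating forever.
seq = [0, 1, 1] + [1, 0, 1, 0] * 100
errors = 0
history = []
for symbol in seq:
    if predict(history) != symbol:
        errors += 1
    history.append(symbol)
print(errors)  # finite, and independent of how many repetitions follow
```

The point is that the predictor's program is a fixed, small object even though it eventually handles periods of any length; the period only shows up in how long learning takes, not in the predictor's complexity.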

Comment author: Thomas | 03 May 2011 11:42:16AM | 0 points

The predictor has to remember what happened; that's what learning means here. Now it's 1 GB heavy.

Comment author: cousin_it | 03 May 2011 11:50:18AM | 4 points

It looks like you just dislike the definitions in the paper and want to replace them with your own. I'm not sure there's any point in arguing about that.