
John_Maxwell2 comments on Recursive Self-Improvement

Post author: Eliezer_Yudkowsky, 01 December 2008 08:49PM


Comment author: John_Maxwell2, 02 December 2008 06:35:23PM, 4 points

One source of diminishing returns is upper limits on what is achievable. For instance, Shannon proved that every noisy channel has a fixed upper bound on its error-free capacity: no amount of intelligence can squeeze more error-free communication out of a channel than that bound allows. There are also limits on what is learnable by induction alone, even with unlimited resources and unlimited time (cf. "The Logic of Reliable Inquiry" by Kevin T. Kelly). Limits of this sort indicate that an AI cannot improve its meta-cognition exponentially forever; at some point, the improvements have to level off.
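To put a number on the first example: the Shannon-Hartley theorem gives the capacity of a noisy channel as C = B * log2(1 + S/N), so capacity grows only logarithmically in signal power. A minimal Python sketch below makes the diminishing returns visible; the 1 MHz bandwidth and the SNR values are illustrative assumptions, not figures from the comment.

    import math

    def channel_capacity(bandwidth_hz, snr):
        """Shannon-Hartley capacity of a noisy (AWGN) channel, in bits per second."""
        return bandwidth_hz * math.log2(1 + snr)

    bandwidth_hz = 1e6  # a 1 MHz channel -- illustrative value
    for snr in (1, 2, 4, 8, 16, 32, 64):
        mbits = channel_capacity(bandwidth_hz, snr) / 1e6
        print("SNR %3d -> %.2f Mbit/s" % (snr, mbits))

    # Prints 1.00, 1.58, 2.32, 3.17, 4.09, 5.04, 6.02 Mbit/s:
    # each doubling of signal power buys less than ~1 Mbit/s more,
    # so capacity grows logarithmically in power, never exponentially,
    # and no cleverness on the transmitter's part changes that bound.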