Phil_Goetz6 comments on Recursive Self-Improvement - Less Wrong
The rapidity of evolution from chimp to human is remarkable, but you can infer what you're trying to infer only if you believe evolution reliably produces steadily more intelligent creatures. It might be that conditions temporarily favored intelligence, leading to humans; our rapid rise is then explained by the anthropic principle, not by universal evolutionary dynamics.
Knowledge feeds on itself only when it is continually spread out over new domains. If you keep trying to learn more about the same domain - say, to cure cancer, or to make faster computer chips - you get logarithmic returns: maintaining constant output requires an exponential increase in resources. (IIRC, keeping Moore's Law going has required exponentially increasing capital investments; the money will run out before the science does.) Nicholas Rescher wrote about this in the 1970s and 1980s.
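To make the arithmetic concrete, here is a toy model (my own illustration, not Rescher's formalism): if cumulative knowledge in a fixed domain grows with the logarithm of cumulative investment, then each constant increment of knowledge costs a constant *multiple* of the previous investment - i.e., exponentially growing resources for linear progress.

```python
import math

def knowledge(total_investment):
    # Toy assumption: knowledge in one domain ~ log of cumulative investment.
    return math.log(total_investment)

# Invert the model: one unit of knowledge per step means K = log(I),
# so the required cumulative investment is I = exp(K).
investments = [math.exp(k) for k in range(1, 6)]

# Knowledge gained per step stays constant (1 unit each step)...
gains = [knowledge(b) - knowledge(a)
         for a, b in zip(investments, investments[1:])]

# ...but the investment needed multiplies by e (~2.72x) every step.
ratios = [b / a for a, b in zip(investments, investments[1:])]
```

Nothing hinges on the base of the logarithm; any logarithmic returns curve gives the same qualitative picture of diminishing returns within a single domain.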
This is important because it says that, if an AI keeps trying to learn how to improve itself, it will get only logarithmic returns.
This is the most important and controversial claim, so I'd like to see it better supported. I understand the intuition, but it is convincing as an intuition only if you suppose there are no negative feedback mechanisms anywhere in the whole process, which seems unlikely.