
JamesAndrix comments on Recursive Self-Improvement - Less Wrong

Post author: Eliezer_Yudkowsky, 01 December 2008 08:49PM





Comment author: JamesAndrix, 02 December 2008 06:00:27AM, 0 points

Hal: At some point, you've improved a computer program. You had to decide, somehow, what tradeoffs to make, on your own. We should assume that a superhuman AI will be at least as good at improving programs as we are.

I can't think of any programs of broad scope that I would call unimprovable. (The AI might not be able to improve a given algorithm on a given iteration, but if it ever declared itself perfectly optimized, I'd expect we would declare it broken. In fact, that sounds like what happened with EURISKO. An AGI should at least keep trying.)

Also: any process that it knows how to do, that it has learned, it can implement directly in its own code, so it does not have to 'think things out' with its high-level thinking algorithms. This is repeatable for everything it learns. (We can't do the same thing to create an AI, because we don't have access to our own algorithms, or really even our memories. If an AI can learn to recognize breeds of dogs, then it can trace its own thoughts to determine by what process it does that. Since the learning algorithm probably isn't perfectly optimized for learning to recognize dogs, the learned process it is using is probably not perfectly efficient.)
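The idea of taking a learned process and implementing it directly in code, rather than re-running it through general-purpose machinery, can be sketched in miniature. The toy below is purely illustrative (the rule list, feature names, and functions are hypothetical, not anything from the comment): a system holds "learned" knowledge as inspectable data, evaluates it through a general interpreter, then compiles that same data into a specialized function that skips the interpreter entirely.

```python
# Learned knowledge stored as inspectable data: (feature, threshold, label)
# rules, checked in order. All names here are made up for illustration.
learned_rules = [
    ("weight_kg", 40.0, "large breed"),
    ("weight_kg", 10.0, "medium breed"),
]
default_label = "small breed"

def interpret(rules, default, example):
    """General-purpose interpreter: walks the rule list on every call."""
    for feature, threshold, label in rules:
        if example[feature] >= threshold:
            return label
    return default

def compile_rules(rules, default):
    """Compile the learned rules into a specialized function.

    This is the 'implement it in its own code' step: the system reads its
    own learned data structure and emits direct code with no rule-walking.
    """
    lines = ["def classify(example):"]
    for feature, threshold, label in rules:
        lines.append(f"    if example[{feature!r}] >= {threshold}:")
        lines.append(f"        return {label!r}")
    lines.append(f"    return {default!r}")
    namespace = {}
    exec("\n".join(lines), namespace)
    return namespace["classify"]

classify = compile_rules(learned_rules, default_label)

# The compiled function agrees with the interpreted knowledge.
dog = {"weight_kg": 25.0}
assert interpret(learned_rules, default_label, dog) == classify(dog)
```

The compiled function is behaviorally identical but structurally simpler, which mirrors the point above: the learned process was produced by a general learning mechanism, so its direct implementation leaves room for further optimization.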

The metacognitive level becoming part of the object level lets you turn knowledge and metaknowledge directly into cognitive improvements, for every piece of knowledge, including knowledge about how to program.