Gregory_Conen comments on Recognizing Intelligence - Less Wrong

Post author: Eliezer_Yudkowsky 07 November 2008 11:22PM


Comment author: Gregory_Conen 08 November 2008 11:31:22PM

You mentioned earlier that intelligence also optimizes for subgoals: tasks that indirectly lead to terminal value without being directly tied to it. These subgoals would likely be easier to guess at than the ultimate terminal values.

For example, a high-amperage, high-temperature superconductor, especially one with significant current flowing through it, is highly unlikely to have occurred by chance. It is also very good at carrying electrons from one place to another. Therefore, it seems useful to hypothesize that it is the product of an optimization process aiming to transport electrons. That might be a terminal goal (because somebody programmed a superintelligent AI to "build this circuit"), or, more likely, a subgoal. Either way, it implies the presence of intelligence.