Gregory_Conen comments on Recognizing Intelligence - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (30)
You mentioned earlier that intelligence also optimizes for subgoals: tasks that indirectly lead to terminal value without being directly tied to it. These subgoals would likely be easier to guess than the ultimate terminal values.
For example, a high-amperage, high-temperature superconductor, especially one with significant current flowing through it, is highly unlikely to have occurred by chance. It is also very good at carrying electrons from one place to another. Therefore, it seems useful to hypothesize that it is the product of an optimization process aiming to transport electrons. Transporting electrons might be a terminal goal (because somebody programmed a superintelligent AI to "build this circuit"), or, more likely, a subgoal. Either way, it implies the presence of intelligence.
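The inference here is essentially a Bayesian update: even with a small prior on an optimizer being present, an observation that is astronomically unlikely by chance but plausible under design pushes the posterior close to 1. A minimal sketch in Python; all of the probabilities are invented for illustration and are not from the comment:

```python
def posterior_intelligence(prior, p_obs_given_design, p_obs_given_chance):
    """Bayes' rule: posterior probability that an optimization
    process (rather than chance) produced the observed artifact."""
    numerator = p_obs_given_design * prior
    denominator = numerator + p_obs_given_chance * (1 - prior)
    return numerator / denominator

# Illustrative numbers (assumptions):
posterior = posterior_intelligence(
    prior=1e-6,               # assumed prior that an optimizer is present at all
    p_obs_given_design=1e-2,  # an optimizer plausibly builds such a conductor
    p_obs_given_chance=1e-30, # vanishingly unlikely to arise by chance
)
print(posterior)  # very close to 1
```

The exact numbers do not matter much; what drives the conclusion is the enormous likelihood ratio between "design" and "chance" explanations of the observation.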