Kawoomba comments on Thoughts On The Relationship Between Life and Intelligence - Less Wrong

-4 [deleted] 14 March 2013 04:51PM


Comments (33)


Comment author: [deleted] 14 March 2013 05:32:19PM 0 points [-]

To be intelligent, a system has to have goals - it has to be an agent. (I don't think this is controversial).

Is a newborn baby human, or a human of any age who is asleep, intelligent by this definition?

Comment author: Kawoomba 14 March 2013 05:38:30PM 0 points [-]

Do goals always have to be consciously chosen? When you have simple if-then clauses, such as "if (stimulusOnLips) then StartSuckling()", doesn't that count as goal-fulfilling behavior? Even a sleeping human is carrying out an endless stream of maintenance tasks, in non-conscious pursuit of a goal such as "keep the body in working order". Does that count?

I can see "goal" being sensibly defined either way, so it may be best not to insist on "must be consciously formulated" for the purposes of this post, and simply move on.

Comment author: Qiaochu_Yuan 14 March 2013 05:55:29PM *  1 point [-]

My impression is that this is not how AI researchers use the word "goal." The kind of agent you're describing is a "reflex agent": it acts only based on the current percept. A goal-directed agent is explicitly one that models the world, extrapolates future states of the world, and takes action to cause future states of the world to be a certain way. To model the world accurately, in particular, a goal-directed agent must take into account all of its past percepts.
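The distinction being drawn here can be sketched in code. This is a toy illustration in the spirit of the standard AI-textbook agent taxonomy, not anyone's actual implementation; all names and the one-line world model are hypothetical:

```python
def reflex_agent(percept):
    """Reflex agent: acts only on the current percept, with no memory
    or world model -- like the suckling if-then clause above."""
    if percept == "stimulus_on_lips":
        return "start_suckling"
    return "do_nothing"


class GoalDirectedAgent:
    """Goal-directed agent: remembers past percepts, predicts the
    outcome of each candidate action, and picks one whose predicted
    outcome matches the goal."""

    def __init__(self, goal):
        self.goal = goal
        self.history = []  # all past percepts feed the world model

    def predict(self, state, action):
        # Deliberately trivial world model for illustration:
        # taking an action is assumed to bring about the state it names.
        return action

    def act(self, percept, actions):
        self.history.append(percept)
        for action in actions:
            if self.predict(percept, action) == self.goal:
                return action
        return actions[0]  # fallback if no action reaches the goal
```

The reflex agent's behavior is a pure function of its current input; the goal-directed agent's behavior depends on a goal, a (here trivial) model, and accumulated history.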

Comment author: Kawoomba 14 March 2013 06:04:19PM 0 points [-]

Goal-based agents are something quite specific in AI, but it is not clear that we should use that particular definition whenever referring to goals/aims/purpose. I'm fine with choosing it and going with that - avoiding definitional squabbles - but it wasn't clear prima facie (hence the grandparent).

Comment author: IsaacLewis 14 March 2013 06:14:28PM 0 points [-]

No, they don't have to be consciously chosen. The classic example of a simple agent is a thermostat (http://en.wikipedia.org/wiki/Intelligent_agent), which has the goal of keeping the room at a constant temperature. (Or you can say "describing the thermostat as having a goal of keeping the temperature constant is a simpler means of predicting its behaviour than describing its inner workings"). Goals are necessary but not sufficient for intelligence.
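The thermostat example can be made concrete. A minimal sketch (set-point and tolerance values are made up for illustration): describing this device as "having the goal of holding the room near 20°C" predicts its behavior without reference to the mechanism below.

```python
def thermostat(temperature, set_point=20.0, tolerance=0.5):
    """Minimal thermostat: switches heating to keep temperature
    near set_point. The 'goal' is implicit in the comparisons."""
    if temperature < set_point - tolerance:
        return "heat_on"
    if temperature > set_point + tolerance:
        return "heat_off"
    return "idle"
```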

Comment author: Kawoomba 14 March 2013 06:18:29PM 0 points [-]

Which answers Trevor's initial question.