pjeby comments on Open Thread: July 2009 - Less Wrong

3 [deleted] 02 July 2009 04:00AM

Comment author: pjeby 07 July 2009 04:16:19PM  6 points

For example, it seems that as soon as a computer can reliably outperform humans at some task, we drop that task from our intuitive definition of "task demonstrating true intelligence".

And the reason for that is simple: the real working definition of "intelligence" in our brains is something like "that invisible quality our built-in detectors label as 'mind' or 'agency'". That is, intelligence is an assumed property of things that trip our "agent" detector, not a real physical quality.

Intuitively, we can think of something as intelligent only to the extent that it seems "animate". If we discover that the thing is not "animate", our built-in detectors stop considering it an agent, in much the same way we stopped believing in wind spirits after figuring out the weather. Those detectors evolved for a practical job: historically, we needed to discern an accidental branch movement from the activity of an intelligent predator-agent.

So, even though a person without the appropriate understanding might perceive a thermostat as displaying intelligent behavior, as soon as they understand the thermostat's workings as a mechanical device, the brain stops labeling it as animate, and therefore no longer considers it "intelligent".
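The point is easy to see once the mechanism is spelled out. A sketch of what a thermostat actually does, as a bang-bang (on/off) feedback loop; the function name, setpoint, and the toy room model are all illustrative assumptions, not any particular device's firmware:

```python
# A minimal sketch of a thermostat as a bang-bang (on/off) controller.
# All names and constants here are illustrative assumptions.

def thermostat_step(current_temp, setpoint, heater_on, hysteresis=0.5):
    """Return whether the heater should be on for the next time step."""
    if current_temp < setpoint - hysteresis:
        return True    # too cold: turn the heater on
    if current_temp > setpoint + hysteresis:
        return False   # too warm: turn the heater off
    return heater_on   # inside the deadband: keep the current state

# Toy room model: the heater adds heat; the room leaks heat outdoors.
temp, heater = 15.0, False
for _ in range(100):
    heater = thermostat_step(temp, setpoint=20.0, heater_on=heater)
    temp += (1.0 if heater else 0.0) - 0.1 * (temp - 10.0)
```

Run it and the temperature settles near the setpoint. Seen from outside, the device "wants" the room at 20 degrees; seen from inside, it is two comparisons and a switch — and the moment you see the second description, the first stops feeling like intelligence.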

This is one reason why it's really hard for truly reductionist psychologies to catch on: the brain resists grasping itself as mechanical, and insists on projecting "intelligence" onto its own mechanical processes. (Which is why we have oxymoronic terms like "unconscious mind", and why the first response many people have to PCT (Perceptual Control Theory) ideas is that their controllers are hostile entities trying to "control" them in the way a human agent might, rather than in the way a thermostat does.)

So, AI will always be in retreat, because anything we can understand mechanically, our brain will refuse to grant that elusive label of "mind". To our brains, something mechanically grasped cannot be an agent. (Which may lead to interesting consequences when we eventually fully grasp ourselves.)