Nornagest comments on The flawed Turing test: language, understanding, and partial p-zombies - Less Wrong

Post author: Stuart_Armstrong 17 May 2013 02:02PM (11 points)


Comment author: Nornagest 18 May 2013 10:44:59PM * (3 points)

> I seem to ascribe emotions to a system -- more generally, I ascribe cognitive states, motives, and an internal mental life to a system -- when its behavior is too complicated for me to account for with models that don't include such things.

This isn't quite a fully baked idea yet, but personlike agents are so ubiquitous in human modeling of complex systems that I suspect they're a default of some kind -- and that this doesn't necessarily indicate a lack of deep understanding of a system's behavior. Programmers often talk about software they're working on in agent-like terms -- the component remembers this, knows about that, has such-and-such a purpose in life -- but this doesn't correlate with imperfect understanding of the software; it's just a convenient way of thinking about the problem. Likewise for people -- I'm not a psychologist or a neuroscientist, but I doubt people in those professions think of their fellows' emotions as less real for understanding them better than I do.

(The main alternative for complex systems modeling seems to be thinking of systems as an extension of the self or another agent, which seems to crop up mostly for systems tightly controlled by those agents. Cars are a good example -- I don't say "where is my car parked?", I say "where am I parked?".)

Comment author: [deleted] 19 May 2013 01:56:51PM (1 point)

> This isn't quite a fully baked idea yet, but personlike agents are so ubiquitous in human modeling of complex systems that I suspect they're a default of some kind -- and that this doesn't necessarily indicate a lack of deep understanding of a system's behavior. Programmers often talk about software they're working on in agent-like terms -- the component remembers this, knows about that, has such-and-such a purpose in life -- but this doesn't correlate with imperfect understanding of the software; it's just a convenient way of thinking about the problem. Likewise for people -- I'm not a psychologist or a neuroscientist, but I doubt people in those professions think of their fellows' emotions as less real for understanding them better than I do.

See also