army1987 comments on The flawed Turing test: language, understanding, and partial p-zombies - Less Wrong

11 Post author: Stuart_Armstrong 17 May 2013 02:02PM




Comment author: [deleted] 19 May 2013 01:56:51PM 1 point

This isn't quite a fully baked idea yet, but person-like agents are so ubiquitous in human modeling of complex systems that I suspect they're a default of some kind -- and that this doesn't necessarily indicate a lack of deep understanding of a system's behavior. Programmers often talk about software they're working on in agent-like terms -- the component remembers this, knows about that, has such-and-such a purpose in life -- but this doesn't correlate with imperfect understanding of the software; it's just a convenient way of thinking about the problem. Likewise for people -- I'm not a psychologist or a neuroscientist, but I doubt people in those professions think of their fellows' emotions as less real for understanding them better than I do.
