Bugmaster comments on The flawed Turing test: language, understanding, and partial p-zombies - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I wasn't intending to make a claim quite that broad; but now that you mention it, I am going to answer "yes" -- because in the process of attempting to predict the behavior, we will inevitably end up building some model of the agent. This is no different from predicting the behaviors of, say, rocks.
If I see an object whose behavior is entirely consistent with that of a roughly round rock massing about 1kg, I'm going to go ahead and assume that it's a round-ish 1kg rock. In reality, this particular rock may be an alien spaceship in disguise, or in fact all rocks could be alien spaceships in disguise, but I'm not going to jump to that conclusion until I have some damn good reasons to do so.
My point is not that the Turing Test is a serious high-caliber philosophical tool, but rather that the question "is this agent a person" is a lot simpler than philosophers make it out to be.