hen comments on The flawed Turing test: language, understanding, and partial p-zombies - Less Wrong
Certainly the Turing test can be viewed as an operationalization of "does this machine think?". No argument there. I also agree with you concerning what Turing probably had in mind.
The problem is that if we have in mind (perhaps not even explicitly) some different definition of thinking or, gods forbid, some other property entirely, like "consciousness", then the Turing test immediately stops being of much use.
Here is a related thing. John Searle, in his essay "Minds, Brains, and Programs" (where he presents the famous "Chinese room" thought experiment), claims that even if you a) place the execution of the "Chinese room" program into a robot body, which is then able to converse with you in Chinese, or b) simulate the entire brain of a native Chinese speaker neuron-by-neuron, and optionally put that into a robot body, you will still not have a system that possesses true understanding of Chinese.
Now, taken to its logical extreme, this is surely an absurd position to take in practice. We can imagine a scenario where Searle meets a man on the street, strikes up a conversation (perhaps in Chinese), and spends some time discoursing with the articulate stranger on various topics from analytic philosophy to dietary preferences, getting to know the man and being impressed with his depth of knowledge and originality of thought. Then, at some point, the stranger reaches up and presses a hidden button behind his ear, causing the top of his skull to pop open and reveal that he is in fact a robot with an electronic brain! Dun dun dun! He then hands Searle a booklet detailing his design specs and containing the entirety of his brain's source code (in very fine print), at which point Searle declares that the stranger's half of the entire conversation up to that point has been nothing but the meaningless blatherings of a mindless machine, entirely devoid of any true understanding.
It seems fairly obvious to me that such entities would, like humans, be beneficiaries of what Turing called "the polite convention" that people do, in fact, think (which is what lets us not be troubled by the problem of other minds in day-to-day life). But if someone like John Searle were to insist that we nonetheless have no direct evidence for the proposition that the robots in question do "think", I don't see that we would have a good answer for him. (Searle's insistence that we shouldn't question whether humans can think is, of course, hypocritical, but that is not relevant here.) Social conventions to treat something as being true do not constitute a demonstration that said thing is actually true.
This seems like a slightly uncharitable reading of Searle's position.
Searle's steadfast refusal to consider perfectly reasonable replies to his position, and his general recalcitrance in the debate on this and related questions, make him unusually vulnerable to slightly uncharitable readings. The fact that his justification seems to be "human brains have unspecified magic that makes humans conscious, and no, I will not budge from that position because I have very strong intuitions" means, I think, that my reading is not even very uncharitable.