Stuart_Armstrong comments on The flawed Turing test: language, understanding, and partial p-zombies - Less Wrong

Post author: Stuart_Armstrong 17 May 2013 02:02PM




Comment author: SaidAchmiz 20 May 2013 07:27:16PM 3 points

Certainly the Turing test can be viewed as an operationalization of "does this machine think?". No argument there. I also agree with you concerning what Turing probably had in mind.

The problem is that if we have in mind (perhaps not even explicitly) some different definition of thinking or, gods forbid, some other property entirely, like "consciousness", then the Turing test immediately stops being of much use.

Here is a related thing. John Searle, in his essay "Minds, Brains, and Programs" (where he presents the famous "Chinese room" thought experiment), claims that even if you a) place the execution of the "Chinese room" program into a robot body, which is then able to converse with you in Chinese, or b) simulate the entire brain of a native Chinese speaker neuron-by-neuron, and optionally put that into a robot body, you will still not have a system that possesses true understanding of Chinese.

Now, taken to its logical extreme, this is surely an absurd position to take in practice. We can imagine a scenario where Searle meets a man on the street, strikes up a conversation (perhaps in Chinese), and spends some time discoursing with the articulate stranger on various topics from analytic philosophy to dietary preferences, getting to know the man and being impressed with his depth of knowledge and originality of thought, until at some point, the stranger reaches up and presses a hidden button behind his ear, causing the top of his skull to pop open and reveal that he is in fact a robot with an electronic brain! Dun dun dun! He then hands Searle a booklet detailing his design specs and also containing the entirety of his brain's source code (in very fine print), at which point Searle declares that the stranger's half of the entire conversation up to that point has been nothing but the meaningless blatherings of a mindless machine, devoid entirely of any true understanding.

It seems fairly obvious to me that such entities would, like humans, be beneficiaries of what Turing called "the polite convention" that people do, in fact, think (which is what lets us not be troubled by the problem of other minds in day-to-day life). But if someone like John Searle were to insist that we nonetheless have no direct evidence for the proposition that the robots in question do "think", I don't see that we would have a good answer for him. (Searle's insistence that we shouldn't question whether humans can think is, of course, hypocritical, but that is not relevant here.) Social conventions to treat something as being true do not constitute a demonstration that said thing is actually true.

Comment author: Stuart_Armstrong 21 May 2013 12:03:00PM 0 points

I agree with you concerning Searle's errors (see my takes on Searle at http://lesswrong.com/lw/ghj/searles_cobol_room/ and http://lesswrong.com/lw/gyx/ai_prediction_case_study_3_searles_chinese_room/).

I think the differences between us are rather small, in fact. I do have a different definition of thinking, which is not fully explicit. It would go along the lines of "a thinking machine should demonstrate human-like abilities in most situations and not be extremely stupid in some areas". The intuition is that if there is a general intelligence, rather than simply a list of specific rules, then its competence shouldn't completely collapse when facing unusual situations.

The "test systems on situations they're not optimised for" approach was trying to establish whether there would be such a collapse in skill. Of course you can't test for every situation, but you can get a good idea this way.