There is a problem with the Turing test, practically and philosophically, and I would be willing to bet that the first entity to pass the test will not be conscious, or intelligent, or have whatever spark or quality the test is supposed to measure. And I hold this position while fully embracing materialism, and rejecting p-zombies or epiphenomenalism.
The problem is Campbell's law (or Goodhart's law):
"The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."
This applies to more than social indicators. To illustrate, imagine that you were a school inspector, tasked with assessing the all-round education of a group of 14-year-old students. You engage them on the French revolution and they respond with pertinent contrasts between the Montagnards and Girondins. Your quizzes about the properties of prime numbers are answered with impressive speed, and, when asked, they can all play quite passable pieces from "Die Zauberflöte".
You feel tempted to give them the seal of approval... but then you learn that the principal had been expecting your questions (you don't vary them much), and that, in fact, the whole school has spent the last three years doing nothing but studying 18th century France, number theory and Mozart operas - day after day after day. Now you're less impressed. You can still conclude that the students have some technical ability, but you can't assess their all-round level of education.
The Turing test functions in the same way. Imagine no-one had heard of the test, and someone created a putative AI, designing it to, say, track rats efficiently across the city. You sit this anti-rat-AI down and give it a Turing test - and, to your astonishment, it passes. You could now conclude that it was (very likely) a genuinely conscious or intelligent entity.
But this is not the case: nearly everyone's heard of the Turing test. So the first machines to pass will be dedicated systems, specifically designed to get through the test. Their whole setup will be constructed to maximise "passing the test", not "being intelligent" or whatever we want the test to measure (the fact that we have trouble stating exactly what the test should be measuring shows the difficulty here).
Of course, this is a matter of degree, not of kind: a machine that passed the Turing test would still be rather nifty, and as the test got longer, and more complicated, as the interactions between subject and judge got more intricate, our confidence that we were facing a truly intelligent machine would increase.
But degree can go a long way. Watson won on Jeopardy without exhibiting any of the skills of a truly intelligent being - apart from one: answering Jeopardy questions. With the rise of big data and statistical algorithms, I would certainly rate it as plausible that we could create beings that are nearly indistinguishable from conscious ones from a (textual) linguistic perspective. These "super-chatterbots" could only be identified as such with long and tedious effort. And yet they would demonstrate none of the other attributes of intelligence: chattering is all they're any good at (if you ask them to do any planning, for instance, they'll come up with designs that sound good but fail: they parrot back other people's plans with minimal modifications). These would be the closest plausible analogues to p-zombies.
The best way to avoid this is to create more varied analogues of the Turing test - and to keep them secret. Just as you keep the training set and the test set distinct in machine learning, you want to confront the putative AIs with quasi-Turing tests that their designers will not have encountered or planned for. Mix up the test conditions, add extra requirements, change what is being measured, do something completely different, be unfair: do things that a genuine intelligence would deal with, but an overtrained narrow statistical machine couldn't.
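The train/test analogy above can be made concrete with a toy sketch (all names and question sets here are hypothetical, invented purely for illustration): a "bot" that has memorized the public test questions scores perfectly on them, and collapses on a secret held-out test.

```python
# Toy illustration of Campbell's law applied to the Turing test:
# a bot optimized against a known test passes it without the
# underlying capability the test was meant to measure.

# Questions the designers knew would be asked (the "training set").
known_test = {
    "What is the capital of France?": "Paris",
    "Who wrote Hamlet?": "Shakespeare",
}

def memorizing_bot(question):
    # Pure lookup -- the analogue of training on the test itself.
    return known_test.get(question, "That's an interesting question!")

def score(bot, qa_pairs):
    # Fraction of questions answered correctly.
    return sum(bot(q) == a for q, a in qa_pairs.items()) / len(qa_pairs)

public_score = score(memorizing_bot, known_test)

# A secret quasi-Turing test the designers never saw (the "test set").
secret_test = {
    "What is 7 times 6?": "42",
    "Name a Mozart opera.": "Die Zauberflöte",
}
secret_score = score(memorizing_bot, secret_test)

print(public_score, secret_score)  # 1.0 on the public test, 0.0 on the secret one
```

The gap between the two scores is exactly the gap between "passing the test" and whatever the test was supposed to measure; keeping `secret_test` unseen is what makes the second number informative.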
Not necessarily. Theoretically, one could have very specific knowledge of Chinese, possibly acquired from very limited but deep experience. Imagine a person who has spoken Chinese only at the harbor, and has complete and total mastery of the maritime vocabulary of Chinese, but lacks all but the simplest verbs relevant to the conversations happening just a mile further inland. Conceivably, a series of experts in very localized domains could separately contribute their understanding, perhaps governed by a person who understands (in English) every conceivable key to the GLUT (giant lookup table), but does not understand the values which must be placed in it.
Then, imagine someone whose entire knowledge of Chinese is the translation of the phrase: "Does my reply make sense in the context of this conversation?" This person takes an arbitrary amount of time, randomly combining phonemes and carrying out every conceivable conversation with an unlimited supply of Chinese speakers. (This is substantially more realistic if many people work in parallel on a domain with far fewer potential combinations than a full language.) Through perhaps the least efficient trial and error possible, they learn to carry on a conversation by rote, keeping only those conversational threads which, through pure chance, make sense throughout the entire dialogue.
In neither of these human experts do we find a real understanding of Chinese. It could be said that the understandings of the domain experts combine to form one great understanding, but the inefficient trial-and-error GLUT manufacturers certainly do not have any understanding, merely memory.
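The rote-memorized conversation table described above can be sketched in a few lines (the entries are hypothetical, standing in for the threads the trial-and-error process happened to keep): replies are keyed on the entire conversation so far, so the table exhibits memory without anything resembling understanding.

```python
# A minimal GLUT (giant lookup table) sketch: each key is a full
# conversation history, each value a reply that happened to "make
# sense" when the table was built by trial and error.

glut = {
    ("你好",): "你好！",
    ("你好", "你好！", "你是水手吗？"): "是的，我在港口工作。",
}

def glut_reply(history):
    # Pure lookup: the "room" has no model of meaning, only stored threads.
    return glut.get(tuple(history))

reply = glut_reply(["你好"])
# Any history the manufacturers never enumerated simply has no entry:
unknown = glut_reply(["你好", "你好！", "内陆的农场怎么样？"])
print(reply, unknown)
```

Because the keys are whole histories, the table's size grows exponentially with dialogue length, which is why the trial-and-error construction is so staggeringly inefficient, and why what it stores is memory rather than understanding.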
I agree on the basic point, but then my deeper point was that somewhere down the line you'll find the intelligence(s) that created a high-fidelity converter for an arbitrary amount of information from one format to another. Searle is free to claim that the system does not understand Chinese, but its very function could only have been imparted by parties who collectively speak Chinese very well, making the room at the very least a medium of communication utilizing this understanding.