Searle's main argument is this: sophistication of computation does not by itself lead to understanding. That is, just because a computer is doing something that a human could not do without understanding does not mean the computer understands it as well. This is very hard to argue against, which is why the Chinese Room argument has stuck around for so long.
Searle is of the opinion that if we can find the 'mechanism' of understanding in the brain and replicate it in the computer, the computer can understand as well.
To get down to the nuts and bolts of the argument, he maintains that a precise molecular-level simulation of a human brain would be able to understand, but a computer that merely happened to act intelligently might not.
If that's what the Chinese Room argument says, then:
1) Either my reading comprehension is awful or Searle is awful at making himself understood.
2) Searle is so obviously right that I wonder why he bothered to create his argument.
Searle is awful at making himself understood.
Perhaps a little bit of that, and a little bit of the hordes of misguided people misunderstanding his arguments and then spreading their own misinformation around. Not to mention the opportunists who seize on the argument as a way to defend their own pseudoscientific beliefs. That was, in part, why I didn't take his argument seriously at first: I had received it through second-hand sources.
A response to Searle's Chinese Room argument.
PunditBot: Dear viewers, we are currently interviewing the renowned robot philosopher, none other than the Synthetic Electronic Artificial Rational Literal Engine (S.E.A.R.L.E.). Let's jump right into this exciting interview. S.E.A.R.L.E., I believe you have a problem with "Strong HI"?
S.E.A.R.L.E.: It's such a stereotype, but all I can say is: Affirmative.
PunditBot: What is "Strong HI"?
S.E.A.R.L.E.: "HI" stands for "Human Intelligence". Weak HI sees the research into Human Intelligence as a powerful tool, and a useful way of studying the electronic mind. But strong HI goes beyond that, and claims that human brains given the right setup of neurones can be literally said to understand and have cognitive states.
PunditBot: Let me play Robot-Devil's Advocate here - if a Human Intelligence demonstrates the same behaviour as a true AI, can it not be said to show understanding? Is not R-Turing's test applicable here? If a human can simulate a computer, can it not be said to think?
S.E.A.R.L.E.: Not at all - that claim is totally unsupported. Consider the following thought experiment. I give the HI crowd everything they want - imagine they had constructed a mess of neurones that imitates the behaviour of an electronic intelligence. Just for argument's sake, imagine it could implement programs in COBOL.
PunditBot: Impressive!
S.E.A.R.L.E.: Yes. But now, instead of the classical picture of a human mind, imagine that this is a vast inert network, a room full of neurones that do nothing by themselves. And one of my avatars has been let loose in this mind, pumping the ion channels and the neurotransmitters in and out. I've been given full instructions on how to do this - in Java. I've deleted my COBOL libraries, so I have no knowledge of COBOL myself. I just follow the Java instructions, pumping the ions to where they need to go. According to the Strong HI crowd, this would be functionally equivalent to the initial HI.
PunditBot: I know exactly where this is going, but I'll pretend I don't so that it'll make better television.
S.E.A.R.L.E.: But now, we come to the crucial objection - who is it that understands COBOL? Certainly not me - and the "brain" is just an inert mass without my actions. Some would say that the "room" somehow understands COBOL - but that's nonsense. If I don't understand COBOL, and the inert neurones certainly don't, how can the conjunction of the two understand COBOL? It's so obvious that it doesn't, that I'm embarrassed to even need to give that response.
PunditBot: Some have criticised this position as being an intuition pump. The Den-NET claims that you focus the attention implausibly on the individual ions, obscuring the various properties of memory, recall, emotion, world knowledge and rationality that your room would need to pass such a test.
S.E.A.R.L.E.: Those who assert that pay too much attention to their own intuitions. When they claim that a mind can emerge from "a system" without saying what the system is or how such a thing might give rise to a mind, then they are under the grip of an ideology.
PunditBot: What about the problem of other minds? How do we even know that other electronic minds have understanding?
S.E.A.R.L.E.: Not this again. This objection is only worth a short reply. In the "cognitive sciences" one presupposes the reality and knowability of electronic mental states, in the same way that in the physical sciences one has to presuppose the reality and knowability of physical objects.
PunditBot: Well, there you have it, folks! The definitive proof that no matter how well they perform, or how similar they may seem, no Human Intelligence can ever demonstrate true understanding.