A response to Searle's Chinese Room argument.
PunditBot: Dear viewers, we are currently interviewing the renowned robot philosopher, none other than the Synthetic Electronic Artificial Rational Literal Engine (S.E.A.R.L.E.). Let's jump right into this exciting interview. S.E.A.R.L.E., I believe you have a problem with "Strong HI"?
S.E.A.R.L.E.: It's such a stereotype, but all I can say is: Affirmative.
PunditBot: What is "Strong HI"?
S.E.A.R.L.E.: "HI" stands for "Human Intelligence". Weak HI sees the research into Human Intelligence as a powerful tool, and a useful way of studying the electronic mind. But strong HI goes beyond that, and claims that human brains given the right setup of neurones can be literally said to understand and have cognitive states.
PunditBot: Let me play Robot-Devil's Advocate here - if a Human Intelligence demonstrates the same behaviour as a true AI, can it not be said to show understanding? Is not R-Turing's test applicable here? If a human can simulate a computer, can it not be said to think?
S.E.A.R.L.E.: Not at all - that claim is totally unsupported. Consider the following thought experiment. I give the HI crowd everything they want - imagine they had constructed a mess of neurones that imitates the behaviour of an electronic intelligence. Just for argument's sake, imagine it could implement programs in COBOL.
PunditBot: Impressive!
S.E.A.R.L.E.: Yes. But now, instead of the classical picture of a human mind, imagine that this is a vast inert network, a room full of neurones that do nothing by themselves. And one of my avatars has been let loose in this mind, pumping in and out the ion channels and the neurotransmitters. I've been given full instructions on how to do this - in Java. I've deleted my COBOL libraries, so I have no knowledge of COBOL myself. I just follow the Java instructions, pumping the ions to where they need to go. According to the Strong HI crowd, this would be functionally equivalent to the initial HI.
PunditBot: I know exactly where this is going, but I'll pretend I don't so that it'll make better television.
S.E.A.R.L.E.: But now, we come to the crucial objection - who is it that understands COBOL? Certainly not me - and the "brain" is just an inert mass without my actions. Some would say that the "room" somehow understands COBOL - but that's nonsense. If I don't understand COBOL, and the inert neurones certainly don't, how can the conjunction of the two understand COBOL? It's so obvious that it doesn't, that I'm embarrassed to even need to give that response.
PunditBot: Some have criticised this position as being an intuition pump. The Den-NET claims that you focus attention implausibly on the individual ions, obscuring the various properties of memory, recall, emotion, world knowledge and rationality that your room would need to pass such a test.
S.E.A.R.L.E.: Those who assert that pay too much attention to their own intuitions. When they claim that a mind can emerge from "a system" without saying what the system is or how such a thing might give rise to a mind, then they are under the grip of an ideology.
PunditBot: What about the problem of other minds? How do we even know that other electronic minds have understanding?
S.E.A.R.L.E.: Not this again. This objection is only worth a short reply. In "cognitive sciences" one presupposes the reality and knowability of electronic mental states, in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects.
PunditBot: Well, there you have it, folks! The definitive proof that no matter how well they perform, or how similar they may seem, no Human Intelligence can ever demonstrate true understanding.
The Chinese Room argument is actually pretty good if you read it as a criticism of suggestively named LISP tokens, which I think were popular roughly around that time. But of course, it fails completely once you try to make it into a general proof of why computers can't think. Then again, Searle didn't claim it was impossible for computers to think; he only said that they'd need "causal powers" similar to those of the human brain.
Also, the argument that "When they claim that a mind can emerge from "a system" without saying what the system is or how such a thing might give rise to a mind, then they are under the grip of an ideology" is actually pretty reasonable. Steelmanned, the Chinese Room would be an attack on people who were putting together suggestively named tokens and building systems that could perform crude manipulations on their input and then claiming that this was major progress towards building a mind, while having no good theory of why exactly these particular kinds of symbol manipulations should be expected to produce a mind.
Or look at something like SHRDLU: it's superficially very impressive and gives an impression that you're dealing with something intelligent, but IIRC, it was just a huge bunch of hand-coded rules for addressing various kinds of queries, and the approach didn't scale to more complex domains because the number of rules you'd have needed to program in would have blown up. In the context of programs like those, Searle's complaints about dumb systems that do symbol manipulation without any real understanding of what they're doing make a lot more sense.
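To make the scaling problem concrete, here's a minimal sketch in Python (not SHRDLU's actual code - the query patterns and the toy blocks-world state are invented for illustration) of the hand-coded rule style such systems relied on; every new kind of question needs its own hand-written rule, which is roughly why the approach blows up:

```python
import re

# Toy "blocks world" state, purely illustrative.
world = {"block1": {"colour": "red", "on": "table"},
         "block2": {"colour": "blue", "on": "block1"}}

# Each query type gets its own hand-written pattern and handler;
# broader coverage means writing ever more rules by hand.
RULES = [
    (re.compile(r"what colour is (\w+)\??", re.I),
     lambda m: world.get(m.group(1), {}).get("colour", "I don't know")),
    (re.compile(r"what is on (\w+)\??", re.I),
     lambda m: next((b for b, props in world.items()
                     if props["on"] == m.group(1)), "nothing")),
]

def answer(query: str) -> str:
    for pattern, handler in RULES:
        m = pattern.match(query.strip())
        if m:
            return handler(m)
    return "I don't understand the question."

print(answer("What colour is block1?"))  # -> red
print(answer("What is on block1?"))      # -> block2
print(answer("Why is the sky blue?"))    # -> I don't understand the question.
```

The rules manipulate symbols without anything resembling understanding, which is exactly the kind of system the steelmanned Chinese Room is complaining about.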
Except that if you do word2vec or similar on a huge dataset of (suggestively named or not) tokens you can actually learn a great deal of their semantic relations. It hasn't been fully demonstrated yet, but I think that if you could ground only a small fraction of these tokens to sensory experiences, then you could infer the "meaning" (in an operational sense) of all of the other tokens.
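As a rough illustration of that claim (a toy sketch using the gensim library's Word2Vec on a made-up six-sentence corpus, not any particular published result), a distributional model trained on nothing but bare tokens already pushes words that occur in similar contexts close together:

```python
from gensim.models import Word2Vec

# Tiny invented corpus of tokenised sentences; a real run would use
# billions of tokens, which is where the interesting structure emerges.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["the", "cat", "chased", "the", "mouse"],
    ["the", "dog", "chased", "the", "cat"],
    ["a", "king", "rules", "the", "kingdom"],
    ["a", "queen", "rules", "the", "kingdom"],
]

# Train a small skip-gram model; the parameters are arbitrary toy values,
# and with this little data the similarities will be noisy.
model = Word2Vec(sentences, vector_size=32, window=2, min_count=1,
                 sg=1, epochs=200, seed=0)

# Tokens used in similar contexts ("cat"/"dog", "king"/"queen") end up
# with similar vectors, even though the model only ever sees bare tokens.
print(model.wv.most_similar("cat", topn=3))
print(model.wv.similarity("king", "queen"))
```

If a handful of such tokens were then grounded in sensory data, the learned relational structure is what would let you propagate operational "meaning" to the rest.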