A response to Searle's Chinese Room argument.

PunditBot: Dear viewers, we are currently interviewing the renowned robot philosopher, none other than the Synthetic Electronic Artificial Rational Literal Engine (S.E.A.R.L.E.). Let's jump right into this exciting interview. S.E.A.R.L.E., I believe you have a problem with "Strong HI"?

S.E.A.R.L.E.: It's such a stereotype, but all I can say is: Affirmative.

PunditBot: What is "Strong HI"?

S.E.A.R.L.E.: "HI" stands for "Human Intelligence". Weak HI sees the research into Human Intelligence as a powerful tool, and a useful way of studying the electronic mind. But strong HI goes beyond that, and claims that human brains given the right setup of neurones can be literally said to understand and have cognitive states.

PunditBot: Let me play Robot-Devil's Advocate here - if a Human Intelligence demonstrates the same behaviour as a true AI, can it not be said to show understanding? Is not R-Turing's test applicable here? If a human can simulate a computer, can it not be said to think?

S.E.A.R.L.E.: Not at all - that claim is totally unsupported. Consider the following thought experiment. I give the HI crowd everything they want - imagine they had constructed a mess of neurones that imitates the behaviour of an electronic intelligence. Just for argument's sake, imagine it could implement programs in COBOL.

PunditBot: Impressive!

S.E.A.R.L.E.: Yes. But now, instead of the classical picture of a human mind, imagine that this is a vast inert network, a room full of neurones that do nothing by themselves. And one of my avatars has been let loose in this mind, pumping the ion channels and the neurotransmitters in and out. I've been given full instructions on how to do this - in Java. I've deleted my COBOL libraries, so I have no knowledge of COBOL myself. I just follow the Java instructions, pumping the ions to where they need to go. According to the Strong HI crowd, this would be functionally equivalent to the original HI.

PunditBot: I know exactly where this is going, but I'll pretend I don't so that it'll make better television.

S.E.A.R.L.E.: But now, we come to the crucial objection - who is it that understands COBOL? Certainly not me - and the "brain" is just an inert mass without my actions. Some would say that the "room" somehow understands COBOL - but that's nonsense. If I don't understand COBOL, and the inert neurones certainly don't, how can the conjunction of the two understand COBOL? It's so obvious that it doesn't, that I'm embarrassed to even need to give that response.

PunditBot: Some have criticised this position as being an intuition pump. The Den-NET claims that you focus attention implausibly on the individual ions, obscuring the properties of memory, recall, emotion, world knowledge and rationality that your room would need in order to pass such a test.

S.E.A.R.L.E.: Those who assert that pay too much attention to their own intuitions. When they claim that a mind can emerge from "a system" without saying what the system is or how such a thing might give rise to a mind, then they are under the grip of an ideology.

PunditBot: What about the problem of other minds? How do we even know that other electronic minds have understanding?

S.E.A.R.L.E.: Not this again. This objection is only worth a short reply. In "cognitive sciences" one presupposes the reality and knowability of electronic mental states, in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects.

PunditBot: Well, there you have it, folks! Definitive proof that no matter how well they perform, or how similar they may seem, no Human Intelligence can ever demonstrate true understanding.


The Chinese Room argument is actually pretty good if you read it as a criticism of suggestively named LISP tokens, which I think were popular roughly around that time. But of course, it fails completely once you try to make it into a general proof of why computers can't think. Then again, Searle didn't claim it impossible for computers to think; he just said that they'd need "causal powers" similar to those of the human brain.

Also, the argument that "When they claim that a mind can emerge from "a system" without saying what the system is or how such a thing might give rise to a mind, then they are under the grip of an ideology" is actually pretty reasonable. Steelmanned, the Chinese Room would be an attack on people who were putting together suggestively named tokens and building systems that could perform crude manipulations on their input and then claiming that this was major progress towards building a mind, while having no good theory of why exactly these particular kinds of symbol manipulations should be expected to produce a mind.

Or look at something like SHRDLU: it's superficially very impressive and gives the impression that you're dealing with something intelligent, but IIRC, it was just a huge bunch of hand-coded rules for addressing various kinds of queries, and the approach didn't scale to more complex domains because the number of rules you'd have needed to program in would have blown up. In the context of programs like those, Searle's complaints about dumb systems that do symbol manipulation without any real understanding of what they're doing make a lot more sense.
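
(As a purely illustrative aside, here is a minimal sketch of the kind of hand-coded pattern-rule responder being described; the patterns, replies and domain are invented and are not SHRDLU's actual code. Every new kind of query needs yet another hand-written rule, which is why the rule count blows up as the domain grows.)

```python
import re

# Each "rule" pairs a regex pattern with a canned handler. Coverage grows only
# as fast as programmers add rules, and never generalises beyond the listed cases.
RULES = [
    (re.compile(r"what colou?r is the (\w+)\??", re.I),
     lambda m: f"The {m.group(1)} is red."),
    (re.compile(r"pick up the (\w+)\??", re.I),
     lambda m: f"OK, I am picking up the {m.group(1)}."),
    (re.compile(r"is the (\w+) on the (\w+)\??", re.I),
     lambda m: f"I don't know whether the {m.group(1)} is on the {m.group(2)}."),
]

def respond(query: str) -> str:
    """Answer a query by firing the first matching rule, if any."""
    for pattern, handler in RULES:
        match = pattern.fullmatch(query.strip())
        if match:
            return handler(match)
    return "I don't understand."

print(respond("What color is the block?"))   # matched by a rule
print(respond("Pick up the pyramid"))        # matched by a rule
print(respond("Why did you do that?"))       # no rule: falls through
```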

The Chinese Room argument is actually pretty good if you read it as a criticism of suggestively named LISP tokens

Yep, that's the good prediction I managed to extract from the paper in my case studies :-)

Which reminds me, I really should get around to reading the case studies. Tomorrow on the train back home, at the latest.

V_V:

Except that if you do word2vec or similar on a huge dataset of (suggestively named or not) tokens, you can actually learn a great deal of their semantic relations. It hasn't been fully demonstrated yet, but I think that if you could ground only a small fraction of these tokens in sensory experiences, then you could infer the "meaning" (in an operational sense) of all of the other tokens.
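
(To make that concrete, here is a minimal sketch, assuming the gensim library and an invented toy corpus; a real demonstration would need a vastly larger dataset, so treat the output as illustrating the shape of the approach rather than a reliable result.)

```python
# Toy illustration of learning semantic relations from token co-occurrence alone.
# Requires gensim (pip install gensim). The corpus below is far too small for
# stable vectors and is only meant to show how the approach is set up.
from gensim.models import Word2Vec

sentences = [
    ["king", "rules", "the", "kingdom"],
    ["queen", "rules", "the", "kingdom"],
    ["man", "walks", "in", "the", "city"],
    ["woman", "walks", "in", "the", "city"],
    ["the", "king", "is", "a", "man"],
    ["the", "queen", "is", "a", "woman"],
]

model = Word2Vec(sentences, vector_size=32, window=3, min_count=1, epochs=200, seed=0)

# The classic analogy query: vector("king") - vector("man") + vector("woman") ~= ?
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```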

wsean:

It took me years to realize that my agreement with the Chinese Room argument was due almost entirely to how convincing Searle sounded standing in front of a classroom, and to his skill at appealing to both common sense and vanity. The argument was flagged as correct and cached, acquired a support structure of evasions and convenient flinches, and so it remained, assumed and unquestioned.

I wish I could remember an exact moment of realization that destroyed the whole thing, roots and all, but I don't. I suspect it was more of a gradual shift, where the support structure was slowly eroded by new ideas (i.e. I started reading LessWrong), and the contradictions were exposed. Until one day it came up in a discussion and I realized that, far from wanting to defend it, I didn't actually believe it anymore.

I should probably go back through my class notes (if I can find them) and see what else I might have carelessly cached.

Whereas I first encountered the Chinese Room idea via Hofstadter and Dennett, and was thus cued to conceive of it as a failure of empathy — Searle asks us to empathize with the human (doing a boring mechanical task) inside of the room, and therefore not to empathize with the (possibly inquisitive and engaged) person+room system.

This was a fun read. Reminds me of Terry Bisson's "They're made out of meat."

(I like variations on these intuition pumps involving questions like whether a depiction of a person in a video recording has understanding/consciousness, and imagining extending the video recordings in various ways until it would feel unreasonable to assert that something with consciousness/understanding isn't there.)

How is Searle's actual response to the accusation that he has just dressed up the Other Minds Problem at all satisfactory? Does anyone find it convincing?

Those who already agreed with his conclusion, much as with p-zombies.

That part of his argument is, in my opinion, the weakest part of his thesis.

Searle's response[1]:

This objection really is only worth a short reply. The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state.

Talk about begging the question...


[1] Searle, John. 1980a. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3, 417-424.

I didn't take Searle's arguments seriously until I actually understood what they were about.

Before anything, I should say that I disagree with Searle's arguments. However, it is important to understand them if we are to have a rational discussion.

Most importantly, Searle does not claim that machines can never understand, or that there is something inherently special about the human brain that cannot be replicated in a computer. He acknowledges that the human brain is governed by physics and is probably subject to the Church-Turing thesis.

Searle's main argument is this: sophistication of computation does not by itself lead to understanding. That is, just because a computer is doing something that a human could not do without understanding does not mean the computer must be understanding it as well. It is very hard to argue against this, which is why the Chinese room argument has stuck around for so long.

Searle is of the opinion that if we can find the 'mechanism' of understanding in the brain and replicate it in the computer, the computer can understand as well.

To get down to the nuts and bolts of the argument, he maintains that a precise molecular-level simulation of a human brain would be able to understand, but a computer that just happened to act intelligent might not be able to.

In my opinion, this argument just hides yet another form of vitalism: the idea that there is something above and beyond the mechanical. However, everywhere in the brain we've looked, we've found just neurons doing simple computations on their inputs. I believe that that is all there is to it - that something with the capabilities of the human brain also has the ability to understand.

However, this is just a belief at this point. There is no way to prove it. There probably will be no way until we can figure out what consciousness is.

So there you have it. The Chinese Room argument is really just another form of the Hard Problem of consciousness. Nothing new to see here.

Taboo "understanding."

It is good to taboo words, but it is also good to criticize the attempts of others to taboo words, if you can make the case that those attempts fail to capture something important.

For example, it seems possible that a computer could predict your actions to high precision, but by running computations so different from the ones that you would have run yourself that the simulated-you doesn't have subjective experiences. (If I understand it correctly, this is the idea behind Eliezer's search for a non-person predicate. It would be good if this is possible, because then a superintelligence could run alternate histories without torturing millions of sentient simulated beings.) If such a thing is possible, then any superficial behavioristic attempt to taboo "subjective experience" will be missing something important.

Furthermore, I can mount this critique of such an attempt without being obliged to taboo "subjective experience" myself. That is, making the critique is valuable even if it doesn't offer an alternative way to taboo "subjective experience".

It's not clear to me that "understanding" means "subjective experience," which is one of several reasons why I think it's reasonable for me to ask that we taboo "understanding."

I didn't mean to suggest that "understanding" means "subjective experience", or to suggest that anyone else was suggesting that.

The only good taboo of understanding I've ever read came from an LW quotes thread, quoting Feynman, quoting Dirac:

I understand what an equation means if I have a way of figuring out the characteristics of its solution without actually solving it.

By this criterion, the Chinese Room might not actually understand Chinese, whereas a human Chinese speaker does: i.e., if you hear all but the last word of a sentence, can you give a much tighter probability distribution over the end of the sentence than maxent over common vocabulary that fits grammatically?
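
(A small illustration of that test, with an invented English mini-corpus standing in for Chinese: compare the entropy of a bigram model's distribution over the final word with a uniform "maxent" baseline over the same vocabulary. The corpus and numbers are made up purely for demonstration.)

```python
import math
from collections import Counter, defaultdict

# Invented toy corpus; the real test would use a large sample of Chinese text.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "the dog chased the cat",
]

# Count bigrams: how often each word follows a given preceding word.
bigrams = defaultdict(Counter)
vocab = set()
for sentence in corpus:
    words = sentence.split()
    vocab.update(words)
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def maxent_distribution():
    """Uniform ("maximum entropy") distribution over the whole vocabulary."""
    return {w: 1 / len(vocab) for w in vocab}

def bigram_distribution(prefix):
    """Distribution over the next word, conditioned on the prefix's last word."""
    counts = bigrams[prefix.split()[-1]]
    total = sum(counts.values())
    if total == 0:  # unseen context: fall back to the uniform baseline
        return maxent_distribution()
    return {w: c / total for w, c in counts.items()}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

prefix = "the dog chased the"
print("bigram entropy:", round(entropy(bigram_distribution(prefix)), 2))  # tighter
print("maxent entropy:", round(entropy(maxent_distribution()), 2))        # broader
```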

I would say I understand a system to the extent that I'm capable of predicting its behavior given novel inputs. Which seems to be getting at something similar to Dirac's version.

if you hear all but the last word of a sentence, can you give a much tighter probability distribution over the end of the sentence than maxent over common vocabulary that fits grammatically?

IIRC, the CR as Searle describes it would include rules for responding to the question "What are likely last words that end this sentence?" in the same way a Chinese speaker would. So presumably it is capable of doing that, if asked.

And, definitionally, of doing so without understanding.

To my way of thinking, that makes the CR a logical impossibility, and reasoning forward from an assumption of its existence can lead to nonsensical conclusions.

Good point--I was thinking of "figuring out the characteristics" fuzzily; but if defined as giving correctly predictive output in response to a given interrogative, the room either does it correctly, or isn't a fully-functioning Chinese Room.

The thing is that the Chinese Room does not represent a system that could never understand. It fails at its task in the thought experiment.

The Chinese Room argument is really just another form of the Hard Problem of consciousness.

This is correct and deserves elaboration.

Searle makes clear his agreement with Brentano that intentionality is the hallmark of consciousness. "Intentionality" here means about-ness, i.e. a semantic relation whereby a word (for example) is about an object. For Searle, all consciousness involves intentionality, and all intentionality either directly involves consciousness or derives ultimately from consciousness. But suppose we also smuggle in the assumption - and for English speakers, this will come naturally - that subjective experience is necessarily entwined with "consciousness". In that case we commit to a view we could summarize as "intentionality if and only if subjective experience."

Now let me admit, Searle never explicitly endorses such a statement, as far as I know. I think it has nothing to recommend it, either. But I do think he believes it, because that would explain so much of what he does explicitly say.

Why do I reject "intentionality if and only if subjective experience"? For one thing, there are simple states of consciousness - moods, for example - that have no intentionality, so subjectivity fails to imply intentionality. Nor can I see any reason that the implication holds in the direction from intentionality to subjectivity.

Searle's arguments fail to show that AIs in the "computationalist" conception can't think about, and talk about, stuff. But then, that just shows that he picked the wrong target. Intentionality is easy. The real question is qualia.

Why do I reject "intentionality if and only if subjective experience"? For one thing, there are simple states of consciousness - moods, for example - that have no intentionality, so subjectivity fails to imply intentionality. Nor can I see any reason that the implication holds in the direction from intentionality to subjectivity.

I think this is a bit confused. It isn't that simple states of consciousness, qualia, etc. imply intentionality, rather that they are prerequisites for intentionality. X if and only if Y just means there can be no X without Y. I'm not familiar enough with Searle to comment on his endorsement of the idea, but it makes sense to me at least that in order to have intention (in the sense of will) an agent would have first to be able to perceive (subjectively, of course) the surroundings/other agents on which it intends to act. You say intentionality is "easy". Okay. But what does it mean to talk of intentionality, without a subject to have the intention?

"Intentionality" is an unfortunate word choice here, because it's not primarily about intention in the sense of will. Blame Brentano, and Searle for following him, for that word choice. Intentionality means aboutness, i.e. a semantic relation between word and object, belief and fact, or desire and outcome. The last example shows that intention in the sense of will is included within "intentionality" as Searle uses it, but it's not the only example. Your argument is still plausible and relevant, and I'll try to reply in a moment.

As you suggest, I didn't even bother trying to argue against the contention that qualia are prerequisite for intentionality. Not because I don't think an argument can be made, but mainly because the Less Wrong community doesn't seem to need any convincing, or didn't until you came along. My argument basically amounts to pointing to plausible theories of what the semantic relationship is, such as teleosemantics or asymmetric dependence, and noting that qualia are not mentioned or implied in those theories.

Now to answer your argument. I do think it's conceivable for an agent to have intentions to act, and have perceptions of facts, without having qualia as we know them. Call this agent Robbie Robot. Robbie is still a subject, in the sense that, e.g. "Robbie knows that the blue box fits inside the red one" is true, and expresses a semantic relation, and Robbie is the subject of that sentence. But Robbie doesn't have a subjective experience of red or blue; it only has an objective perception of red or blue. Unlike humans, Robbie has no cognitive access to an intermediate state between the actual external world of boxes, and the ultimate cognitive achievement of knowing that this box is red. Robbie is not subject to tricks of lighting. Robbie cannot be drugged in a way that makes it see colors differently. When it comes to box colors, Robbie is infallible, and therefore there is no such thing as "appears to be red" or "seems blue" to Robbie. There is no veil of perception. There is only reality. Perfect engineering has eliminated subjectivity.

This little story seems wildly improbable, but it's not self-contradictory. I think it shows that knowledge and (repeat the story with suitable substitutions) intentional action need not imply subjectivity.

Searle's main argument is this: sophistication of computation does not by itself lead to understanding. That is, just because a computer is doing something that a human could not do without understanding does not mean the computer must be understanding it as well. It is very hard to argue against this, which is why the Chinese room argument has stuck around for so long.

Searle is of the opinion that if we can find the 'mechanism' of understanding in the brain and replicate it in the computer, the computer can understand as well.

To get down to the nuts and bolts of the argument, he maintains that a precise molecular-level simulation of a human brain would be able to understand, but a computer that just happened to act intelligent might not be able to.

If that's what the Chinese Room argument says, then:

1) Either my reading comprehension is awful or Searle is awful at making himself understood.

2) Searle is so obviously right that I wonder why he bothered to create his argument.

Searle is awful at making himself understood.

Perhaps a little bit of that, and a little bit of the hordes of misguided people misunderstanding his arguments and then spreading their own misinformation around. Not to mention the opportunists who seize on the argument as a way to defend their own pseudoscientific beliefs. That was, in part, why I didn't take his argument seriously at first. I had received it through second-hand sources.

(In my experience what happens in practice is that his perspective is unconsciously conflated with mysterianism (maybe through slippery-slope reasoning), which prompts rationalized flag-wavings-dressed-as-arguments that dog-whistle 'we must heap lots of positive affect on Science, it works really well' or 'science doesn't have all the answers, we have to make room for [vague intuition about institutions that respect human dignity, or something]', depending.)

Thanks for this!

One thing to keep in mind is that there is no obvious evolutionary advantage to having some form of "understanding" over and above functional capabilities. Why would we have been selected for "understanding" or "aboutness" if these were mechanisms separate from just performing the task needed?

Without such an evolutionary selection pressure, how did our capable brains also come to be able to "understand" and "be about something" (if these were not necessary by-products)? Why didn't we just become Chinese Rooms? To me the most parsimonious explanation is that these capabilities go hand in hand with our functional capacity.

I hope my above point was cogently formulated; I'm being forced to watch Chip and Dale right next to this window ...

just because a computer is doing something that a human could not do without understanding does not mean the computer must be understanding it as well

I think linking this concept in my mind to the concept of the Chinese Room might be helpful. Thanks!

[anonymous]:

There probably will be no way until we can figure out what consciousness is.

Depends on your definition of consciousness.

[This comment is no longer endorsed by its author.]
Jack:

When they claim that a mind can emerge from "a system" without saying what the system is or how such a thing might give rise to a mind, then they are under the grip of an ideology.

Is this actually what Searle says?

That's from Wikipedia, but it does correspond pretty well to what Searle wrote...

This seems a lot like Haugeland's Demon, as described in Hofstadter and Dennett's The Mind's I.

If I don't understand COBOL, and the inert neurones certainly don't, how can the conjunction of the two understand COBOL?

This one appears to assume reductionism is false: the same reasoning can apply to a properly organized heap of neurons. Indeed an intuition pump, for we are dualists by default.

(Come to think of it, doesn't non-reductionism come from treating our ignorance as positive knowledge? In that case, the mind seems irreducible only because we just don't know how to reduce it yet.)
