In response to the question of how a general intelligence could be recognised, Alan Turing proposed the following empirical test: Any entity that could interact with an investigator, fooling her into thinking it was a person, would be ascribed intelligence.

Searle's Chinese Room thought experiment rejects Turing's test, denying that a computer could under any circumstances be said to have intelligence. Searle compared a computer's actions to those of a technician whose job is to respond to messages presented in some unfamiliar script. The technician consults a list of procedures and executes some prescribed action. Searle held that the actions of a computer are necessarily comparable to those of the technician, and denied that any understanding takes place in either case.

I want to challenge Searle’s contention by arguing that his assumptions about the capabilities of a general intelligence are far too stunted.


Humans start off life just like the technician: confronted with a stream of nearly incoherent inputs (unintelligible sound waves, patterns of light and dark, or, described differently, neural activity). In a sense, we are worse off, since we initially don't have much of a repertoire of procedures to guide our behaviour. But we do have one advantage: the ability to learn.

A baby tries first one thing, then another, receiving continual feedback from his environment. With experience, the newborn learns the importance of context: reaching into the cookie bag mostly pays off, but not when there are signs of a hungry animal inside.

What is our baby doing in his explorations? He is building a complex control system, with a dense matrix of inhibitory and activating responses. When the control system reaches a certain arbitrary threshold, we say our (no longer) baby has achieved intelligence.

Now return to the technician/computer program. Instead of having him rely slavishly on a list of procedures, let's have him initially respond to assignments on the basis of some bare heuristics. In response to a request, he initiates some behaviour and is rewarded or punished. He takes note of the effects, as well as the context, and sets up hypotheses to account for the outcomes. This is all within the range of possibilities for computer programs.
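To make that last sentence concrete, here is a minimal sketch of such a reward-driven learner. Everything in it is my own illustration, not anything from Turing's or Searle's setup: the class name ContextualLearner, the fixed set of candidate responses, and the simple averaging update are all assumptions. The only point is that "try something, note the context, keep what gets rewarded" is straightforwardly programmable.

```python
import random
from collections import defaultdict

class ContextualLearner:
    """Toy reward-driven learner: tracks how well each candidate response
    has worked in each context and gradually prefers the better ones."""

    def __init__(self, responses, exploration=0.1, step_size=0.2):
        self.responses = list(responses)   # candidate responses (actions)
        self.exploration = exploration     # chance of trying something at random
        self.step_size = step_size         # how quickly estimates move toward feedback
        # value[context][response]: running estimate of the reward for that pair
        self.value = defaultdict(lambda: defaultdict(float))

    def act(self, context):
        # Mostly exploit what has worked before; occasionally explore.
        if random.random() < self.exploration:
            return random.choice(self.responses)
        estimates = self.value[context]
        return max(self.responses, key=lambda r: estimates[r])

    def learn(self, context, response, reward):
        # Nudge the estimate for (context, response) toward the observed reward.
        old = self.value[context][response]
        self.value[context][response] = old + self.step_size * (reward - old)
```

The epsilon-greedy choice in `act` mirrors the baby's explorations: mostly do what has worked before, occasionally try something else.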

Our technician continues refining his hypotheses and, over time, gets quite good at obtaining rewards and avoiding punishments. An advisor could speed the process by getting him to correct his hypotheses. But, given enough experience, the technician is able to perfect his skills on his own.
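To round out the picture, here is a hypothetical training loop using the ContextualLearner sketch above. The contexts, responses, and reward rule are all invented for illustration; the point is that repeated trial and feedback is enough for behaviour to settle on what gets rewarded, with or without an advisor.

```python
# Hypothetical training loop, continuing the ContextualLearner sketch above.
# The "environment" is faked: three made-up contexts, each with one response
# that happens to earn a reward. An advisor would amount to supplying better
# feedback sooner; left alone, the learner still converges with enough trials.
import random

learner = ContextualLearner(responses=["reply_a", "reply_b", "reply_c"])
rewarding = {("greeting", "reply_a"), ("question", "reply_b"), ("complaint", "reply_c")}

for step in range(10_000):
    context = random.choice(["greeting", "question", "complaint"])
    response = learner.act(context)
    reward = 1.0 if (context, response) in rewarding else 0.0
    learner.learn(context, response, reward)

# After training, learner.act("question") returns "reply_b" almost always.
```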

Has our technician mastered Chinese? Absolutely! If you don't agree, tell me what you think is still missing.

Just like a human, the computer program achieves intelligence once it passes some threshold of complexity in successfully navigating its environment. Intelligence is a matter of the range and density of its hypotheses. It has nothing to do with its realisation in a biological organism.

Why is Searle's Chinese Room so persuasive, and how did it fool people for so long? Searle rigged his analogy in two important ways:

- He inappropriately treats intelligence as a discrete variable, then assumes an extremely narrow range of inputs and outputs where learning is impossible.

- He plays upon normal human anxieties about being compared with an inanimate object, scaring potential critics away from questioning his analogy.

4 comments:

Searle meant the mechanically performing technician as an analogy for the mechanical, deterministic processes in a computer. You cannot reject Searle by magically introducing computation outside of the symbol lookup table, just as, in a computer, no computation happens outside of the computer's circuits.

Now, the mistake that Searle made was much more trivial and embarrassing. From Wikipedia:

> The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese?

There is no empirical difference underlying the conundrum. If Searle were made to explain what he means by "literally understand" and how it differs from "merely simulating", the problem would dissolve.

TAG

> There is no empirical difference underlying the conundrum

Lots of things disappear if you restrict empiricism to the outside, objective view. Subjectively, there is a difference between doing something by rote and really understanding it.

jmh

I've often thought the best test of the presence of an intelligence would not be "can it solve a problem someone gave it to solve" but rather "can it identify a problem for itself to solve, without some external intelligence posing the problem?" In other words, can it start asking and answering its own questions?

If you put an intelligent actor in the room with no book and tasked it with coming up with the book, the actor might very well pull that stunt off. However, for the technician to be relevantly analogous to a computer, he cannot employ original ingenuity. Reference to the symbol-replacement table must be adequate to uniquely determine how the program will run. You cannot consult "extra source code". Thus you need a book on how to write books. It might be illustrative to think about how the tallies of punishments received would appear on cards that the book itself instructs to be written. This meta-book would be nice in that it would be the same book whether you were entering the Chinese room or the German room (or really any room). However, "slavish procedures" cannot relevantly be outrun: you could, and would, follow the book "slavishly and mindlessly".