
Qiaochu_Yuan comments on S.E.A.R.L.E's COBOL room - Less Wrong Discussion

Post author: Stuart_Armstrong 01 February 2013 08:29PM (29 points)




Comment author: passive_fist 01 February 2013 10:14:58PM 6 points

I didn't take Searle's arguments seriously until I actually understood what they were about.

Before anything, I should say that I disagree with Searle's arguments. However, it is important to understand them if we are to have a rational discussion.

Most importantly, Searle does not claim that machines can never understand, or that there is something inherently special about the human brain that cannot be replicated in a computer. He acknowledges that the human brain is governed by physics and is probably subject to the Church-Turing thesis.

Searle's main argument is this: sophistication of computation does not by itself lead to understanding. That is, just because a computer is doing something that a human could not do without understanding does not mean the computer must be understanding it as well. It is very hard to argue against this, which is why the Chinese room argument has stuck around for so long.

Searle is of the opinion that if we can find the 'mechanism' of understanding in the brain and replicate it in the computer, the computer can understand as well.

To get down to the nuts and bolts of the argument, he maintains that a precise molecular-level simulation of a human brain would be able to understand, but a computer that just happened to act intelligent might not be able to.

In my opinion, this argument just hides yet another form of vitalism: the idea that there is something above and beyond the mechanical. However, everywhere we've looked in the brain, we've found just neurons doing simple computations on their inputs. I believe that is all there is to it - that something with the capabilities of the human brain also has the ability to understand.

However, this is just a belief at this point. There is no way to prove it. There probably will be no way until we can figure out what consciousness is.

So there you have it. The Chinese room argument is really just another form of the Hard Problem of consciousness. Nothing new to see here.

Comment author: Qiaochu_Yuan 01 February 2013 10:39:55PM 7 points

Taboo "understanding."

Comment author: khafra 04 February 2013 12:43:19PM 2 points

The only good taboo of understanding I've ever read came from an LW quotes thread, quoting Feynman, quoting Dirac:

I understand what an equation means if I have a way of figuring out the characteristics of its solution without actually solving it.
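As one illustration of the criterion (my example, not Dirac's): for the oscillator equation \(y'' = -y\), you can pin down the character of the solutions without ever solving the equation.

```latex
% Multiply y'' = -y by y' and recognize a total derivative:
\[
  y'\,y'' + y\,y' = 0
  \quad\Longrightarrow\quad
  \frac{d}{dt}\left[\tfrac{1}{2}(y')^{2} + \tfrac{1}{2}y^{2}\right] = 0 .
\]
% The quantity E = (y')^2/2 + y^2/2 is conserved, so every solution stays
% on a circle of radius sqrt(2E) in the (y, y') plane: bounded and
% oscillating, without ever writing down y(t) explicitly.
```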

By this criterion, the Chinese Room might not actually understand Chinese, whereas a human Chinese speaker does. That is: when you have heard all but the last word of a sentence, can you give a much tighter probability distribution over the final word than maxent over common vocabulary that fits grammatically?
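To make "tighter than maxent" concrete: a tighter distribution is one with lower Shannon entropy than the uniform (maximum-entropy) baseline over the grammatically plausible candidates. The sentence, candidate words, and probabilities below are all illustrative, not from any real model:

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a probability distribution (dict word -> prob)."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical grammatical candidates for "I drank a cup of ___".
candidates = ["tea", "coffee", "water", "soup", "paint"]

# Max-entropy baseline: uniform over the candidates that fit grammatically.
maxent = {w: 1 / len(candidates) for w in candidates}

# A speaker who understands the sentence concentrates probability mass.
speaker = {"tea": 0.5, "coffee": 0.35, "water": 0.1, "soup": 0.04, "paint": 0.01}

print(round(entropy(maxent), 2))   # 2.32 bits (log2 of 5 candidates)
print(round(entropy(speaker), 2))  # 1.61 bits: a "tighter" distribution
```

Passing Dirac's test here just means consistently beating the uniform baseline, which is something a Chinese Room could in principle be scripted to do as well.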

Comment author: TheOtherDave 04 February 2013 01:48:25PM 5 points

I would say I understand a system to the extent that I'm capable of predicting its behavior given novel inputs. Which seems to be getting at something similar to Dirac's version.

you hear all but the last word of a sentence, can you give a much tighter probability distribution over the end of the sentence than maxent over common vocabulary that fits grammatically?

IIRC, the CR as Searle describes it would include rules for responding to the question "What are likely last words that end this sentence?" in the same way a Chinese speaker would. So presumably it is capable of doing that, if asked.

And, definitionally, of doing so without understanding.

To my way of thinking, that makes the CR a logical impossibility, and reasoning forward from an assumption of its existence can lead to nonsensical conclusions.

Comment author: khafra 04 February 2013 02:25:00PM 2 points

Good point; I was thinking of "figuring out the characteristics" fuzzily. But if it is defined as giving correctly predictive output in response to a given interrogative, then the room either does it correctly, or isn't a fully-functioning Chinese Room.

Comment author: Tyrrell_McAllister 02 February 2013 12:08:09AM 3 points

It is good to taboo words, but it is also good to criticize the attempts of others to taboo words, if you can make the case that those attempts fail to capture something important.

For example, it seems possible that a computer could predict your actions to high precision, but by running computations so different from the ones that you would have run yourself that the simulated-you doesn't have subjective experiences. (If I understand it correctly, this is the idea behind Eliezer's search for a non-person predicate. It would be good if this is possible, because then a superintelligence could run alternate histories without torturing millions of sentient simulated beings.) If such a thing is possible, then any superficial behavioristic attempt to taboo "subjective experience" will be missing something important.

Furthermore, I can mount this critique of such an attempt without being obliged to taboo "subjective experience" myself. That is, making the critique is valuable even if it doesn't offer an alternative way to taboo "subjective experience".

Comment author: Qiaochu_Yuan 02 February 2013 03:23:51AM 5 points

It's not clear to me that "understanding" means "subjective experience," which is one of several reasons why I think it's reasonable for me to ask that we taboo "understanding."

Comment author: Tyrrell_McAllister 02 February 2013 04:22:38AM 3 points

I didn't mean to suggest that "understanding" means "subjective experience", or to suggest that anyone else was suggesting that.