torekp comments on S.E.A.R.L.E's COBOL room - Less Wrong

29 Post author: Stuart_Armstrong 01 February 2013 08:29PM


Comment author: passive_fist 01 February 2013 10:14:58PM 6 points [-]

I didn't take Searle's arguments seriously until I actually understood what they were about.

Before anything, I should say that I disagree with Searle's arguments. However, it is important to understand them if we are to have a rational discussion.

Most importantly, Searle does not claim that machines can never understand, or that there is something inherently special about the human brain that cannot be replicated in a computer. He acknowledges that the human brain is governed by physics and is probably subject to the Church-Turing thesis.

Searle's main argument is this: sophistication of computation does not by itself lead to understanding. That is, just because a computer is doing something that a human could not do without understanding does not mean the computer must understand it as well. This is very hard to argue against, which is why the Chinese room argument has stuck around for so long.

Searle is of the opinion that if we can find the 'mechanism' of understanding in the brain and replicate it in the computer, the computer can understand as well.

To get down to the nuts and bolts of the argument, he maintains that a precise molecular-level simulation of a human brain would be able to understand, but a computer that just happened to act intelligent might not be able to.

In my opinion, this argument just hides yet another form of vitalism: the idea that there is something above and beyond the mechanical. However, everywhere we've looked in the brain, we've found just neurons doing simple computations on their inputs. I believe that is all there is to it - that anything with the capabilities of the human brain also has the ability to understand.

However, this is just a belief at this point. There is no way to prove it, and there probably will be no way until we can figure out what consciousness is.

So there you have it. The Chinese room argument is really just another form of the Hard Problem of consciousness. Nothing new to see here.

Comment author: torekp 05 February 2013 02:00:20AM 2 points [-]

The Chinese room argument is really just another form of the Hard Problem of consciousness.

This is correct and deserves elaboration.

Searle makes clear his agreement with Brentano that intentionality is the hallmark of consciousness. "Intentionality" here means about-ness, i.e. a semantic relation whereby a word (for example) is about an object. For Searle, all consciousness involves intentionality, and all intentionality either directly involves consciousness or derives ultimately from consciousness. But suppose we also smuggle in the assumption - and for English speakers, this will come naturally - that subjective experience is necessarily entwined with "consciousness". In that case we commit to a view we could summarize as "intentionality if and only if subjective experience."

Now, let me admit: Searle never explicitly endorses such a statement, as far as I know. I think it has nothing to recommend it, either. But I do think he believes it, because that would explain so much of what he does explicitly say.

Why do I reject "intentionality if and only if subjective experience"? For one thing, there are simple states of consciousness - moods, for example - that have no intentionality, so subjectivity fails to imply intentionality. Nor can I see any reason that the implication holds in the direction from intentionality to subjectivity.

Searle's arguments fail to show that AIs in the "computationalist" conception can't think about, and talk about, stuff. But then, that just shows that he picked the wrong target. Intentionality is easy. The real question is qualia.

Comment author: bouilhet 14 September 2013 02:56:03PM 1 point [-]

Why do I reject "intentionality if and only if subjective experience"? For one thing, there are simple states of consciousness - moods, for example - that have no intentionality, so subjectivity fails to imply intentionality. Nor can I see any reason that the implication holds in the direction from intentionality to subjectivity.

I think this is a bit confused. It isn't that simple states of consciousness, qualia, etc. imply intentionality, but rather that they are prerequisites for intentionality. "X if and only if Y" means, among other things, that there can be no X without Y. I'm not familiar enough with Searle to comment on his endorsement of the idea, but it makes sense to me at least that in order to have intention (in the sense of will), an agent would first have to be able to perceive (subjectively, of course) the surroundings and other agents on which it intends to act. You say intentionality is "easy". Okay. But what does it mean to talk of intentionality without a subject to have the intention?

Comment author: torekp 15 September 2013 04:05:28PM *  0 points [-]

"Intentionality" is an unfortunate word choice here, because it's not primarily about intention in the sense of will. Blame Brentano, and Searle for following him, for that word choice. Intentionality means aboutness, i.e. a semantic relation between word and object, belief and fact, or desire and outcome. The last example shows that intention in the sense of will is included within "intentionality" as Searle uses it, but it's not the only example. Your argument is still plausible and relevant, and I'll try to reply in a moment.

As you suggest, I didn't even bother trying to argue against the contention that qualia are prerequisite for intentionality. Not because I don't think an argument can be made, but mainly because the Less Wrong community doesn't seem to need any convincing, or didn't until you came along. My argument basically amounts to pointing to plausible theories of what the semantic relationship is, such as teleosemantics or asymmetric dependence, and noting that qualia are not mentioned or implied in those theories.

Now to answer your argument. I do think it's conceivable for an agent to have intentions to act, and perceptions of facts, without having qualia as we know them. Call this agent Robbie Robot. Robbie is still a subject, in the sense that, e.g., "Robbie knows that the blue box fits inside the red one" is true and expresses a semantic relation, and Robbie is the subject of that sentence. But Robbie doesn't have a subjective experience of red or blue; it only has an objective perception of red or blue. Unlike humans, Robbie has no cognitive access to an intermediate state between the actual external world of boxes and the ultimate cognitive achievement of knowing that this box is red. Robbie is not subject to tricks of lighting. Robbie cannot be drugged in a way that makes it see colors differently. When it comes to box colors, Robbie is infallible, and therefore there is no such thing as "appears to be red" or "seems blue" to Robbie. There is no veil of perception. There is only reality. Perfect engineering has eliminated subjectivity.

This little story seems wildly improbable, but it's not self-contradictory. I think it shows that knowledge and (repeat the story with suitable substitutions) intentional action need not imply subjectivity.