Kawoomba comments on S.E.A.R.L.E's COBOL room - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (33)
I didn't take Searle's arguments seriously until I actually understood what they were about.
Before anything else, I should say that I disagree with Searle's arguments. However, it is important to understand them if we are to have a rational discussion.
Most importantly, Searle does not claim that machines can never understand, or that there is something inherently special about the human brain that cannot be replicated in a computer. He acknowledges that the human brain is governed by physics and is probably subject to the Church–Turing thesis.
Searle's main argument is this: sophistication of computation does not by itself lead to understanding. That is, just because a computer is doing something that a human could not do without understanding does not mean the computer must understand it as well. It is very hard to argue against this, which is why the Chinese room argument has stuck around for so long.
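The gap between behavior and understanding is easy to see in a toy sketch. Here is a minimal "Chinese room" as pure symbol manipulation; the rule table and phrases are invented for illustration, and a real room would need a vastly larger rule book, but the point is the same: the mechanism matches and copies uninterpreted strings, and nothing in it "knows" what they mean.

```python
# A toy "Chinese room": a lookup table maps input symbols to output
# symbols by purely formal rules. The entries below are invented examples.
RULE_BOOK = {
    "你好": "你好！",        # greeting -> greeting
    "你好吗": "我很好。",    # "how are you?" -> "I'm fine."
    "再见": "再见！",        # farewell -> farewell
}

def room(symbols: str) -> str:
    """Return whatever the rule book dictates, or a stock fallback."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "please say that again"

# To an outside observer the room "converses" in Chinese; internally it
# only matches strings, with no grasp of their meaning.
print(room("你好吗"))  # 我很好。
```

Whether scaling this kind of rule-following up to full conversational competence would, at some point, amount to understanding is exactly what the argument disputes.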
Searle is of the opinion that if we can find the 'mechanism' of understanding in the brain and replicate it in the computer, the computer can understand as well.
To get down to the nuts and bolts of the argument, he maintains that a precise molecular-level simulation of a human brain would be able to understand, but a computer that just happened to act intelligent might not be able to.
In my opinion, this argument just hides yet another form of vitalism: the idea that there is something above and beyond the mechanical. However, everywhere in the brain we've looked, we've found just neurons doing simple computations on their inputs. I believe that that is all there is to it - that something with the capabilities of the human brain also has the ability to understand.
However, this is just a belief at this point. There is no way to prove it. There probably will be no way until we can figure out what consciousness is.
So there you have it. The Chinese room argument is really just another form of the Hard Problem of consciousness. Nothing new to see here.
Thanks for this!
One thing to keep in mind is that there is no obvious evolutionary advantage to also having some form of "understanding" other than functional capabilities. Why would we have been selected for "understanding", "aboutness", if these were a mechanism separate from just performing the task needed?
Without such an evolutionary selection pressure, how did our capable brains also evolve to be able to "understand" and "be about something" (if these were not necessary by-products)? Why didn't we just become Chinese Rooms? To me the most parsimonious explanation is that these capabilities go hand in hand with our functional capacity.
I hope my above point was cogently formulated, I'm forced into watching Chip and Dale right next to this window ...