AspiringRationalist comments on AI prediction case study 3: Searle's Chinese room - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (34)
The Chinese room argument is wrong because it fails to account for emergence. A system can possess properties that its components lack; for example, my brain is made of neurons, none of which understands English, but that doesn't mean my brain as a whole doesn't understand English. The same argument applies to the Chinese room.
The broader failure is assuming that what holds at one level of abstraction must hold at another.
But a computational system can't be mysteriously emergent. Your response is equivalent to saying that semantics is constructed reductionistically out of syntax. How?