AspiringRationalist comments on AI prediction case study 3: Searle's Chinese room - Less Wrong

Post author: Stuart_Armstrong 13 March 2013 12:44PM




Comment author: AspiringRationalist 15 March 2013 04:02:57AM 0 points

The Chinese room argument is wrong because it fails to account for emergence. A system can possess properties that its components don't; for example, my brain is made of neurons, none of which understands English, but that doesn't mean my brain as a whole doesn't. The same argument can be applied to the Chinese room.

The broader failure is assuming that what applies at one level of abstraction must apply at another.
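A toy sketch of that point in code (my own illustration, assuming Python and NAND gates as the component level): no individual NAND gate can add, yet a circuit built only from NAND gates implements binary addition. "Adds" is a property of the composed system, not of any component.

```python
# Emergence in miniature: no single NAND gate "adds",
# but a circuit composed solely of NAND gates does.

def nand(a: int, b: int) -> int:
    """A NAND gate: the only primitive used below."""
    return 0 if (a and b) else 1

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Sum and carry of two bits, built purely from NAND gates."""
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR, built from four NANDs
    c = nand(n1, n1)                    # AND, built from two NANDs
    return s, c

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} -> sum={half_adder(a, b)[0]}, carry={half_adder(a, b)[1]}")
```

On this view, understanding in the Chinese room would likewise be a property of the whole system, not of the man or of any individual rule.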

Comment author: TheAncientGeek 12 April 2015 08:12:25PM 0 points

A system can possess properties that its components don't; 

But a computational system can't be mysteriously emergent. Your response is equivalent to saying that semantics is constructed reductionistically out of syntax. How?