SaidAchmiz comments on The flawed Turing test: language, understanding, and partial p-zombies - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (184)
I do agree with Turing on this one. What matters is how an agent acts, not what powers its actions. For example, what if I told you that in reality, I don't speak a word of English? Whom are you going to believe -- me, or your lying eyes?
I'm willing to go even farther out on a limb here, and claim that all the serious objections I've seen so far are either incoherent, or presuppose some form of dualism -- which is likewise incoherent. They all boil down to saying, "No matter how closely a machine resembles a human, it will never be truly human, because true humans have souls/qualia/consciousness/etc. We have no good way of ever detecting these things or even fully defining what they are, but come on, whom are you gonna believe? Me, or your lying eyes?"
http://www.youtube.com/watch?v=dd0tTl0nxU0
I remember hearing the story of a mathematical paper published in English but written by a Frenchman, containing the footnotes:
¹ I am grateful to professor Littlewood for helping me translate this paper into English.²

² I am grateful to professor Littlewood for helping me translate this footnote into English.³

³ I am grateful to professor Littlewood for helping me translate this footnote into English.
Why was no fourth footnote necessary?
So... the answer is... if I told you I don't speak any English, you'd believe me? Not sure what your point is here.
Well, I posted the link mostly as a joke, but we can take a serious lesson from it: yes, maybe I would believe you; it would depend. If you told me "I don't speak English", but then showed no sign of understanding any questioning in English, nor ever showed any further ability to speak it... then... yeah, I'd lean in the direction of believing you.
Of course if you tell me "I don't speak English" in the middle of an in-depth philosophical discussion, carried on in English, then no.
But a sufficiently carefully constructed agent could memorize a whole lot of sentences. Anyway, this is getting into GAZP vs. GLUT territory, and that's being covered elsewhere in the thread.
There are already quite a few comments on this post -- do you have a link to the thread in question?
http://lesswrong.com/lw/hgl/the_flawed_turing_test_language_understanding_and/90rl