Stuart_Armstrong comments on The flawed Turing test: language, understanding, and partial p-zombies - Less Wrong

Post author: Stuart_Armstrong 17 May 2013 02:02PM


Comment author: Stuart_Armstrong 17 May 2013 05:24:27PM 1 point

The problem with this line of reasoning is that the Turing test is very open-ended. You have no idea what a bunch of humans will want to talk to your machine about. Maybe about God, maybe about love, maybe about remembering your first big bloody scrape as a kid... Maybe your machine will get some moral puzzles, maybe logical paradoxes, maybe some nonsense.

This was more of a challenge before the web, with its trillions of lines of text on every subject. Because of this, I no longer consider the text-based test that good - a truly open-ended test would need to deviate from this text-based format nowadays.

Comment author: Bugmaster 17 May 2013 09:04:21PM 1 point

a truly open-ended test would need to deviate from this text-based format nowadays.

Where does this leave mute humans, partially paralyzed humans, or any other humans who can't verbally speak your language? If we still classify them as "human", then what reason do you have for rejecting the AI?

Comment author: VCM 19 May 2013 07:47:03PM 0 points

That's why the test only offers a sufficient condition for intelligence (not a necessary one) - at least that's the standard view.

Comment author: Stuart_Armstrong 18 May 2013 06:32:27PM 0 points

The Turing test retains validity as a general test for all systems that are not specifically optimised to pass it.

For instance, the Turing test is good for checking whether whole brain emulations are conscious. Conversation is enough to check that humans are conscious (and if a dog or dolphin managed conversation, it would work as a test for them as well).

Comment author: Bugmaster 18 May 2013 08:52:50PM 0 points

This is a circular argument, IMO. How can you tell whether you're talking to a whole brain emulation or a bot designed to mimic a whole brain emulation?

Comment author: Stuart_Armstrong 20 May 2013 09:15:01AM 0 points

By knowing its provenance. Maybe, when we get more sophisticated and knowledgeable about these things, by looking at its code.

Similarly with humans: when assessing whether they're lying, knowing the details of their pasts (especially, for instance, whether they were trained to lie professionally) should affect your assessment of their performance.

Comment author: DSimon 17 May 2013 08:41:10PM 1 point

But you can keep adding specifics to a subject until you arrive at something novel. I don't think it would even be that hard: just Google the key phrases of whatever you're about to say, and if you get back results that could be smooshed into a coherent answer, then you need to keep changing things up or complicating them.
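The procedure DSimon describes can be sketched roughly in code. This is only an illustration: the function names are invented for the sketch, and a small stub corpus stands in for an actual web search API, which a real version would need.

```python
# Sketch of the novelty check: extract key phrases from a candidate reply,
# "search" for them, and if enough of the reply is findable verbatim, flag
# it as something that could be smooshed together from existing text.
# The corpus below is a stand-in for the web (illustration only).

def key_phrases(text, length=3):
    """Split text into overlapping word n-grams to use as search phrases."""
    words = text.lower().split()
    return [" ".join(words[i:i + length]) for i in range(len(words) - length + 1)]

def search_corpus(phrase, corpus):
    """Stand-in for a web search: return documents containing the phrase."""
    return [doc for doc in corpus if phrase in doc.lower()]

def needs_complicating(candidate_reply, corpus, threshold=0.5):
    """True if enough of the reply's phrases turn up in the corpus that the
    reply could plausibly have been assembled from existing text."""
    phrases = key_phrases(candidate_reply)
    if not phrases:
        return False
    hits = sum(1 for p in phrases if search_corpus(p, corpus))
    return hits / len(phrases) >= threshold

corpus = [
    "The Turing test is a test of a machine's ability to exhibit intelligent behaviour.",
    "I remember my first big bloody scrape as a kid.",
]

print(needs_complicating("the turing test is a test", corpus))                      # True
print(needs_complicating("my dolphin debated theodicy during recess yesterday", corpus))  # False
```

The threshold is arbitrary; the point is only that "findable verbatim" is a mechanical check, so an interrogator (or a machine) could iterate it until the exchange leaves the space of existing text.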