DanArmak comments on Come up with better Turing Tests - Less Wrong

13 Post author: Stuart_Armstrong 10 June 2014 10:47AM




Comment author: DanArmak 11 June 2014 08:34:33AM 0 points

the Turing test serves as a pretty good marker of generalizability

That argues that any sufficiently general system could pass the Turing test. But maybe it's really impossible to pass the test without investing a lot of 'narrow' resources in that specific goal. Even if an AGI could self-modify to pass for human, it would not bother unless that were an instrumental goal (e.g. to trick humans), at which point, from an FAI viewpoint, it's probably already too late for you.

We should be able to recognize a powerful, smart, general intelligence without requiring that it be good at pretending to be a completely different kind of powerful, smart, general intelligence that has a lot of social quirks and cues.

The Turing test is an excellent benchmark for their performance; I no longer think we can take a pass as evidence of strong general intelligence, but humanlike responses are so useful in these roles that I still think it's a good thing to shoot for.

Again, I don't think the Turing test is necessary in this example. Siri can fulfill every objective of its designers without being able to trick humans who genuinely want to know whether it's an AI. A robotic hotel concierge wants to make guests comfortable and serve their needs; there is no reason that should involve deceiving them.