Vaniver comments on The flawed Turing test: language, understanding, and partial p-zombies - Less Wrong

11 points · Post author: Stuart_Armstrong · 17 May 2013 02:02PM




Comment author: Stuart_Armstrong 17 May 2013 08:19:06PM 4 points

If you were right, it would be much easier to construct such a "conversation savant" than it has proven to be.

Watson shocked me - I didn't think that type of performance was possible without AI-completeness. That was a type of savant that I thought couldn't happen before AGI.

It might be that passing for a standard human in a Turing test is actually impossible without AGI - I'm just saying that I would want more proof in the optimised-for-Turing-test situation than in others.

Comment author: Vaniver 17 May 2013 08:39:49PM 5 points

That was a type of savant that I thought couldn't happen before AGI.

This interests me (as someone professionally involved in the creation of savants, though not linguistic ones). Can you articulate why you thought that?

Comment author: Stuart_Armstrong 18 May 2013 06:19:26PM 1 point

It wasn't formalised thinking. I bought into the idea of AI-complete problems, i.e. that there were certain problems that only a true AI could solve, and that any AI which could solve them could also solve all the others. I was also informally thinking of linguistic ability as the queen of all human skills (influenced by the Turing test itself and by the continual failure of chatterbots). Finally, I wasn't cognisant of the possibility of Big Data solving these narrow problems by (clever) brute force. So I had the image of a true AI as being defined by the ability to demonstrate human-like performance on linguistic problems.