TheOtherDave comments on The flawed Turing test: language, understanding, and partial p-zombies - Less Wrong

Post author: Stuart_Armstrong 17 May 2013 02:02PM


Comment author: Bugmaster 18 May 2013 08:51:49PM 2 points

I understand what you're saying, but I don't understand why you're saying it. I can come up with several different interpretations of your statement:

  • Regular humans do not need to utilize their general intelligence in order to chat, and thus neither does the AI.
  • It's possible for a chatterbot to appear generally intelligent without actually being generally intelligent.
  • You and I are talking about radically different things when we say "general intelligence".
  • You and I are talking about radically different things when we say "chatting".

To shed light on these points, here are some questions:

  • Do you believe that a non-AGI chatterbot would be able to engage in a conversation with you that is very similar to the one you and I are having now?
  • Admittedly, I am not all that intelligent and thus not a good test case. Do you believe that a non-AGI chatterbot could be built to emulate you personally, to the point where strangers talking with it on Less Wrong could not tell the difference between it and you?
Comment author: MugaSofer 20 May 2013 10:29:59AM -1 points

> Regular humans do not need to utilize their general intelligence in order to chat, and thus neither does the AI.

Actually, this seems surprisingly plausible, thinking about it. A lot of conversations are on something like autopilot.

But eventually even a human will need to think in order to continue.
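
For what it's worth, here is a rough sketch of the kind of shallow pattern matching an "autopilot" chatterbot can run on. The patterns, replies, and function names are invented for illustration (not any particular existing bot); the point is only that nothing in it reasons about the conversation, yet it can keep one going for a while:

```python
import random
import re

# Minimal ELIZA-style rules: a regex to spot a surface pattern in the user's
# message, plus canned reply templates that may echo the captured fragment.
RULES = [
    (r"\bI (?:think|believe) (.+)", ["Why do you believe {0}?",
                                     "What convinced you that {0}?"]),
    (r"\bbecause (.+)",             ["Is that the only reason?",
                                     "Does '{0}' really explain it?"]),
    (r"\?$",                        ["That's a good question. What do you think?",
                                     "Hard to say. Why do you ask?"]),
]

# Stock non-committal prompts used when nothing matches.
FALLBACKS = ["I see. Go on.", "Interesting. Tell me more.", "Can you elaborate?"]

def reply(message: str) -> str:
    """Return a response by surface pattern matching; no understanding involved."""
    for pattern, templates in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            fragment = match.group(1) if match.groups() else ""
            return random.choice(templates).format(fragment.rstrip(".!? "))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("I think chatting requires general intelligence."))
    print(reply("Why is that?"))
```

A bot like this sustains exactly the "autopilot" exchanges described above, and it falls apart at the point where the conversation demands actual thought, which is the distinction the two comments are circling.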