Benito comments on The flawed Turing test: language, understanding, and partial p-zombies - Less Wrong

Post author: Stuart_Armstrong 17 May 2013 02:02PM


Comment author: Benito 20 May 2013 02:03:54PM 1 point

I don't know how useful the Turing Test is. It is (as I understand it) supposed to tell when a computer has become conscious, by comparing its responses to human responses. Yet only in the case of an uploaded mind would we expect the computer to be like a human. In practically every other situation we would have given the computer a variety of different properties. The possible mind space of conscious beings is vastly larger than the mind space of conscious humans.

Comment author: Osiris 23 May 2013 06:04:21AM -1 points

True, but we are the ones creating the AI. I suspect a programmer who only has access to human thinking would leave their mark upon any such machine.

And, since we WANT something that can relate to us, we must test its capacity for human-like behavior.

An AI that can only relate to intelligent fungi from some far-off star would be absolutely useless to us, and would likely find us equally useless. No common ground would mean no need for contact or commerce. At the risk of sounding a lil' Ferengi, I want a machine intelligence I can do business with.