Bugmaster comments on The flawed Turing test: language, understanding, and partial p-zombies - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think we've hit this milestone already, but we kind of cheated: in addition to just making computers smarter, we made human conversations dumber. Thus, if we wanted to stay true to Turing's original criteria, we'd need to scale up our present-day requirements (say, to something like 80% chance over 60 minutes), in order to keep up with inflation.
I can propose one relatively straightforward criterion: "can this agent take the place of a human on our social network graph?" By this I don't simply mean, "can we friend it on Facebook"; when I say "social network", I mean "the overall fabric of our society". This network includes relationships such as "friend", "employee", "voter", "possessor of certain rights", etc.
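As a toy illustration (my own sketch, not anything the test itself prescribes), the "social network graph" idea can be modeled as nodes joined by labeled edges such as "friend", "employee", and "voter", with a hypothetical `can_take_place_of()` asking whether a candidate agent supports every relationship type a given human node occupies:

```python
from collections import defaultdict

class SocialGraph:
    def __init__(self):
        # edges[node] is a set of (relationship, other_node) pairs
        self.edges = defaultdict(set)

    def add_relation(self, a, relation, b):
        # relationships here are treated as symmetric for simplicity
        self.edges[a].add((relation, b))
        self.edges[b].add((relation, a))

    def roles_of(self, node):
        # the set of relationship types this node participates in
        return {relation for relation, _ in self.edges[node]}

def can_take_place_of(graph, human, agent_capabilities):
    # Purely functional test: does the agent support every role the
    # human currently fills? No inspection of the agent's internals.
    return graph.roles_of(human) <= agent_capabilities

g = SocialGraph()
g.add_relation("alice", "friend", "bob")
g.add_relation("alice", "employee", "acme")
g.add_relation("alice", "voter", "city")

print(can_take_place_of(g, "alice", {"friend", "employee", "voter"}))  # True
print(can_take_place_of(g, "alice", {"friend"}))                       # False
```

The point of the subset check is that it only looks at observable roles, never at how the agent is implemented, which matches the "purely functional" evaluation described below.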
I think this is a pretty good criterion, and I also think that it could be evaluated in purely functional terms. We shouldn't need to read an agent's genetic/computer/quantum/whatever code in order to determine whether it can participate in our society; we can just give it the Turing Test, instead. In a way, we already do this with humans, all the time -- only the test is administered continuously, and sometimes we get the answers wrong.