The chatterbot "Eugene Goostman" has apparently passed the Turing test:
No computer had ever previously passed the Turing Test, which requires 30 per cent of human interrogators to be duped during a series of five-minute keyboard conversations, organisers from the University of Reading said.
But ''Eugene Goostman'', a computer programme developed to simulate a 13-year-old boy, managed to convince 33 per cent of the judges that it was human, the university said.
As I kind of predicted, the program passed the Turing test without showing any trace of general intelligence. Is this a kind of weak p-zombie?
EDIT: That it was a publicity stunt, and that the judges were pretty terrible, does not change the fact that Turing's criteria were met. We now know those criteria were insufficient, but only because machines like this were able to meet them.
Let's discuss a new type of Reverse Turing Test.
This simply consists of coming up with a general class of question that you think would reliably distinguish between a chatbot and a human within about 5 minutes of conversation, and explaining which feature of "intelligence" this class of question probes.
If you can't formulate the broad requirements for such a class of question, you have no business being a judge in a Turing test. You're merely playing with the chatbot as you would play a video game.
One of my candidates for this kind of question: ask the interviewee to explain a common error of reasoning that people make, or could make. For instance: "If you look at the numbers, there's quite a correlation between sales of ice cream in coastal locations and the number of drownings. Some people might be tempted to conclude that ice cream causes people to drown. Do you think that's right, and if not, why not?"
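The reasoning error behind that question is easy to demonstrate concretely: a hidden confounder (here, temperature) can drive two variables to correlate strongly even though neither causes the other. Below is a minimal simulation sketch; the coefficients and noise levels are made-up assumptions chosen only to make the effect visible.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# Hypothetical data: daily temperature is the confounder.
temps = [random.uniform(10, 35) for _ in range(1000)]          # degrees C
ice_cream = [3.0 * t + random.gauss(0, 5) for t in temps]      # sales rise with heat
drownings = [0.2 * t + random.gauss(0, 1) for t in temps]      # more swimmers when hot

r = pearson(ice_cream, drownings)
print(f"correlation between ice cream sales and drownings: {r:.2f}")
# Strongly positive, yet neither variable causally affects the other:
# conditioning on temperature would make the association vanish.
```

A judge's question along these lines probes whether the interviewee can articulate the confounder, not merely pattern-match the phrase "correlation is not causation."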
For another example, Dennett discusses having the chatbot explain a joke.
ETA: Scott Aaronson passes with flying colors. Chatbots are likely to lack basic encyclopedic knowledge about the world which every human possesses. (To some extent things like the Wolfram platform could overcome this for precise questions such as Scott's first - but that still leaves variants like "what's more dangerous, a tiger or an edible plant" that are vague enough that quantitative answers probably won't be accessible to a chatbot.)
I highly recommend The Most Human Human by Brian Christian, in which he participates in a Turing test as one of the human decoys and puts a lot of thought into how to steer the conversations so as to win the distinction of being the human most frequently correctly identified as human.