Following the recent hype over the possibility of a machine passing the Turing test, Adam Ford interviews Stuart Armstrong (me) of the FHI about the meaning of the test, why we can expect a future with many "Turing test passes" under criteria of varying strictness, and how and why we test for intelligence in the first place.

I predict that we are entering an era where "X passed the Turing test" will be a more and more common announcement, followed by long discussions as to whether that was a true pass or not.

1 comment:

Can we, as rational individuals, get over the Turing test? The first computer program claimed to pass it was ELIZA, written in 1966, although that test was conducted informally and the result is contested by some. The Loebner Prize is awarded every year to the program that best "passes" the poorly defined test, and people's increasing familiarity with computers and their limitations would make the test harder to pass every year even if everything else remained constant (it hasn't: the allowed topics have expanded and the conversations have lengthened since the contest started). Nonetheless, human beings have reliably been fooled about whether they were conversing with a computer or a human being since the late '60s.

It doesn't test what it purports to test; at best it tests the humans conducting it, who often fail even to correctly identify the human beings on the other end of the console. It is also a *terrible* test of intelligence in an AI, since it measures the AI's ability to lie about being human rather than its ability to think. (Quick: what's 2^11 / 5^5, rounded to the nearest thousandth? The computers in the room have just been revealed, not by their inability to work at a human's level, but by the humans' inability to work at a computer's level.)
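For reference, the arithmetic in that parenthetical is exactly the kind of thing that gives a machine away by being answered instantly. A quick check (a minimal Python sketch, illustrative only):

```python
# The parenthetical question: 2^11 / 5^5, rounded to the nearest thousandth.
# Trivial for a computer; answering it instantly would expose a "human" as a machine.
value = 2**11 / 5**5      # 2048 / 3125 = 0.65536
print(round(value, 3))    # 0.655
```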