Bugmaster comments on The flawed Turing test: language, understanding, and partial p-zombies - Less Wrong

11 Post author: Stuart_Armstrong 17 May 2013 02:02PM


Comment author: SaidAchmiz 20 May 2013 01:31:46AM *  8 points [-]

... I would be willing to bet that the first entity to pass the test will not be conscious, or intelligent, or have whatever spark or quality the test is supposed to measure.

I think the OP, and many commenters here, might be missing the point of the Turing test (and I can't help but suspect that the cause is not having read Turing's original article describing the idea; if so, I highly recommend remedying that situation).

Turing was not trying to answer the question "is the computer conscious", nor (the way he put it) "can machines think". His goal was to replace that question.

Some representative quotes (from Turing's "Computing Machinery and Intelligence"; note that "the imitation game" was Turing's own term for what came to be called the "Turing test"):

May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection [to the critique that failure of the test may prove nothing] is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.

...

We may now consider the ground to have been cleared and we are ready to proceed to the debate on our question "Can machines think?" ... We cannot altogether abandon the original form of the problem, for opinions will differ as to the appropriateness of the substitution and we must at least listen to what has to be said in this connection.

It will simplify matters for the reader if I first explain my own beliefs in the matter. Consider first the more accurate form of the question. I believe that in about fifty years' time it will be possible to program computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nonetheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

Basically, treating the Turing test as if it was ever even intended to give an answer to questions like "does this AI possess subjective consciousness" is rather silly. That was not even close to the intent. If we want to figure out whether something is conscious, or what have you, we'll have to find some other way. The Turing test just won't cut it — nor was it ever meant to.

Comment author: Bugmaster 20 May 2013 01:37:45AM 0 points [-]

As far as I understand, the Turing Test renders questions such as "does X really possess subjective consciousness or is it just pretending" simply irrelevant. Yes, applying the Turing Test in order to find answers to such questions would be silly; but mainly because the questions themselves are silly.

Comment author: SaidAchmiz 20 May 2013 01:49:54AM 2 points [-]

Well, right; hence Turing's dismissal of "do machines really X" as "too meaningless to deserve discussion". If we insist on trying to get an answer to such a question, "Turing-test it harder!" is not the way to go. We should, at the very least, figure out what the heck we're even asking, before trying to shoehorn the Turing test into answering it.

Comment author: [deleted] 20 May 2013 02:31:19AM *  0 points [-]

Why do you think (if you agree with Turing) that the question of whether machines think is too meaningless to deserve discussion? ...if that question isn't itself a bit paradoxical?

Comment author: Bugmaster 20 May 2013 06:06:41AM 1 point [-]

I do agree with Turing on this one. What matters is how an agent acts, not what powers its actions. For example, what if I told you that, in reality, I don't speak a word of English? Whom are you going to believe -- me, or your lying eyes?

I'm willing to go even farther out on a limb here, and claim that all the serious objections that I've seen so far are either incoherent, or presuppose some form of dualism -- which is likewise incoherent. They all boil down to saying, "No matter how closely a machine resembles a human, it will never be truly human, because true humans have souls/qualia/consciousness/etc. We have no good way of ever detecting these things or even fully defining what they are, but come on, whom are you gonna believe? Me, or your lying eyes?"

Comment author: [deleted] 20 May 2013 02:37:21PM *  0 points [-]

I do agree with Turing on this one. What matters is how an agent acts, not what powers its actions.

What's the reasoning here? This is the sort of thing that seems plausible in many cases but the generality of the claim sets off alarm bells. Is it really true that we never care about the source of a behavior over and above the issue of, say, predicting that behavior?

I'm willing to go even farther out on a limb here, and claim that all the serious objections that I've seen so far are either incoherent, or presuppose some form of dualism -- which is likewise incoherent.

Well, this isn't quite the issue. No one is objecting to the claim that machines can be people (as, I think, Dennett aptly said, that would be surprising given that people are machines). Indeed, it's out of our deep interest in that possibility that we made this mistake about Turing tests: I for one would like to be forgiven for being blind to the fact that all the Turing test can tell us is whether or not a certain property (defined entirely in terms of the test) holds of a certain system. I had no antecedent interest in that property, after all. What I wanted to know is "is this machine a person", eagerly/fearfully awaiting the day that the answer is "yes!".

You may be right that my question 'is this machine a person' is incoherent in some way. But it's surprising that the Turing test involves such a serious philosophical claim.

Comment author: Bugmaster 20 May 2013 09:45:06PM *  0 points [-]

Is it really true that we never care about the source of a behavior over and above the issue of, say, predicting that behavior?

I wasn't intending to make a claim quite that broad; but now that you mention it, I am going to answer "yes" -- because in the process of attempting to predict the behavior, we will inevitably end up building some model of the agent. This is no different from predicting the behaviors of, say, rocks.

If I see an object whose behavior is entirely consistent with that of a roughly round rock massing about 1 kg, I'm going to go ahead and assume that it's a round-ish 1 kg rock. In reality, this particular rock may be an alien spaceship in disguise, or in fact all rocks could be alien spaceships in disguise, but I'm not going to jump to that conclusion until I have some damn good reasons to do so.

You may be right that my question 'is this machine a person' is incoherent in some way. But it's surprising that the Turing test involves such a serious philosophical claim.

My point is not that the Turing Test is a serious high-caliber philosophical tool, but rather that the question "is this agent a person" is a lot simpler than philosophers make it out to be.

Comment author: SaidAchmiz 20 May 2013 04:01:11PM 0 points [-]

For example, what if I told you that, in reality, I don't speak a word of English? Whom are you going to believe -- me, or your lying eyes?

http://www.youtube.com/watch?v=dd0tTl0nxU0

Comment author: DavidS 22 May 2013 09:45:09PM *  10 points [-]

I remember hearing the story of a mathematical paper published in English but written by a Frenchman, containing the footnotes:

¹ I am grateful to professor Littlewood for helping me translate this paper into English.²

² I am grateful to professor Littlewood for helping me translate this footnote into English.³

³ I am grateful to professor Littlewood for helping me translate this footnote into English.

Why was no fourth footnote necessary?

Comment author: Bugmaster 20 May 2013 09:23:35PM 0 points [-]

So... the answer is... if I told you I don't speak any English, you'd believe me? Not sure what your point is here.

Comment author: SaidAchmiz 20 May 2013 09:28:44PM 0 points [-]

Well, I posted the link mostly as a joke, but we can take a serious lesson from it: yes, maybe I would believe you; it would depend. If you told me "I don't speak English", but then showed no sign of understanding any questioning in English, nor ever showed any further ability to speak it... then... yeah, I'd lean in the direction of believing you.

Of course if you tell me "I don't speak English" in the middle of an in-depth philosophical discussion, carried on in English, then no.

But a sufficiently carefully constructed agent could memorize a whole lot of sentences. Anyway, this is getting into GAZP vs. GLUT territory, and that's being covered elsewhere in the thread.
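(As an aside, the "memorize a whole lot of sentences" idea, i.e. a GLUT-style agent, is easy to sketch: a pure lookup table from inputs to canned replies, with no model of meaning behind it. The Python fragment below is only an illustration under that assumption; the table entries and names are invented for this example, not anything proposed in the thread.)

```python
# Minimal sketch of a lookup-table ("GLUT"-style) responder: every
# behaviour is a memorized (prompt -> canned reply) pair. The entries
# below are invented purely for illustration.
CANNED_REPLIES = {
    "do you speak english?": "Not a word, I'm afraid.",
    "what is the turing test?": "A game in which I try to pass for a human.",
    "are you conscious?": "That question is too meaningless to deserve discussion.",
}

def glut_respond(prompt: str) -> str:
    """Return the memorized reply for a prompt, or a stock evasion."""
    return CANNED_REPLIES.get(prompt.strip().lower(), "Could you rephrase that?")

if __name__ == "__main__":
    print(glut_respond("Do you speak English?"))  # -> "Not a word, I'm afraid."
```

A real GLUT would have to key on the entire conversation history rather than a single prompt, which is exactly why its table would be combinatorially enormous.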

Comment author: Bugmaster 20 May 2013 10:17:39PM 0 points [-]

Anyway, this is getting into GAZP vs. GLUT territory, and that's being covered elsewhere in the thread.

There are already quite a few comments on this post -- do you have a link to the thread in question?

Comment author: SaidAchmiz 20 May 2013 10:19:45PM 1 point [-]

Comment author: SaidAchmiz 20 May 2013 02:43:13AM 0 points [-]

I do agree with Turing, but I'm reluctant to indulge this digression in the current comment thread. My point was that regardless of whether we think that "Can machines think?" is meaningless, Turing certainly thought so, and he did not invent his test with the purpose of answering said question. When we attempt to use the Turing test to determine whether machines think, or are conscious, or any such thing, we're a) ignoring the design intent of the test, and b) using the wrong tool for the job. The Turing test is unlikely to be of any great help in answering such questions.

Comment author: [deleted] 20 May 2013 02:22:23PM 2 points [-]

Consider that point amply made, thanks.