NedBlock comments on The two insights of materialism - Less Wrong

Post author: Academian 24 March 2010 02:47PM


Comment author: NedBlock 27 March 2010 03:09:54AM 2 points [-]

There is an aspect of the construction that you are not quite taking in. The programmers give a response to EVERY sequence of letters and spaces that a judge COULD type in the remaining segment of the original hour. One or more of those sequences will be a description of a laser, another will be a description of some similar device that goes counter to physical law, etc. The programmers are supposed to respond to each string as an intelligent person would respond. Here is the relevant part of the description: "Suppose the interrogator goes first, typing in one of A1...An. The programmers produce one sensible response to each of these sentences, B1...Bn. For each of B1...Bn, the interrogator can make various replies [every possible reply of all lengths up to the remaining time], so many branches will sprout below each of the Bi. Again, for each of these replies, the programmers produce one sensible response, and so on." The general point is that there is no need for the programmers to "think of" every theory: that is accomplished by exhaustion. Of course the machine is impossible but that is OK because the point is a conceptual one: having the capacity to respond intelligently for any stipulated finite period (as in the Turing Test) is not conceptually sufficient for genuine intelligence.
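The branching construction Block describes — one pre-computed sensible reply for every possible judge utterance at every turn — can be sketched as a toy lookup tree. (This is my own illustration, not Block's code; the utterances and replies are placeholder stand-ins for the exhaustive enumeration he stipulates.)

```python
# Each node maps a possible judge utterance to (canned reply, subtree of
# follow-up utterances). A real GLUT would enumerate every string up to the
# remaining time; here we show just two branches.
glut = {
    "Hello": ("Hi there.", {
        "What is a laser?": ("A device that emits coherent light.", {}),
        "Tell me a joke.": ("Why did the chicken cross the road?", {}),
    }),
}

def respond(tree, utterances):
    """Walk the reply tree along the judge's sequence of utterances."""
    replies = []
    for u in utterances:
        reply, tree = tree[u]
        replies.append(reply)
    return replies

print(respond(glut, ["Hello", "What is a laser?"]))
# ['Hi there.', 'A device that emits coherent light.']
```

Note that the conversation "state" is nothing but the path taken so far; no computation happens at query time, which is exactly what makes the machine a pure lookup table.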

Comment author: Morendil 27 March 2010 08:49:12AM *  0 points [-]

there is no need for the programmers to "think of" every theory: that is accomplished by exhaustion

That is plainly wrong. The "input" space (possible judge queries) is exhaustively covered; I'm getting that just fine. No such thing can be said about the "output" space: we're requiring that the output consist of strings encoding responses that an intelligent person would emit. The judge is allowed to say random, possibly wrong, things, but the GLUT is not so allowed.

Consider an input string which consists of a correct explanation of quantum mechanics (which we assume the builders don't know yet at build time), plus a question to the GLUT about what happens in a novel, never before encountered (by the GLUT) experimental setup. This input string is possible, and so must be considered by the builders (along with input strings that are incorrect explanations of QM plus questions about TV shows, but we needn't concern ourselves with those, an actual "judge from the builder's future" will not emit them).

In order to construct even one sensible response to this input string, to respond "as an intelligent person would", the GLUT builders must correctly predict the experimental result. An incorrect response will signal to the "judge" that the GLUT is responding by rote, without understanding. If the GLUT equivocates with "I don't know", the judge will press for an answer; we are assuming that the GLUT has answered all previous queries sensibly up to this point, that it has been a "good student" of QM. If the GLUT keeps dodging the judge's request for a prediction, the game is up: the judge will flunk it on the Turing Test.

To correctly predict an experimental result, the builders must know and understand QM, but we have assumed they don't. Assuming that the GLUT always passes the Turing Test leads us to a contradiction, so we must allow that there are some Turing Tests the GLUT is unable to pass: those that require it to learn something its builders didn't know. The GLUT does not have the capacity you are claiming for it.

(If you disagree, and think I'm still not getting it, please kindly answer the following: considering only a single input string QM+NE - explanation of quantum mechanics plus novel experiment - how do you propose that a builder who doesn't understand QM construct a sensible answer to that input string?)

Comment author: AdeleneDawner 27 March 2010 09:12:18AM 0 points [-]

You're assuming that the GLUT is simulating a person of average intelligence, right? So they ask a person of average intelligence how they'd respond to that particular sentence, given various kinds of context, and program in the answer(s).

What you're trying to get at, I think, is a situation for which the GLUT has no response, but that's already ruled out by the fact that the hypothetical situation specifies that the programmers have to have systematically considered every possible situation and programmed in a response to it. (It doesn't have to be a good response, just how a person of average intelligence would respond, so variations on 'I don't know' or 'that doesn't make sense to me' would be not just acceptable but actually correct in some situations.)

Comment author: Morendil 27 March 2010 09:44:57AM 1 point [-]

You're assuming that the GLUT is simulating a person of average intelligence, right?

Heh. I'd claim that your use of "average" here is smuggling in precisely the kind of connotations that are relied on to make the GLUT concept plausible, but which do not stand up to scrutiny.

Let's say I'm assuming the GLUT is simulating an intelligence "equivalent" to mine. And assume the GLUT builder is me, ten years ago, when I didn't know about Brehme diagrams but was otherwise relatively smart. Assume the input string is the first few chapters of the Shadowitz text on special relativity I have recently gone through. Under these assumptions, "equivalent" intelligence consists of being able to answer the exercises as correctly as I recently did.

(Crucially, if the supposed-to-be-equivalent-to-mine intelligence turns out to be for some reason cornered into saying "I don't know" or "I can't make sense of this text", I can tell for sure it's not as smart as I am, and we have a contradiction.)

The GLUT intuition pump requires that the me-of-today can "teach" the me-of-ten-years-ago how to use Brehme diagrams, to the point where the me-of-ten-years-ago can correctly answer the kind of questions about time dilation that I can answer today.

We're led to concluding one of the following:

  • that I can send information backwards in time
  • that the me-of-ten-years-ago did know about SR, contrary to stipulation
  • that the builders have another way of computing sensible answers, contrary to stipulation
  • that the "intelligence" exhibited by GLUT is restricted to making passable conversational answers but is limited in not being able to acquire new knowledge

My hunch is that this last is really what the fuzziness of the word "intelligence" allows someone thinking about GLUTs to get away with, and not realize it. The GLUT is a smarter ELIZA, but if we try to give it a specific, operational, predictive kind of intelligence of which humans are demonstrably capable, it is easily exposed as a dummy.

Comment author: AdeleneDawner 27 March 2010 09:53:12AM *  0 points [-]

In the course of building the GLUT, you-of-10-years-ago would have to, in the course of going through every possible input that the GLUT might need to respond to, encounter the first few chapters of the book in question, and figure out a correct response to that particular input string. So you-of-10-years-ago would have to know about SR, not necessarily at the start of the project, but definitely by the end of it. (And the GLUT simulating you-of-10-years-ago would be able to simulate the responses that you-of-10-years-ago generated in the learning process, assuming that you-of-10-years-ago put them in as generated rather than programming the GLUT to react as if it already knew about SR.)

Going through every possible random string is an extremely inefficient way to gain new information, though.

Comment author: Morendil 27 March 2010 10:16:34AM 0 points [-]

So you-of-10-years-ago would have to know about SR,

So you agree with me: since there is nothing special about either the 10-year stipulation or about the theory in question, we're requiring the GLUT builders to have discovered and understood every physical theory that will ever be discovered and can be taught to a person of my intelligence.

This is conceptually an even taller order than the already hard-to-swallow "impossible-but-conceptually-conceivable" machine. Where are they supposed to get the information from? This is - so we are led to conclude - a civilization which can take a stroll through the Library of Babel and pick out just those books which correspond to a sensible physical theory.

Comment author: AdeleneDawner 27 March 2010 10:37:24AM 1 point [-]

I think you misunderstood. You-of-10-years-ago doesn't have to have figured out SR prior to building the GLUT; you-of-10-years-ago would learn about SR - and an unimaginable number of other things, many of them wrong - in the course of programming the GLUT. That's implied in 'going through every possible input'. Also, you-of-10-years-ago wouldn't have to program the objectively-right answers into the GLUT, just their own responses to the various inputs, so no external data source is necessary.

Comment author: pengvado 27 March 2010 10:43:23AM 0 points [-]

The GLUT builder has to understand the given theory, and derive its implications for the novel experiment. But they don't have to know that the theory is correct. It is your later input of a correct explanation that picks the correct answer out of all the wrong ones, and the GLUT builder doesn't have to care which is which.

Comment author: Morendil 27 March 2010 10:52:05AM 0 points [-]

It is your later input of a correct explanation that picks the correct answer out of all the wrong ones

I don't get what you mean here. Please clarify?

Comment author: AdeleneDawner 27 March 2010 11:10:02AM 0 points [-]

If the tester gives the GLUT a plausible-sounding explanation of some event that is incorrect, but that you-of-10-years-ago would be deceived by, the GLUT simulation of you should respond as if deceived. Similarly, if the tester gives the GLUT an incorrect but plausible-sounding explanation of SR that you-of-10-years-ago would take as correct, the GLUT should respond as if it thinks the explanation is correct. You-of-10-years-ago would need to program both sets of responses - thinking that the incorrect explanation of SR is correct, and thinking that the correct explanation of SR is correct - into the GLUT. You-of-10-years-ago would not need to know which of those two explanations of SR was actually correct in order to program thinking-that-they-are-correct responses into the GLUT.
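The point pengvado and AdeleneDawner are making can be sketched in miniature: the builders mechanically derive an answer from every candidate "theory" the judge might explain, without ever needing to believe any of them. (A hypothetical toy of my own, using trivial arithmetic rules as stand-ins for physical theories.)

```python
# The builders enumerate every candidate theory and mechanically apply each
# one to each experiment. They never need to know which theory is true; the
# judge's later choice of explanation selects the branch.
def build_glut(candidate_theories, experiments):
    glut = {}
    for name, rule in candidate_theories.items():
        for exp in experiments:
            glut[(name, exp)] = rule(exp)  # derived under that theory only
    return glut

theories = {
    "doubling": lambda x: 2 * x,   # one plausible-sounding rule
    "squaring": lambda x: x * x,   # another; the builders don't care which is right
}
glut = build_glut(theories, [3, 4])

# A judge who first explains "squaring" and then asks about experiment 4
# reaches the answer that was derived under that explanation.
print(glut[("squaring", 4)])  # 16
```

Morendil's objection, on this reading, is about whether "mechanically apply the stated theory" is something the builders can actually do for a theory they have never understood; the sketch assumes it is.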

Comment author: Morendil 27 March 2010 11:55:44AM 0 points [-]

I do not accept that a me-of-10-years-ago could convincingly simulate these responses after forcing himself to learn every possible variation on the Shadowitz book and sincerely accepting each as true information. Conversely, if he started with the "true" Shadowitz he would have a hard time erasing that knowledge afterwards to give convincing answers to the "false" versions.

Not only would the me-of-10-years-ago not be able to convincingly reproduce, e.g., the excitement of learning new stuff and finding that it works; that me would (I suspect) simply go mad under such bizarre circumstances! This is not how learning works in an intelligent mind stipulated as "equivalent" to mine.