Nick_Tarleton comments on The two insights of materialism - Less Wrong

18 Post author: Academian 24 March 2010 02:47PM




Comment author: mattnewport 25 March 2010 05:42:31PM 0 points [-]

As far as I can tell from looking at those links both Searle and Pearce would deny the possibility of simulating a person with a conventional computer. I understand that position and while I think it is probably wrong it is not obviously wrong and it could turn out to be true. It seems that this is also Penrose's position.

From the Chinese Room Wikipedia entry for example:

Searle accuses strong AI of dualism, the idea that the mind and the body are made up of different "substances". He writes that "strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter." He rejects any form of dualism, writing that "brains cause minds" and that "actual human mental phenomena [are] dependent on actual physical-chemical properties of actual human brains", a position called "biological naturalism" (as opposed to alternatives like behaviourism, functionalism, identity theory and dualism).

From the Pearce link you gave:

Secondly, why is it that, say, an ant colony or the population of China or (I'd argue) a digital computer - with its classical serial architecture and "von Neumann bottleneck" - don't support a unitary consciousness beyond the aggregate consciousness of its individual constituents, whereas a hundred billion (apparently) discrete but functionally interconnected nerve cells of a waking/dreaming vertebrate CNS can generate a unitary experiential field? I'd argue that it's the functionally unique valence properties of the carbon atom that generate the macromolecular structures needed for unitary conscious mind from the primordial quantum minddust.

So I still wonder whether anyone actually believes that you could simulate a human mind with a computer but that it would not be conscious.

Comment author: Nick_Tarleton 25 March 2010 06:08:20PM *  1 point [-]

Basically, what bogus said.

I'm confused about what you mean by "simulating a person". Presumably you don't mean simulating in a way that is conscious/has mental states (since that would make the claim under discussion trivially, uninterestingly inconsistent), so presumably you do mean just simulating the physics/neurology and producing the same behavior. While AFAIK neither explicitly says so in the links, Searle and Pearce both seem to me to believe the latter is possible. (Searle in particular has never, AFAIK, denied that an unconscious Chinese Room would be possible in principle; and by "strong AI" Searle means the possibility of AI with an 'actual mind'/mental states/consciousness, not just generally intelligent behavior.)

Comment author: mattnewport 25 March 2010 06:19:12PM 1 point [-]

so presumably you do mean just simulating the physics/neurology and producing the same behavior.

Yes. Equivalently, is uploading possible with conventional computers?

It seems to me that both Searle and Pearce would answer no to both questions. Pearce in particular seems to be saying that consciousness depends on quantum properties of brains that cannot be simulated by a conventional computer. It appears to me that this is equivalent to a claim that physics is not computable but I'm not totally confident of that equivalence. I have trouble reading any other conclusion from anything in those links. Can you point to a quote that makes you think otherwise?

Comment author: Nick_Tarleton 26 March 2010 01:34:36AM 1 point [-]

It appears to me that this is equivalent to a claim that physics is not computable but I'm not totally confident of that equivalence.

I don't think Pearce or Searle would agree with this, and it sounds like you might be projecting your belief onto them. We already know of philosophers who explicitly endorse the possibility of zombies, so it's not surprising for philosophers to endorse positions that imply the possibility of zombies.

Can you point to a quote that makes you think otherwise?

Afraid not, but I think if they thought physics were uncomputable (in the behavioral-simulation sense) they would say so more explicitly.

Comment author: mattnewport 26 March 2010 04:41:41AM 0 points [-]

I don't think Pearce or Searle would agree with this, and it sounds like you might be projecting your belief onto them.

Way back at the beginning of this thread I was trying to establish whether anybody who calls themselves a materialist actually believes the statement "you can't fully simulate a person without the simulation being conscious" to be false. I still don't feel I have an answer to that question. It seems that bogus might believe that statement to be false but he is frustratingly evasive when it comes to answering any direct questions about what he actually believes. It seems we are not currently in a position to say definitively what Pearce or Searle believe.

The only reason I asked in the first place is that I've tended to assume someone who self-describes as a materialist would also believe that statement to be true. I guess the moral of this thread is that I can't assume that and should ask if I want to know.

Comment author: Mitchell_Porter 26 March 2010 04:57:59AM 2 points [-]

Many people want to draw the line at lookup tables - they don't believe simulation by lookup table would be conscious.

Comment author: Morendil 26 March 2010 08:12:55AM *  1 point [-]

A huge look-up table could always "in principle" provide the innards governing any behavioral regularities whatever, and intuition proclaims that we would not consider anything controlled by such a mere look-up table to have psychological states. (If I discovered that you were in fact controlled by such a giant look-up table, I would conclude that you were not a person at all, but an elaborate phony.) But as Alan Turing recognized when he proposed his notoriously behavioristic imitation game, the Turing Test, this "in principle" possibility is not really a possibility at all. A look-up table larger than the visible universe, accessed at speeds trillions of times in excess of the speed of light, is not a serious possibility, and nothing less than that would suffice. What Turing realized is that for real time responsivity in an unrestricted Turing Test, there is only one seriously conceivable architecture: one that creates its responses locally, on the fly, by processes that systematically uncover the meaning of the inputs, given its previous history, etc., etc.

-- Daniel Dennett (from here)

The point being that GLUTs are faulty intuition pumps, so we cannot use them to bolster our intuition that "something mechanical that passed the Turing Test might nevertheless not be conscious".

It would take a GLUT as large as the universe just to store all possible replies to questions I might ask of it, but it would founder on a simple test: if I were to repeat the same question several times, it would give me the same answer each time. You could push me into a less convenient possible world by arguing that the GLUT responds to minute differences in my tone of voice, etc. - but I could also record myself on tape and play the same tape back N times, and the GLUT would expose itself as such, and therefore fail the test, by sphexishly reciting back its stored lines.

There's no way that I can see of going around this, other than to "extend" the GLUT concept to allow for stored states and conditional branches, at which point we recover Turing completeness. To a programmer, the GLUT concept just isn't credible.
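The objection can be sketched in code (a hypothetical illustration; the table contents and class names are invented): a lookup table keyed only on the interlocutor's most recent utterance necessarily repeats itself under the tape-replay attack, and the moment we add even a single piece of stored state to escape that, we are no longer dealing with a pure lookup table.

```python
# A GLUT keyed only on the last utterance: the same input must always
# map to the same reply, so repeating a question exposes the table.
glut = {"How are you?": "Fine, thanks!"}

def glut_reply(utterance):
    return glut.get(utterance, "I don't follow.")

# Three identical questions, three identical answers -- sphexish recital.
replies = [glut_reply("How are you?") for _ in range(3)]
assert replies == ["Fine, thanks!"] * 3

# Add one piece of mutable state and the replies can vary -- but this is
# exactly the "stored states and conditional branches" extension, and the
# result is an ordinary program, not a lookup table.
class StatefulResponder:
    def __init__(self):
        self.times_asked = 0

    def reply(self, utterance):
        if utterance == "How are you?":
            self.times_asked += 1
            if self.times_asked > 1:
                return "You already asked me that."
            return "Fine, thanks!"
        return "I don't follow."
```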

Comment author: Risto_Saarelma 26 March 2010 09:35:35AM 4 points [-]

Ok, basic confusion here. The GLUT obviously has to be indexed on conversation histories up to the point of the reply, not just the last statement from the interlocutor. Having it only index using the last statement would make it pretty trivially incapable of passing a good Turing test. It follows that since it's still assumed to be a finite table, it can only do conversations up to a given length, say half an hour. Half an hour, on the other hand, should be quite long enough to pass a Turing test, and since we're dealing with crazy scales here, we might just as well make the maximum length of conversation 80 years or something.
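The history-indexed version can be sketched as follows (again a hypothetical illustration with invented table entries): keying the table on the entire conversation history defeats the repeated-question attack, at the cost of a table whose entry count grows exponentially in the conversation length.

```python
# GLUT indexed on the full conversation history (a tuple of utterances),
# not just the last one: identical questions at different points in the
# conversation hit different table entries.
glut = {
    ("How are you?",): "Fine, thanks!",
    ("How are you?", "How are you?"): "You already asked me that.",
    ("How are you?", "How are you?", "How are you?"): "Is your tape stuck?",
}

def glut_reply(history):
    return glut.get(tuple(history), "I don't follow.")

history = []
replies = []
for _ in range(3):
    history.append("How are you?")
    replies.append(glut_reply(history))
# Each repetition now draws a distinct, history-appropriate reply.

# The cost: with a vocabulary of V possible utterances and conversations
# of up to L turns, the table needs sum(V**k for k=1..L) entries --
# exponential in L, hence the "crazy scales" of the thought experiment.
def table_size(vocabulary_size, max_length):
    return sum(vocabulary_size ** k for k in range(1, max_length + 1))
```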

Comment author: Morendil 26 March 2010 02:24:35PM 3 points [-]

Ok, basic confusion here. ...obviously...

Tut, tut. Assuming the confusion you claim to see is mine: you don't get to tell me that my objection to an intuition pump is incoherent, you are required to show that it is incoherent, and it is preferable to avoid lullaby language in such argumentation.

Yes, the question "what is your index" exposes the GLUT as a confused intuition pump. I am at present looking at the Ned Block (1981) paper Psychologism and Behaviorism which (as best I could ascertain) is the original source for the GLUT concept. It makes a similar claim to yours, namely that "for a Turing Test of any given length, the machine could in principle be programmed in just the same way to pass a Turing Test of that length".

But sauce for the goose is sauce for the gander: for a GLUT of any size, there is a Turing Test of sufficient duration that exposes the GLUT as not conscious, by looping back to the start of the conversation! This shows that the argument from a necessarily finite index does have force to counter the GLUT as an intuition pump.

It is flawed in other ways. You can't blame Ned Block who at the time of writing that paper can't have spent a lot of time on IRC, but someone with that experience would tell you that indexing on character strings wouldn't be enough to pass a 1-hour Turing test: the GLUT as originally specified would be vulnerable to timing attacks. It wouldn't be able to spontaneously say something like "You haven't typed anything back to me for thirty minutes, what's wrong?"

"OK", a GLUT advocate might reply, "we can in principle include timings in the index, to whatever timing resolution you are capable of detecting".

It's tempting to grant this "in principle" counter-objection, especially as I don't have the patience to go to the literature and verify that the "timing attack" objection hasn't been raised and countered before.

But the fact that the timing attack wasn't anticipated by Ned Block is precisely what shows up the GLUT concept as a faulty intuition pump. You don't get to "go back to the drawing board" on the GLUT concept each time an attack is found and iteratively improve it until its index has been generalized enough to cover all possible circumstances: that is tantamount to having an actual, live, intelligent human sit behind the keyboard and respond.

Comment author: NedBlock 26 March 2010 09:56:47PM 2 points [-]

Actually the whole idea of the GLUT machine (dubbed the 'blockhead' in Braddon-Mitchell's and Jackson's book, The Philosophy of Mind and Cognition) IS precisely to use live intelligent humans to store an intelligent response to every response a judge might make under a pre-specified limit (including silence and looping, which is discussed explicitly in the paper). The idea is to show that even though the resulting machine has the capacity to emit an intelligent response to any comment within the finite specified limits, it nonetheless has the intelligence of a juke-box. The point is that the intelligent programmers anticipate anything that the "judge" could say in the finite span. The upshot is that the capacity of a machine to pass a Turing Test of a finite length does not entail actual intelligence.

Comment author: Morendil 26 March 2010 02:41:56PM *  0 points [-]

On a different level of objection, I for one would bite the functionalist bullet: something that could talk to me regularly for 80 years, sensibly, who could actually teach me things or occasionally delight me, all the while insisting that it wasn't in fact conscious but merely a GLUT simulating my Aunt Bertha...

Well, I would call that thing conscious in spite of itself.

To simulate Aunt Bertha effectively, and to keep that up for 80 years, it would in all likelihood have to be encoded with Aunt Bertha's memories, Aunt Bertha's wonderful quirks of personality, Aunt Bertha's concerns for my little domestic worries as I gradually moved through my own narrative arc in life, Aunt Bertha's nuggets of wisdom that I would sometimes find deep as the ocean and other times silly relics of a different age, and so on and so forth.

The only difference with Aunt Bertha would be that, when I asked her (not "it") why she thought she answered as she does, she'd tell me, "You know, dear nephew, I don't want to deceive you, for all that I love you: I'm not really your Aunt Bertha, I'm just a GLUT programmed to act like her. But don't fret, dear. You're just an incredibly lucky boy who got handed the jackpot when drawing from the infinite jar of GLUTs. Isn't that nice? Now, about your youngest's allergies..."

Comment author: Risto_Saarelma 26 March 2010 03:09:13PM 0 points [-]

Wasn't an objection to these kinds of GLUTs that you'd basically have to make them by running countless actual, conscious copies of Aunt Bertha and record their incremental responses to each possible conversation chain? So you would be in a sense talking with a real, conscious human, although they might be long dead when you start indexing the table.

Though since each path is just a recording of a live person, it wouldn't agree with being a GLUT unless the Aunt Bertha copies used to build the table would have been briefed earlier about just why they are being locked in a featureless white room and compelled to have conversation with the synthetic voice speaking mostly nonsense syllables at them from the ceiling.

(We can do the "the numbers are already ridiculous, so what the hell" maneuver again here, and replace strings of conversation with the histories of total sensory input Aunt Bertha's mind can have received at each possible point in her life at a reasonable level of digitization, map these to a set of neurochemical outputs to her muscles and other outside-world affecting bits, and get a simulacrum we can put in a body with similar sensory capabilities and have it walking around, probably quite indistinguishable from the genuine, Turing-complete article. Although this would involve putting the considerably larger number of Bertha-copies used to build the GLUT into somewhat more unpleasant situations than being forced to listen to gibberish for ages.)

Comment author: Nick_Tarleton 26 March 2010 03:02:23PM *  0 points [-]

Surely there are multiple possible conscious experiences that could be had by non-GLUT entities with Aunt Bertha's behavior. How would you decide which one to ascribe to the GLUT?

Comment author: JGWeissman 26 March 2010 05:10:19AM 0 points [-]

The lookup tables are not conscious but the process that produced them was.

Comment author: Mitchell_Porter 26 March 2010 05:23:45AM 3 points [-]

What about a randomly generated lookup table that just happens to simulate a person? (They can be found here.)

Comment author: JGWeissman 26 March 2010 05:33:26AM 2 points [-]

That world is more inconvenient than the one where I wake up with my arm replaced by a purple tentacle. Did you even read the article you linked to?

"No, no!" says the philosopher. "In the thought experiment, they aren't randomly generating lots of GLUTs, and then using a conscious algorithm to pick out one GLUT that seems humanlike! I am specifying that, in this thought experiment, they reach into the inconceivably vast GLUT bin, and by pure chance pull out a GLUT that is identical to a human brain's inputs and outputs! There! I've got you cornered now! You can't play Follow-The-Improbability any further!"

Oh. So your specification is the source of the improbability here.

When we play Follow-The-Improbability again, we end up outside the thought experiment, looking at the philosopher.

Comment author: Mitchell_Porter 26 March 2010 05:42:23AM 0 points [-]

Oh. So your specification is the source of the improbability here.

My specification is the reason we are talking about something improbable. It's not the cause of the improbable thing itself.

Comment author: Nick_Tarleton 26 March 2010 05:56:30AM *  0 points [-]

The only reason I asked in the first place is that I've tended to assume someone who self-describes as a materialist would also believe that statement to be true.

I think your prior estimate for other people's philosophical competence and/or similarity to you is way too high.

Comment author: bogus 25 March 2010 06:26:53PM 0 points [-]

quantum properties of brains that cannot be simulated by a conventional computer.

To the best of our knowledge, any "quantum property" can be simulated by a classical computer with approx. exponential slowdown. Obviously, a classical computer is not going to instantiate these quantum properties.
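The exponential slowdown can be illustrated with a toy state-vector simulator (a hypothetical sketch, standard library only): an n-qubit state requires 2**n complex amplitudes, so both the classical memory and the per-gate work scale exponentially with n.

```python
import math

def zero_state(n):
    """State vector of n qubits in |00...0>: 2**n complex amplitudes."""
    state = [0j] * (2 ** n)
    state[0] = 1 + 0j
    return state

def apply_hadamard(state, target, n):
    """Apply a Hadamard gate to qubit `target`; O(2**n) classical work."""
    h = 1 / math.sqrt(2)
    new = list(state)
    for i in range(2 ** n):
        if not (i >> target) & 1:        # index with target bit = 0
            j = i | (1 << target)        # partner index with target bit = 1
            a, b = state[i], state[j]
            new[i] = h * (a + b)
            new[j] = h * (a - b)
    return new

# Hadamard on every qubit of |00...0> yields the uniform superposition:
# all 2**n amplitudes equal 2**(-n/2). Storage is already 1024 amplitudes
# at n = 10; each added qubit doubles it.
n = 10
state = zero_state(n)
for q in range(n):
    state = apply_hadamard(state, q, n)
assert len(state) == 2 ** n
```

The simulation is faithful (amplitudes come out exactly right), which is the point of the comment: simulability at exponential cost is not the same as the classical machine instantiating the quantum properties.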

Comment author: mattnewport 25 March 2010 06:30:28PM 2 points [-]

Obviously, a classical computer is not going to instantiate these quantum properties.

Is that obvious?

Comment author: bogus 25 March 2010 06:50:58PM *  -1 points [-]

It should be. We can definitely build classical computers where quantum effects are negligible.

(For all we know, the individual transistors of these computers might have some subjective experience; but the computer as a whole won't.)

Comment author: mattnewport 25 March 2010 06:59:52PM 1 point [-]

If the Church-Turing-Deutsch thesis is true and some kind of Digital Physics is an accurate depiction of reality then a simulation of physics should be indistinguishable from 'actual' physics. Saying subjective experience would not exist in the simulation under such circumstances would be a particularly bizarre form of dualism.

Comment author: bogus 25 March 2010 07:11:57PM *  -1 points [-]

Saying subjective experience would not exist in the simulation under such circumstances would be a particularly bizarre form of dualism.

The same formal structure will exist, but it will be wholly unrelated to what we mean by "subjective experience". What's dualistic about this claim?