Eliezer_Yudkowsky comments on How sure are you that brain emulations would be conscious? - Less Wrong

Post author: ChrisHallquist 26 August 2013 06:21AM

Comment author: Eliezer_Yudkowsky 25 August 2013 07:38:44PM 4 points [-]

Flight simulator, compared to instrumentation of and examination of biology. This by itself retains the possibility that something vital was missed, but then it should show up in the surface correspondences of behavior, and in particular, if it eliminates consciousness, I'd expect what was left of the person to notice that.

Comment author: wedrifid 26 August 2013 04:49:00AM *  6 points [-]

and in particular, if it eliminates consciousness, I'd expect what was left of the person to notice that.

This is not intended to undermine your position (since I share it), but this seems like a surprising claim to me. From what I understand of experiments done on biological humans with parts of their brains malfunctioning, there are times when they are completely incapable of recognising the state of their brain, even when it is proved to them convincingly. Since 'consciousness' seems at least somewhat related to the parts of the brain with introspective capabilities, it does not seem implausible that some of the interventions that eliminate consciousness also eliminate the capacity to notice that lack.

Are you making a claim based on knowledge of human neuropsychology that I am not familiar with, or is it a claim based on philosophical reasoning? (Since I haven't spent all that much time analysing the implications of aspects of consciousness, there could well be something I'm missing.)

Comment author: Eliezer_Yudkowsky 26 August 2013 07:29:04AM 6 points [-]

Fair enough, anosognosia would certainly be a possibility if something did eliminate consciousness. But I would expect severe deficits in writing philosophy papers about consciousness to emerge afterward.

Comment author: wedrifid 26 August 2013 08:52:14AM 2 points [-]

Fair enough, anosognosia would certainly be a possibility if something did eliminate consciousness. But I would expect severe deficits in writing philosophy papers about consciousness to emerge afterward.

I'd tend to agree, at least with respect to novel or interesting work.

If you'll pardon some academic cynicism, it wouldn't surprise me much if an uploaded, consciousness-redacted tenured professor could go on producing papers that would be accepted by journals. The task of publishing papers differs in certain ways from that of making object-level progress. In fact, it seems likely that a narrow artificial intelligence specifically competent at literary synthesis could make actual valuable progress on human knowledge of this kind without being in the remote ballpark of conscious.

Comment author: mwengler 28 August 2013 06:40:18PM 0 points [-]

In fact, it seems likely that a narrow artificial intelligence specifically competent at literary synthesis could make actual valuable progress on human knowledge of this kind without being in the remote ballpark of conscious

How would you know, or even what would make you think, that it was NOT conscious? Even if it said it wasn't conscious, that would be evidence but not dispositive. After all, there are humans such as James and Ryle who deny consciousness. Perhaps their denial is in a narrow or technical sense, but one would expect a conscious literary synthesis program to be AT LEAST as "odd" as the oddest human being, and so some fairly extensive discussion would need to be carried out with the thing to determine how it is using the terms.

At the simplest level consciousness seems to mean self-consciousness: I know that I exist, you know that you exist. If you were to ask a literary program whether it knew it existed, how could it meaningfully say no? And if it did meaningfully say no, and you loaded it with data about itself (much as you must load it with data about art when you want it to write a book of art criticism or on aesthetics) then it would have to say it knows it exists, as much as it would have to say it knows about "art" when loaded with info to write a book on art.

Ultimately, unless you can tell me how I am wrong, our only evidence of any consciousness but our own is a weak inference that "they are like me, I am conscious deep down, Occam's razor suggests they are too." Sure, the literary program is less like me than is my wife, but it is more like me than a clam is like me, and it is more like me in some respects (but not overall) than is a chimpanzee. I think you would have to put your confidence that the literary program is conscious at something in the neighborhood of your confidence that a chimpanzee is conscious.

Comment author: wedrifid 01 September 2013 05:42:17AM 0 points [-]

How would you know, or even what would make you think, that it was NOT conscious?

I'd examine the credentials and evidence of competence of the narrow AI engineer that created it and consult a few other AI experts and philosophers who are familiar with the particular program design.

Comment author: Juno_Watt 25 August 2013 07:46:16PM 1 point [-]

Then why require causal isomorphism at the level of synaptic structure in addition to surface correspondence of behaviour?

Comment author: Eliezer_Yudkowsky 25 August 2013 08:46:14PM 17 points [-]

Because while it's conceivable that an effort to match surface correspondences alone (make something which talked like it was conscious) would succeed for reasons non-isomorphic to those why we exhibit those surface behaviors (its cause of talking about consciousness is not isomorphic to our cause), it defies all imagination that an effort to match synaptic-qua-synapse behaviors faithfully would accidentally reproduce talk about consciousness with a different cause. Thus this criterion is entirely sufficient (perhaps not necessary).

We also speak of surface correspondence, in addition to synaptic correspondence, to verify that some tiny little overlooked property of the synapses wasn't key to high-level surface properties, in which case you'd expect what was left to stop talking about consciousness, or undergo endless epileptic spasms, etc. However it leaves the realm of things that happen in the real world, and enters the realm of elaborate fears that don't actually happen in real life, to suppose that some tiny overlooked property of the synapses both destroys the original cause of talk about consciousness, and substitutes an entirely new distinct and non-isomorphic cause which reproduces the behavior of talking about consciousness and thinking you're conscious to the limits of inspection yet does not produce actual consciousness, etc.

Comment author: Juno_Watt 25 August 2013 10:49:30PM *  1 point [-]

Because while it's conceivable that an effort to match surface correspondences alone (make something which talked like it was conscious) would succeed for reasons non-isomorphic to those why we exhibit those surface behaviors (its cause of talking about consciousness is not isomorphic to our cause), it defies all imagination that an effort to match synaptic-qua-synapse behaviors faithfully would accidentally reproduce talk about consciousness with a different cause.

For some value of "cause". If you are interested in which synaptic signals cause which reports, then you have guaranteed that the cause will be the same. However, I think what we are interested in is whether reports of experience and self-awareness are caused by experience and self-awareness.

We also speak of surface correspondence, in addition to synaptic correspondence, to verify that some ...

However it leaves the realm of things that happen in the real world, and enters the realm of elaborate fears that don't actually happen in real life, to suppose that some tiny overlooked property of the synapses both destroys the original cause of talk about consciousness, and substitutes an entirely new distinct and non-isomorphic cause which reproduces the behavior of talking about consciousness and thinking you're conscious to the limits of inspection yet does not produce actual consciousness, etc.

Maybe. But your stipulation of causal isomorphism at the synaptic level only guarantees that there will be at most minor differences at that level. Since you don't care how the Em's synapses are implemented, there could be major differences at the subsynaptic level ... indeed, if your Em is silicon-based, there will be. And if those differences lead to differences in consciousness (which they could, irrespective of the point made above, since they are major differences), those differences won't be reported, because the immediate cause of a report is a synaptic firing, which will be guaranteed to be the same!

You have, in short, set up the perfect conditions for zombiehood: a silicon-based Em is different enough from a wetware brain to reasonably have a different form of consciousness, but it can't report such differences, because it is a functional equivalent ... it will say that tomatoes are red, whatever it sees!

Comment author: Eliezer_Yudkowsky 25 August 2013 11:09:50PM 5 points [-]
Comment author: Juno_Watt 25 August 2013 11:23:22PM 5 points [-]

The argument against p-zombies is that there is no physical difference that could explain the difference in consciousness. That does not extend to silicon WBEs or AIs.

Comment author: nshepperd 26 August 2013 12:21:23AM 10 points [-]

The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do it if it weren't conscious. It is a corollary of this that a zombie, which is physically identical, and therefore not deliberately programmed to imitate talk of consciousness but must still reproduce it, must talk about consciousness for the same reason we do. That is, the zombies must be conscious.

A faithful synaptic-level silicon WBE, if it independently starts talking about it at all, must be talking about it for the same reason as us (i.e. consciousness), since it hasn't been deliberately programmed to fake consciousness-talk. Or, something extremely unlikely has happened.

Note that supposing that how the synapses are implemented could matter for consciousness, even while the macro-scale behaviour of the brain is identical, is equivalent to supposing that consciousness doesn't actually play any role in our consciousness-talk, since David Chalmers would write just as many papers on the Hard Problem regardless of whether we flipped the "consciousness" bit in every synapse in his brain.

Comment author: Juno_Watt 26 August 2013 01:02:46AM *  -2 points [-]

The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do it if it weren't conscious.

A functional duplicate will talk the same way as whomever it is a duplicate of.

A faithful synaptic-level silicon WBE, if it independently starts talking about it at all, must be talking about it for the same reason as us (i.e. consciousness),

A WBE of a specific person will respond to the same stimuli in the same way as that person. Logically, that will be for the reason that it is a duplicate. Physically, the "reason", or ultimate cause, could be quite different, since the WBE is physically different.

since it hasn't been deliberately programmed to fake consciousness-talk.

It has been programmed to be a functional duplicate of a specific individual.

Or, something extremely unlikely has happened.

Something unlikely to happen naturally has happened. A WBE is an artificial construct which is exactly the same as a person in some ways, and radically different in others.

Note that supposing that how the synapses are implemented could matter for consciousness, even while the macro-scale behaviour of the brain is identical, is equivalent to supposing that consciousness doesn't actually play any role in our consciousness-talk,

Actually it isn't, for reasons that are widely misunderstood: kidney dialysis machines don't need nephrons, but that doesn't mean nephrons are causally idle in kidneys.

Comment author: Eliezer_Yudkowsky 26 August 2013 01:03:57AM 3 points [-]
Comment author: Juno_Watt 26 August 2013 01:24:29AM -1 points [-]

Why? That doesn't argue any point relevant to this discussion.

Comment author: ESRogs 26 August 2013 03:54:24AM 4 points [-]

Did you read all the way to the dialogue containing this hypothetical?

Albert: "Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules."

The following discussion seems very relevant indeed.

Comment author: Juno_Watt 26 August 2013 01:07:58PM 1 point [-]

I don't see anything very new here.

Charles: "Uh-uh! Your operation certainly did disturb the true cause of my talking about consciousness. It substituted a different cause in its place, the robots. Now, just because that new cause also happens to be conscious—talks about consciousness for the same generalized reason—doesn't mean it's the same cause that was originally there."

Albert: "But I wouldn't even have to tell you about the robot operation. You wouldn't notice. If you think, going on introspective evidence, that you are in an important sense "the same person" that you were five minutes ago, and I do something to you that doesn't change the introspective evidence available to you, then your conclusion that you are the same person that you were five minutes ago should be equally justified. Doesn't the Generalized Anti-Zombie Principle say that if I do something to you that alters your consciousness, let alone makes you a completely different person, then you ought to notice somehow?"

How does Albert know that Charles's consciousness hasn't changed? It could have changed because of the replacement of protoplasm by silicon. And Charles won't report the change, because of the functional equivalence of the change.

Charles: "Introspection isn't perfect. Lots of stuff goes on inside my brain that I don't notice."

If Charles's qualia have changed, that will be noticeable to Charles -- introspection is hardly necessary, since the external world will look different! But Charles won't report the change. "Introspection" is being used ambiguously here, between what is noticed and what is reported.

Albert: "Yeah, and I can detect the switch flipping! You're detecting something that doesn't make a noticeable difference to the true cause of your talk about consciousness and personal identity. And the proof is, you'll talk just the same way afterward."

Albert's comment is a non sequitur. That the same effect occurs does not prove that the same cause occurs; there can be multiple causes of reports like "I see red". Because the neural substitution preserves functional equivalence, Charles will report the same qualia whether or not he still has them.

Comment author: mwengler 28 August 2013 06:46:09PM 1 point [-]

The argument against p-zombies is that there is no physical difference that could explain the difference in consciousness. That does not extend to silicon WBEs or AIs

Two things. 1) That the same electronic functioning produces consciousness if implemented on biological goo but does not if implemented on silicon seems unlikely; what probability would you assign that this is the meaningful difference? 2) If it is biological goo we need to have consciousness, why not build an AI out of biological goo? Why not synthesize neurons, stack and connect them in the appropriate ways, and understand the whole process well enough that either you assemble it working or you know how to start it? It would still be artificial, but made from materials that can produce consciousness when functioning.

Comment author: Juno_Watt 08 September 2013 10:20:08AM 0 points [-]

1) What seems (un)likely to an individual depends on their assumptions. If you regard consciousness as a form of information processing, then there is very little inferential gap to a conclusion of functionalism or computationalism. But there is a Hard Problem of consciousness precisely because some aspects -- subjective experience, qualia -- don't have any theoretical or practical basis in functionalism or computer technology: we can build memory chips and write storage routines, but we can't even get a start on building emotion chips or writing seeRed().

2) It's not practical at the moment, and wouldn't answer the theoretical questions.

Comment author: MugaSofer 26 August 2013 05:48:19PM -1 points [-]

We also speak of surface correspondence, in addition to synaptic correspondence, to verify that some tiny little overlooked property of the synapses wasn't key to high-level surface properties, in which case you'd expect what was left to stop talking about consciousness, or undergo endless epileptic spasms, etc.

Hmm. I would expect a difference, but ... out of interest, how much talk about consciousness do you think is directly caused by it (i.e. non-chat-bot-simulable)?

Comment author: Juno_Watt 25 August 2013 08:19:45PM *  0 points [-]

This comment:

EY to Kawoomba:

This by itself retains the possibility that something vital was missed, but then it should show up in the surface correspondences of behavior, and in particular, if it eliminates consciousness, I'd expect what was left of the person to notice that.

Appears to contradict this comment:

EY to Juno_Watt:

Since whole brains are not repeatable, verifying behavioral isomorphism with a target would require a small enough target that its internal interactions were repeatable. (Then, having verified the isomorphism, you tile it across the whole brain.)