Juno_Watt comments on How sure are you that brain emulations would be conscious? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
What does "like" mean, there? The actual biochemistry, so that pieces of Em could be implanted in a real brain, or just accurate virtualisation, like a really good flight simulator?
Flight simulator, compared to instrumentation of and examination of biology. This by itself retains the possibility that something vital was missed, but then it should show up in the surface correspondences of behavior, and in particular, if it eliminates consciousness, I'd expect what was left of the person to notice that.
This is not intended to undermine your position (since I share it), but this seems like a surprising claim to me. From what I understand of experiments done on biological humans with parts of their brains malfunctioning, there are times when they are completely incapable of recognising the state of their own brain even when it is proved to them convincingly. Since 'consciousness' seems at least somewhat related to the parts of the brain with introspective capabilities, it does not seem implausible that some of the interventions that eliminate consciousness also eliminate the capacity to notice that lack.
Are you making a claim based on knowledge of human neuropsychology that I am not familiar with, or is it a claim based on philosophical reasoning? (Since I haven't spent all that much time analysing the implications of aspects of consciousness, there could well be something I'm missing.)
Fair enough, anosognosia would certainly be a possibility if something did eliminate consciousness. But I would expect severe deficits in writing philosophy papers about consciousness to emerge afterward.
I'd tend to agree, at least with respect to novel or interesting work.
If you'll pardon some academic cynicism, it wouldn't surprise me much if an uploaded, consciousness-redacted tenured professor could go on producing papers that would be accepted by journals. The task of publishing papers differs in certain ways from that of making object-level progress. In fact, it seems likely that a narrow artificial intelligence specifically competent at literary synthesis could make actual valuable progress on human knowledge of this kind without being in the remote ballpark of conscious.
How would you know, or even what would make you think, that it was NOT conscious? Even if it said it wasn't conscious, that would be evidence but not dispositive. After all, there are humans such as James and Ryle who deny consciousness. Perhaps their denial is in a narrow or technical sense, but one would expect a conscious literary synthesis program to be AT LEAST as "odd" as the oddest human being, and so some fairly extensive discussion would need to be carried out with the thing to determine how it is using the terms.
At the simplest level consciousness seems to mean self-consciousness: I know that I exist, you know that you exist. If you were to ask a literary program whether it knew it existed, how could it meaningfully say no? And if it did meaningfully say no, and you loaded it with data about itself (much as you must load it with data about art when you want it to write a book of art criticism or on aesthetics) then it would have to say it knows it exists, as much as it would have to say it knows about "art" when loaded with info to write a book on art.
Ultimately, unless you can tell me how I am wrong, our only evidence of anybody's consciousness but our own is a weak inference: "they are like me, I am conscious deep down, Occam's razor suggests they are too." Sure, the literary program is less like me than my wife is, but it is more like me than a clam is, and it is more like me in some respects (but not overall) than a chimpanzee is. I think you would have to put your confidence that the literary program is conscious somewhere in the neighborhood of your confidence that a chimpanzee is conscious.
I'd examine the credentials and evidence of competence of the narrow AI engineer that created it and consult a few other AI experts and philosophers who are familiar with the particular program design.
Then why require causal isomorphism at the level of synaptic structure in addition to surface correspondence of behaviour?
Because while it's conceivable that an effort to match surface correspondences alone (make something which talked like it was conscious) would succeed for reasons non-isomorphic to the reasons we exhibit those surface behaviors (its cause of talking about consciousness not being isomorphic to our cause), it defies all imagination that an effort to faithfully match synaptic-qua-synapse behaviors would accidentally reproduce talk about consciousness with a different cause. Thus this criterion is entirely sufficient (though perhaps not necessary).
We also speak of surface correspondence, in addition to synaptic correspondence, to verify that some tiny little overlooked property of the synapses wasn't key to high-level surface properties, in which case you'd expect what was left to stop talking about consciousness, or undergo endless epileptic spasms, etc. However, it leaves the realm of things that happen in the real world, and enters the realm of elaborate fears that don't actually happen in real life, to suppose that some tiny overlooked property of the synapses both destroys the original cause of talk about consciousness and substitutes an entirely new, distinct, non-isomorphic cause which reproduces the behavior of talking about consciousness and thinking you're conscious to the limits of inspection, yet does not produce actual consciousness.
For some value of "cause". If you are interested in which synaptic signals cause which reports, then you have guaranteed that the cause will be the same. However, I think what we are interested in is whether reports of experience and self-awareness are caused by experience and self-awareness.
Maybe. But your stipulation of causal isomorphism at the synaptic level guarantees only that there will be minor differences at that level. Since you don't care how the Em's synapses are implemented, there could be major differences at the subsynaptic level... indeed, if your Em is silicon-based, there will be. And if those differences lead to differences in consciousness (which they could, irrespective of the point made above, since they are major differences), those differences won't be reported, because the immediate cause of a report is a synaptic firing, which is guaranteed to be the same!
You have, in short, set up the perfect conditions for zombiehood: a silicon-based Em is different enough from a wetware brain to reasonably have a different form of consciousness, but it can't report such differences, because it is a functional equivalent... it will say that tomatoes are red, whatever it sees!
http://lesswrong.com/lw/p7/zombies_zombies/
http://lesswrong.com/lw/p9/the_generalized_antizombie_principle/
http://lesswrong.com/lw/f1u/causal_reference/
More generally http://wiki.lesswrong.com/wiki/Zombies_(sequence)
The argument against p-zombies is that there is no physical difference that could explain the difference in consciousness. That does not extend to silicon WBEs or AIs.
The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do it if it weren't conscious. It is a corollary of this that a zombie, which is physically identical, and therefore not deliberately programmed to imitate talk of consciousness but must still reproduce it, must talk about consciousness for the same reason we do. That is, the zombies must be conscious.
A faithful synaptic-level silicon WBE, if it independently starts talking about consciousness at all, must be talking about it for the same reason as us (i.e. consciousness), since it hasn't been deliberately programmed to fake consciousness-talk. Or else something extremely unlikely has happened.
Note that supposing that how the synapses are implemented could matter for consciousness, even while the macro-scale behaviour of the brain is identical, is equivalent to supposing that consciousness doesn't actually play any role in our consciousness-talk, since David Chalmers would write just as many papers on the Hard Problem regardless of whether we flipped the "consciousness" bit in every synapse in his brain.
A functional duplicate will talk the same way as whomever it is a duplicate of.
A WBE of a specific person will respond to the same stimuli in the same way as that person. Logically, that will be because it is a duplicate. Physically, the "reason", or ultimate cause, could be quite different, since the WBE is physically different.
It has been programmed to be a functional duplicate of a specific individual.
Something unlikely to happen naturally has happened. A WBE is an artificial construct which is exactly the same as a person in some ways, and radically different in others.
Actually it isn't, for reasons that are widely misunderstood: kidney dialysis machines don't need nephrons, but that doesn't mean nephrons are causally idle in kidneys.
http://lesswrong.com/lw/p9/the_generalized_antizombie_principle/
Why? That doesn't argue any point relevant to this discussion.
Did you read all the way to the dialogue containing this hypothetical?
The following discussion seems very relevant indeed.
Two things. 1) That the same electronic functioning produces consciousness if implemented in biological goo but does not if implemented in silicon seems unlikely; what probability would you assign that this is the meaningful difference? 2) If it is biological goo we need in order to have consciousness, why not build an AI out of biological goo? Why not synthesize neurons, stack and connect them in the appropriate ways, and understand the whole process well enough that either you assemble it working or you know how to start it? It would still be artificial, but made from materials that can produce consciousness when functioning.
1) What seems (un)likely to an individual depends on their assumptions. If you regard consciousness as a form of information processing, then there is very little inferential gap to a conclusion of functionalism or computationalism. But there is a Hard Problem of consciousness precisely because some aspects -- subjective experience, qualia -- don't have any theoretical or practical basis in functionalism or computer technology: we can build memory chips and write storage routines, but we can't even get a start on building emotion chips or writing seeRed().
2) It's not practical at the moment, and wouldn't answer the theoretical questions.
Hmm. I would expect a difference, but... out of interest, how much talk about consciousness do you think is directly caused by it (i.e. non-chat-bot-simulable)?
This comment:
EY to Kawoomba:
Appears to contradict this comment:
EY to Juno_Watt