I believe the word "consciousness" is used in so many confused and conflicting ways that nobody should mention "consciousness" without clarifying what they mean by it. I will substitute your question with "How should we morally value emulations?".
Personally, if an emulation behaved like a human in all respects except for physical presence, I would give them the same respect as I give a human, subject to the following qualifications:
If emulations behave in noticeably different ways from humans, I would seek more information before making judgements.
In particular, according to my current moral intuition, I don't give an argument of the form "This emulation behaves just like a human, but it might not actually be conscious" any weight.
I don't believe emulations should be given voting rights unless there is very careful regulation on how they are created; otherwise manufacturers would have perverse incentives.
Do you in general support regulations on creating things with voting rights, to avoid manufacturers having perverse incentives?
Given an extremely-high-resolution em with verified pointwise causal isomorphism (that is, it has been verified that emulated synaptic compartments are behaving like biological synaptic compartments to the limits of detection) and verified surface correspondence (the person emulated says they can't internally detect any difference) then my probability of consciousness is essentially "top", i.e. I would not bother to think about alternative hypotheses because the probability would be low enough to fall off the radar of things I should think about. Do you spend a lot of time worrying that maybe a brain made out of gold would be conscious even though your biological brain isn't?
and in particular, if it eliminates consciousness, I'd expect what was left of the person to notice that.
This is not intended to undermine your position (since I share it) but this seems like a surprising claim to me. From what I understand of experiments done on biological humans with parts of their brains malfunctioning, there are times where they are completely incapable of recognising the state of their brain even when it is proved to them convincingly. Since 'consciousness' seems at least somewhat related to the parts of the brain with introspective capabilities, it does not seem implausible that some of the interventions that eliminate consciousness also eliminate the capacity to notice that lack.
Are you making a claim based on knowledge of human neuropsychology that I am not familiar with, or is it a claim based on philosophical reasoning? (Since I haven't spent all that much time analysing the implications of aspects of consciousness there could well be something I'm missing.)
Because while it's conceivable that an effort to match surface correspondences alone (make something which talked like it was conscious) would succeed for reasons non-isomorphic to the reasons we exhibit those surface behaviors (its cause of talking about consciousness is not isomorphic to our cause), it defies all imagination that an effort to match synaptic-qua-synapse behaviors faithfully would accidentally reproduce talk about consciousness with a different cause. Thus this criterion is entirely sufficient (perhaps not necessary).
We also speak of surface correspondence, in addition to synaptic correspondence, to verify that some tiny little overlooked property of the synapses wasn't key to high-level surface properties, in which case you'd expect what was left to stop talking about consciousness, or undergo endless epileptic spasms, etc. However it leaves the realm of things that happen in the real world, and enters the realm of elaborate fears that don't actually happen in real life, to suppose that some tiny overlooked property of the synapses both destroys the original cause of talk about consciousness, and substitutes an entirely new distinct and non-isomorphic cause which reproduces the behavior of talking about consciousness and thinking you're conscious to the limits of inspection yet does not produce actual consciousness, etc.
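As a concrete (and entirely hypothetical) illustration of what the verification criterion above might amount to operationally, here is a minimal Python sketch: drive a recorded biological synaptic compartment and its emulated counterpart with the same stimulus, and ask whether the two traces differ by more than an assumed detection limit. The function names, the placeholder dynamics, the noise levels, and the tolerance are all my own illustrative assumptions, not anything proposed in the thread.

```python
import numpy as np

# Hypothetical sketch of the "pointwise causal isomorphism" check described above:
# drive a biological synaptic compartment and its emulated counterpart with the
# same input, then ask whether their responses differ by more than the noise
# floor of our instruments. All names and numbers here are illustrative.

rng = np.random.default_rng(0)

def biological_response(stimulus):
    """Stand-in for a recorded postsynaptic trace (placeholder dynamics plus measurement noise)."""
    return np.tanh(stimulus) + rng.normal(0.0, 0.01, size=stimulus.shape)

def emulated_response(stimulus):
    """Stand-in for the emulation's trace of the same compartment under the same stimulus."""
    return np.tanh(stimulus) + rng.normal(0.0, 0.01, size=stimulus.shape)

def isomorphic_to_detection_limit(stimulus, noise_floor=0.05):
    """True if the biological and emulated traces agree to within the assumed limits of detection."""
    bio = biological_response(stimulus)
    em = emulated_response(stimulus)
    return np.max(np.abs(bio - em)) < noise_floor

stimulus = np.linspace(-2.0, 2.0, 1000)
print("indistinguishable at this detection limit:", isomorphic_to_detection_limit(stimulus))
```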
The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do it if it weren't conscious. It is a corollary of this that a zombie, which is physically identical, and therefore not deliberately programmed to imitate talk of consciousness but must still reproduce it, must talk about consciousness for the same reason we do. That is, the zombies must be conscious.
A faithful synaptic-level silicon WBE, if it independently starts talking about it at all, must be talking about it for the same reason as us (i.e. consciousness), since it hasn't been deliberately programmed to fake consciousness-talk. Or, something extremely unlikely has happened.
Note that supposing that how the synapses are implemented could matter for consciousness, even while the macro-scale behaviour of the brain is identical, is equivalent to supposing that consciousness doesn't actually play any role in our consciousness-talk, since David Chalmers would write just as many papers on the Hard Problem regardless of whether we flipped the "consciousness" bit in every synapse in his brain.
I cannot imagine how moving sodium and potassium ions could lead to consciousness if moving electrons cannot.
In addition, I think consciousness is a gradual process. There is no single point in the development of a human where it suddenly gets conscious, and in the same way, there was no conscious child of two non-conscious parents.
Put me down for having a strong intuition that ems will be conscious. Maybe you know of arguments to the contrary, and I would be interested in reading them if you do, but how could anything the brain does produce consciousness if a functionally equivalent computer emulation of it couldn't? What, do neurons have phlogiston in them or something?
How sure are you that brain emulations would be conscious?
~99%
is there anything we can do now to get clearer on consciousness? Any way to hack away at the edges?
Abandon wrong questions. Leave reductionism doubting to people who are trying to publish papers to get tenure, assuming that particular intellectual backwater still has status potential to exploit.
You realize, of course, that that ~1% chance could be very concerning in certain scenarios? (Apologies in advance if the answer is "yes" and the question feels insulting.)
And, alas, approximately all of the remaining uncertainty is in the form of "my entire epistemology could be broken leaving me no ability to model or evaluate any of the related scenarios".
But that is exactly what wedrifid did, only consciously so. He didn't want to expend the cognitive effort to find the value on a finer-grained scale, so he used a scale with granularity 1%. He knew he couldn't assign 100%, so the only value to pick was 99%. This is how we use numbers all the time, except in certain scientific contexts where we have the rules about significant figures, which behave slightly differently.
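A minimal sketch of that coarse-grained reporting convention (my own illustration, not anything wedrifid specified): round the subjective probability to the nearest percent, but refuse to report either endpoint, since 0% or 100% would claim certainty.

```python
# Tiny illustration of reporting a subjective probability on a coarse 1% grid:
# round to the nearest percent, but never report 100% (or 0%), since those
# would claim certainty. Purely illustrative.

def report_on_percent_scale(p):
    """Round a probability to the nearest 1%, capped away from the endpoints."""
    percent = round(p * 100)
    percent = min(max(percent, 1), 99)  # refuse to report certainty either way
    return percent / 100

print(report_on_percent_scale(0.994))   # 0.99 -- "~99%"
print(report_on_percent_scale(0.9999))  # 0.99 -- still "~99%" on this scale
```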
how sure are you that whole brain emulations would be conscious
Slightly less than I am that you are.
is there anything we can do now to get clearer on consciousness?
Experiments that won't get approved by ethics committees (suicidal volunteers PM me).
Before I tell my suicidal friends to volunteer, I want to make sure that your experimental design is good. What experiment are you proposing?
The post itself and the comments are pretty much in the nature of discussion; as such, I suggest this post be moved to the Discussion section.
Personally, I think there are good arguments for the functionalist view, and the biological view seems problematic: "biological" is a fuzzy, high-level category that doesn't seem like it could be of any fundamental importance.
"Biological" could be taken as a placeholder for the idea that there are very specific, but unknown, bits of physics and chemistry involved in consciousness. That there are specific but known bits of physics and chemistry involved in some things is uncontentious: you can't make a magnet or superconductor out of...
I'm more worried that an upload of me would not be me me than that an upload would not be conscious.
There are still weirder possibilities arising out of some utilitarianisms. Suppose that you count exact copies as distinct people, that is, two copies of you feel twice the pleasure or twice the pain that you feel. Sounds sensible so far. Suppose that you're an em already, and the copies are essentially flat; the very surface of a big silicon die. You could stack the copies flat one atop the other; they still count as distinct people, but can be gradually made identical to a copy running on a computer with thicker wiring and thicker transistors. At which point...
This question requires agreement on a definition of what "consciousness" is. I think many disagreements about "consciousness" would be well served by tabooing the word.
So what is the property that you are unsure WBEs would have? It must be a property that could in principle be measured by an external, objective procedure. "Subjective experience" is just as ill defined as "consciousness".
While waiting for an answer, I will note this:
A successful WBE should exhibit all the externally observable behaviors of a human - ...
Very sure. The biological view just seems to be a tacked-on requirement to reject emulations by definition. Anyone who would hold the biological view should answer the questions in this thought experiment.
A new technology is created to extend the life of the human brain. If any brain cell dies it is immediately replaced with a cybernetic replacement. This cybernetic replacement fully emulates all interactions that it can have with any neighboring cells including any changes in those interactions based on inputs received and time passed, but is not biolo...
Biological theorists of consciousness hold that consciousness is essentially biological and that no nonbiological system can be conscious.
I guess they have some explanation why, I just can't imagine it.
My best attempt is: The fact that the only known form of consciousness is biological is evidence for the hypothesis "consciousness must be biological".
The problem is that it is equally evidence for the hypothesis "consciousness must be human" or "consciousness must be in our Solar system" or even "a conscious being ca...
We know that some complex processes in our own brains happen unaccompanied by qualia. This is uncontroversial. It doesn't seem unlikely to me that all the processes needed to fake perceptual consciousness convincingly could be implemented using a combination of such processes. I don't know what causes qualia in my brain and so I'm not certain it would be captured by the emulation in question-- for example, the emulation might not be at a high enough level of detail, might not exploit quantum mechanics in the appropriate way, or whatever. Fading and dancing...
Would whole brain emulations be conscious?
First, is this question meaningful? (eliminativists or others who think the OP makes an invalid assumption should probably say 'No' here, if they respond at all) [pollid:546]
If yes, what is your probability assignment? (read this as being conditioned on a yes to the above question - i.e: if there was uncertainty in your answer, don't factor it in to your answer to this question) [pollid:547]
And lastly, what is the probability that a randomly selected normally functioning human (not sleeping, no neurological ...
I have no a priori reason why the same software would be more likely to be conscious running on one set of hardware than another. Any a posteriori reason I could think of that I am conscious would be thought of by an em running on different hardware, and would be just as valid. As such, I can be no more sure that I am conscious on my current hardware than on any other hardware.
how sure are you that whole brain emulations would be conscious
The question is ill-formulated; everything depends on how we resolve this point from the LessWrong WBE wiki:
The exact level of detail required for an accurate simulation of a brain's mind is presently uncertain
For example, one "level of detail" hypothesis would state that everything about the process matters, right down to the detailed shapes of electrical fields in synapses, say. Which would probably require building synapses out of organic matter. Which brings us right back...
Any WBE could in theory be simulated by a mathematical function (as far as I can see). So what I really want to know is: can a mathematical function experience qualia? (and/or consciousness) By experience I mean that whatever experiencing qualia is to us it would have something directly analogous (e.g. if qualia is an illusion then the function has the same sort of illusion).
Conscious functions possible? Currently I'm leaning towards yes. If true, to me the implication would be that the "me" in my head is not my neurons, but the information encoded therein.
So that the result is a nonsentient optimizer that goes around making genuine discoveries, but the discoveries are not savored and enjoyed, because there is no one there to do so.
I'm not sure how to distinguish this from a person who goes around making discoveries but is too busy to savor and enjoy anything.
The relevant notion of consciousness we are concerned with is technically called phenomenal experience. Whole Brain Emulations will necessarily leave out some of the physical details, which means the brain processes will not unfold in exactly the same manner as in biological brains. Therefore a WBE will have different consciousness (i.e. qualitatively different experiences), although very similar to the corresponding human consciousness. I expect we will learn more about consciousness to address the broader and more interesting issue of what kinds and degrees of consciousness are possible.
I find it helps to break down the category of 'consciousness.' What is it that one is saying when one says that "Consciousness is essentially biological"? Here it's important to be careful: there are philosophers who gerrymander categories. We can start by pointing to human beings, as we take human beings to be conscious, but obviously we aren't pointing at every human attribute. (For instance, having 23 pairs of chromosomes isn't a characteristic we are pointing at.) We have to be careful that when we point at an attribute, we are actu...
Another hope would be that if we can get all the other problems in Friendly AI right, we'll be able to trust the AI to solve consciousness for us.
That's not a hope. It's appealing to a magic genie to solve our problems. We really have to get out of the habit of doing that, or we'll never get anything done.
How sure are you that brain emulations would be conscious?
I don't know and I don't care. But if you were to ask me, "how sure are you that whole brain emulations would be at least as interesting to correspond with as biological humans on Less Wrong", I'd say, "almost certain". If you were to ask me the follow-up question, "and therefore, should we grant them the same rights we grant biological humans", I'd also say "yes", though with a lower certainty, maybe somewhere around 0.9. There's a non-trivial chance that the emergence of non-biological humans would cause us to radically re-examine our notions of morality.
My initial response to the question in the title, without reading the article or any other comments:
About as sure as I am that other humans are conscious, which is to say ~98% (I tend to flinch away from thinking I'm the only conscious person in existence, but all I have to go on is that so far as I know, we're all using extremely similar hardware and most people say they're conscious, so they probably are).
The trouble is that this is an outside view; I haven't the faintest idea what the inside view would be like. If a small portion of my brain was replac...
To my mind all such questions are related to arguments about solipsism, i.e. the notion that even other humans don't, or may not, have minds/consciousness/qualia. The basic argument is that I can only see behavior (not mind) in anyone other than myself. Most everyone rejects solipsism, but I don't know if there actually are many very good arguments against it, except that it is morally unappealing (if anyone knows of any please point them out). I think the same questions hold regarding emulations, only even more so (at least with other humans we know th...
More sure than I am that the original human was conscious. (Reasoning: going through the process might remove misconceptions about it not working, thus increasing self-awareness past some threshold. This utterly dominates any probability of the uploading process going wrong.)
I can too easily imagine difficult to test possibilities like "brains need to be warm, high-entropy, and non-reversible to collect reality fluid. Cold, energy efficient brains possess little reality fluid" for me to be confident that ems are conscious (or as conscious) as humans.
At least one potential approach to defining consciousness is clear: build up faithful simulated nerve cell structures, from a single cell up, and observe what happens in a simulation vs biology. Eventually something similar to consciousness will likely emerge (and ask you to please stop the torture).
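As a deliberately crude sketch of the "single cell up" starting point described above (nothing like a faithful emulation, and every parameter value is illustrative rather than measured), a leaky integrate-and-fire neuron in Python shows the basic shape of the comparison: simulate one cell, then check its output against what the biological cell does under the same input.

```python
import numpy as np

# A deliberately crude starting point for the "single cell up" program sketched
# above: a leaky integrate-and-fire neuron. A faithful emulation would need far
# more biophysical detail; this only illustrates the shape of the comparison
# (simulate a cell, then compare its spike train against a biological recording).
# All parameter values are illustrative, not measured.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
    """Return the membrane potential trace (mV) and spike times (ms) for an input current trace (nA)."""
    v = v_rest
    spikes, trace = [], []
    for step, current in enumerate(input_current):
        # Leaky integration toward rest, driven by the injected current.
        dv = (-(v - v_rest) + resistance * current) * (dt / tau)
        v += dv
        if v >= v_threshold:
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

current = np.full(10000, 2.0)  # constant 2 nA for 1 second of simulated time
trace, spikes = simulate_lif(current)
print(f"{len(spikes)} spikes in 1 s of simulated time")
```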
So that the result is a nonsentient optimizer that goes around making genuine discoveries, but the discoveries are not savored and enjoyed, because there is no one there to do so.
- Eliezer Yudkowsky, "Value is Fragile"
I had meant to try to write a long post for LessWrong on consciousness, but I'm getting stuck on it, partly because I'm not sure how well I know my audience here. So instead, I'm writing a short post, with my main purpose being just to informally poll the LessWrong community on one question: how sure are you that whole brain emulations would be conscious?
There's actually a fair amount of philosophical literature about issues in this vicinity; David Chalmers' paper "The Singularity: A Philosophical Analysis" has a good introduction to the debate in section 9, including some relevant terminology:
So, on the functionalist view, emulations would be conscious, while on the biological view, they would not be.
Personally, I think there are good arguments for the functionalist view, and the biological view seems problematic: "biological" is a fuzzy, high-level category that doesn't seem like it could be of any fundamental importance. So probably emulations will be conscious--but I'm not too sure of that. Consciousness confuses me a great deal, and seems to confuse other people a great deal, and because of that I'd caution against being too sure of much of anything about consciousness. I'm worried not so much that the biological view will turn out to be right, but that the truth might be some third option no one has thought of, which might or might not entail emulations are conscious.
Uncertainty about whether emulations would be conscious is potentially of great practical concern. I don't think it's much of an argument against uploading-as-life-extension; better to probably survive as an upload than do nothing and die for sure. But it's worrisome if you think about the possibility, say, of an intended-to-be-Friendly AI deciding we'd all be better off if we were forcibly uploaded (or persuaded, using its superhuman intelligence, to "voluntarily" upload...). Uncertainty about whether emulations would be conscious also makes Robin Hanson's "em revolution" scenario less appealing.
For a long time, I've vaguely hoped that advances in neuroscience and cognitive science would lead to unraveling the problem of consciousness. Perhaps working on creating the first emulations would do the trick. But this is only a vague hope, I have no clear idea of how that could possibly happen. Another hope would be that if we can get all the other problems in Friendly AI right, we'll be able to trust the AI to solve consciousness for us. But with our present understanding of consciousness, can we really be sure that would be the case?
That leads me to my second question for the LessWrong community: is there anything we can do now to get clearer on consciousness? Any way to hack away at the edges?