Epiphany comments on Open Thread, October 1-15, 2012 - Less Wrong Discussion
Virtualization. I think if you are virtualized (uploaded to a computer, or copied into a new brain), you still die. I keep running into people on here who seem to think that if you copy someone, this prevents them from dying. It seems that I am in the minority on this one. Am I? Has this been thoroughly debated before? I would like to start a discussion on this. Good idea / bad idea tips on presentation?
I think the LW consensus is that the copy is also you, and personal identity as we think of it today will have to undergo significant change once uploads and copies become a thing.
Contemporary people are more or less completely bamboozled by the whole topic of minds, brains, and computers. It's like in the early days of language, when some people thought that reality was created by a divine breath speaking the true names of things, or that the alphabet existed before the universe alongside God, and so on. Language was the original information technology that was made into an idol and treated like magic because it seemed like magic. The current attitudes to computers and computation are analogous, except that we really can culture neurons and simulate them, so we are going to be creating hybrid entities even more novel, in evolutionary terms, than a primate with a verbalizing stream of consciousness (which was a hybrid of biology and language).
What is the computational paradigm of mind? Often this paradigm floats free of any material description at all, focusing solely on algorithms and information. But if we ask for a physical description of computation, it is as follows: There is an intricate physical object - a brain, a computer. Mostly it is scaffolding. There are also non-computational processes happening in it - blood circulating, fan spinning. But among all the physical events which happen inside this object, there are special localized events which are the elementary computations. A wave of depolarization travels along a cell membrane. The electrons in a transistor rearrange themselves in response to small voltages. In the intricate physical object, billions of these special events occur, in intricate trains of cause and effect. The computational paradigm of mind is that thought, self, experience, identity are all, in some sense, nothing but the pattern of these events.
These days it is commonly acknowledged that this supposed identity is somewhat mysterious or unobvious. I would go much further and say that almost everything that is believed and said about this topic is wrong, just like the language mysticism of an earlier age, but it has a hold on people's minds because the facts seem so obvious and they don't have any other way of conceiving of their own relationship to those facts. Yes, it's mysterious that mere ink on a page has such power over our minds and such practical utility, but the reality of that power and that utility are self-evident, therefore, in the beginning was the word, and the word was with God, and the word was God. Yes, it's mysterious that a billion separate little events of particles in motion could feel like being a person and being alive, but we know that the brain is made of neural circuitry and that we could in principle simulate it on any computing mechanism, therefore you are a program in your brain, and if we ran that program somewhere new, you would live again.
People try with varying degrees of self-awareness and epistemic modesty to be rational about their beliefs here, but mostly it's the equivalent of different schools of language mysticism, clashing over whether the meaning-essence only inhabits the voice, or whether it can be found in the written word too. In my estimation, what people say about consciousness, uploads, and personal identity, is similarly far from the reality of how anything works and of what we really are.
If we ever extend human understanding far enough to grasp the truth, it's going to be something bizarre - that you are a perspective vortex in your cortical quantum fields, something like that, something strange and hardly expressible with our current concepts. And meanwhile, we continue to develop our abilities to analyze the brain materially, to shape it and modify it, and to make computer hardware and software. Those abilities are like riding a bicycle, we can pick them up without really knowing what we are doing or why it works, and we're in a hurry to use those abilities too.
So most likely, that biolinguistic hybrid, the primate who thinks in words, is going to create its evolutionary successor without really understanding what it's doing, and perhaps even while it is possessed with a false understanding of what it is doing, a fundamentally untrue image of reality. That's what I see at work in these discussions of mind uploading and artificial intelligence: computational superstition coupled to material power. The power means that something will be done, this isn't just talk, there will be new beings; but the superstition means that there will be a false image of what is happening as it happens.
If you use the concepts of "dying" or "personal identity" in this context, you risk committing the noncentral fallacy, since uploading is an atypical case of their application, and their standard properties won't automatically carry over.
For example, concluding that an instance of you "actually dies" when there is also a recent copy doesn't necessarily imply that something bad took place, since even if you do in some sense decide that this event is an example of the concept of "dying", this is such an atypical example that its membership in that concept provides only very weak evidence for sharing the property of being bad with the more typical examples. Locating this example in the standard concepts is both difficult and useless, a wrong question.
The only way out seems to be to taboo the ideas of "dying", "personal identity", etc., and fall back on the arguments that show in what way typical dying is bad, and non-dying is good, by generalizing these arguments about badness of typical destruction of a person to badness of the less typical destruction of a copy, and goodness of not destroying a person to goodness of having a spare copy when another copy is destroyed.
It seems to me that the valuable things about a living person (we've tabooed the "essence of personal identity", and are only talking about value) are all about their abstract properties, their mind, their algorithm of cognition, and not about the low-level details of how these abstract properties are implemented. Since destruction of a copied person preserves these properties (implemented in the copy), the value implemented by them is retained. Similarly, one of the bad things about typical dying (apart from the loss of a mind discussed above) seems to be the event of terminating a mind. To the extent this event is bad in itself, copying and later destroying the original will be bad. If this is so, destructive uploading will be better than uploading followed by destruction of the conscious original, but possibly worse than pure copying without any destruction.
Almost everybody starts with the intuitive notion that uploading will kill the "real you". The discussion seems to have been treading the same ground since at least the 1990s, so I don't really expect anything new to come out of yet another armchair rehash.
Chapters 9 and 10 in David Chalmers' singularity paper are a reasonably good overview of the discussion. Chalmers ends up finding both stances convincing given different setups for a thought experiment, and remains puzzled about the question.
Really? I started with the assumption that uploading wouldn't necessarily be destructive, but people chose to discuss destructive uploading because it simplifies some of the philosophical questions. On second thought, there may also be a bias from science fiction, where promising developments are likely to have a horrific downside.
Yeah, assuming some sort of destructive upload in my comment there, naturally. My assumptions for the initial stance most people will have for the various scenarios are basically:
Non-destructive upload, the initial person remains intact, an exact upload copy is made: The "real you" is the original human, all that matters is whether real you lives or dies.
Destructive upload, the initial person gets knocked out and ground to pieces to make the exact upload copy: "Real you" dies from being ground to pieces, end of story.
Moravec transfer, the initial person's brain gets converted to a machine substrate one neuron at a time: People other than John Searle seem to be OK with personal continuity remaining in this scenario.
Also, embracing the possibility of nondestructive uploads requires us to think about our identities as potentially non-uniquely instantiated, which for a lot of people is emotionally challenging.
Questions to consider: Would you feel the same way about using a Star Trek transporter? What if you replaced neurons with computer chips one at a time over a long period instead of the entire brain at once? Is everyone in a constant state of "death" as the proteins that make up their brain degrade and get replaced?
The million dollar question: Do I stop experiencing?
If I were to be disassembled by a Star Trek transporter, I'd stop experiencing. That's death. If some other particles elsewhere are reassembled in my pattern, that's not me. That's a copy of me. Yes, I think a Star Trek transporter would kill me. Consider this: If it can assemble a new copy of me, it is essentially a copier. Why is it deleting the original version? That's a murderous copier.
I remember researching whether the brain's cells are replaced over the course of one's life, and I believe the answer is no. I forget where I read that, so I can't cite it, but because of that, I'm not going to operate from the assumption that all of the cells in my brain are replaced over time.
However, if one brain cell were replaced in such a way that the new cell became part of me, and I did not notice the switch, my experiencing would continue, so that wouldn't be death. Even if that happened 100,000,000,000 times (or however many times would equate to a complete replacement of my brain cells) that wouldn't stop me from experiencing. Therefore, it's not a death - it's a transformation.
If my brain cells were transformed over time into upgraded versions, so long as my experience did not end, it would not be death. Though, it could be said to be a transformation - the old me no longer exists. Epiphany 2012 is not the same as Epiphany 1985 because I was a child then, but my neural connections are completely different now and I didn't experience that as death. Epiphany 2040 will be completely different from Epiphany 2012 in any case, just because I aged. If I decide to become a transhuman and the reason I am different at that time is because I've had my brain cells replaced one at a time in order to experience the transformation and result of it, then I have merely changed, not died.
It could be argued that if the previous you no longer exists, you're dead, but the me that I was when I was two years old or ten years old or the me I was when I was a zygote no longer exists - yet I am not dead. So the arguer would have to distinguish an intentional transformation from a natural one in a way that sets it apart as having some important element in common with death. All of my brain cells would be gone, in that scenario, but I'd say that's not a property of death, just a cause of death, and that not everything that could cause death always will cause death. Also, it is possible to replace brain cells as they die, in which case, the more appropriate perspective is that I was being continued, not replaced. Doing it that way would be a prevention of death, not a cause of death. I would not technically be human afterward, but my experience would continue, and the pattern known as me would continue (it is assumed that this pattern will transform in any case, so I don't see the transformation of the pattern as a definite loss - I'd only see it that way if I were damaged) so I would not consider it a death.
The litmus test question is not "Would the copy of me continue experiencing as if nothing had happened?" The litmus test question is "Will I, the original, continue experiencing?"
Here are two more clarifying questions:
Imagine there's a copy of you. You are not experiencing what the copy is experiencing. Its consciousness is inaccessible to you the same way that a twin's consciousness would be. Now they want to disassemble you because there is a copy. Is that murder?
Imagine there's a copy of you. You've been connected to it via a wireless implant in your head. You experience everything it experiences. Now they want to disassemble you and let the copy take over. If all the particles in your head are disassembled except for the wireless implant, will you continue experiencing what it experiences, or stop experiencing altogether?
I used to think this way. I stopped thinking this way when I realized that there are discontinuities in consciousness even in bog-standard meat bodies -- about one a day at minimum, and possibly more since no one I'm aware of has conclusively established that subjective conscious experience is continuous. (It feels continuous, but your Star Trek transporter-clone would feel continuity as well -- and I certainly don't have a subjective record of every distinct microinstant.)
These are accompanied by changes in physical and neurological state as well (not as dramatic as complete disassembly or mind uploading, but nonzero), and I can't point to a threshold where a change in physical state necessitates subjective death. I can't even demonstrate that subjective death is a coherent concept. Since all the ways I can think of to get around this require ascribing some pretty sketchy nonphysical properties to the organization of matter that makes up your body, I'm forced to assume in the absence of further evidence that there's nothing in particular that privileges one discontinuity in consciousness over another. Which is an existentially frightening idea, but what can one do about it?
(SMBC touched on this once, too.)
What do you mean by discontinuities? I have not heard about this.
Sleep, total anesthesia, getting knocked on the head in the right way, possibly things like zoning out. Any time your subjective experience stops for a while.
Actually, I expect that our normal waking experience is also discontinuous, in much the same sense that our perception of our visual field is massively discontinuous. Human consciousness is not a plenum.
Yeah, I was trying to get at that with the parenthetical bit in my first paragraph. Could probably have been a bit more explicit.
OK, are you saying that temporarily going unconscious is the same as permanently going unconscious?
Would you assert that because we temporarily go unconscious that permanent unconsciousness is not death?
Temporarily going unconscious is not the same as permanently going unconscious.
Whether we temporarily go unconscious or not does not entail permanent unconsciousness being or not being death.
Now, some questions of mine: you said "If I were to be disassembled by a Star Trek transporter, I'd stop experiencing. That's death."
When you fall asleep, do you stop experiencing?
If so, is that death?
If it isn't death, is it possible that other things that involve stopping experiencing, like the transporter, are also not death?
We need to focus on the word "I" to see my point. I'm going to switch that out with something else to highlight this difference. For the original, I will use the word "Dave". As tempting as it is to use "TheOtherDave" for the copy, I am going to use something completely different. I'll use "Bob". And for our control, I will use myself, Epiphany.
Epiphany takes a nap. Her brain is still active but it's not conscious.
Dave decides to use a teleporter. He stands inside and presses the button.
The teleporter scans him and constructs a copy of him on a space ship a mile away.
The copy of Dave is called Bob.
The teleporter checks Bob before deleting Dave, to make sure the copy was made successfully.
Dave still exists, for a fraction of a second, just after Bob is created.
Both of them COULD go on existing, if the teleporter does not delete Dave. However, Dave is under the impression that he will become Bob once Bob exists. This isn't true - Bob is having a separate set of experiences. Dave doesn't get a chance to notice this because in only fractions of a second, the teleporter deletes Dave by disassembling his particles.
Dave's experience goes black. That's it. Dave doesn't even know he's dead because he has stopped experiencing. Dave will never experience again. Bob will experience, but he is not Dave.
Epiphany wakes up from her nap. She is still Epiphany. Her consciousness did not stop permanently like Dave's. She was not erased like Dave.
Epiphany still exists. Bob still exists. Dave does not.
The problem here is that Dave stopped experiencing permanently. Unlike Epiphany who can pick up where Epiphany left off after her nap because she is still Epiphany and was never disassembled, Bob cannot pick up where Dave left off because Bob never was Dave. Bob is a copy of Dave. Now that Dave is gone, Dave is gone. Dave stopped experiencing. He is dead.
Ah! So when you say "If I were to be disassembled by a Star Trek transporter, I'd stop experiencing" you mean "I'd [permanently] stop experiencing." I understand you now, thanks.
So, OK.
Suppose Dave decides to go to sleep. He gets into bed, closes his eyes, etc.
The next morning, someone opens their eyes.
How would I go about figuring out whether the person who opens their eyes is Dave or Bob?
No, temporary unconsciousness is not the same thing as permanent unconsciousness; you perceive yourself to return to consciousness. The tricky part is unpacking the "you" in that sentence. Conventionally it unpacks to a conscious entity, but that clearly isn't useful here because you (by any definition) aren't continuously conscious for the duration. It could also unpack to about fifty to a hundred kilos of meat, but whether we're talking about a transporter-clone or an ordinary eight hours of sleep, the meat that wakes up is not exactly the meat that goes unconscious. In any case, I'm having a hard time thinking of ways of binding a particular chunk of meat to a particular consciousness that end up being ontologically privileged without invoking something like a soul, which would strike me as wild speculation at best. So what does it unpack to?
It's actually very tricky to pin down the circumstances which constitute death, i.e. permanent cessation of a conscious process, once you start thinking about things like Star Trek transporters and mind uploading. I don't claim to have a perfect answer, but I strongly suspect that the question needs dissolving rather than answering as such.
Define "dying".
Elements of death:
There are a lot of elements to dying and if technology progresses far enough I think we could have incidents where some but not all of them happen. However, depending on what exactly happens, some of these should still be regarded as being just as bad as death.
Death of experience
Your experience of the world stops permanently.
This is important because you will never experience pleasure again if you stop experiencing permanently.
Death of self
Your personality, memories, etc, your "software pattern" cease to exist.
This is important because other people are attached to them and will be upset if they can't interact.
Death of genes
Your genetic material, your "hardware pattern", is lost. Your genetic line may die out.
This is unacceptable if you feel that it's an important purpose in life to reproduce.
Death of influence
It becomes impossible for you to consciously influence the world.
This is important because of things like the necessity of taking care of children or a goal to make a difference.
Death of body
Your body, or the current copy of your "hardware" becomes unusable.
This is important if your brain isn't somewhere else when it happens but may not be important otherwise.
There may be others. Can you think of more?
It's a good list. Now to define "you" and see if an upload fits into the definition and if so, how much of your list applies.
I am uploaded. A copy of my "self" is made (I believe this is the definition of "you" people are using when they're talking about uploading themselves) and the original is disassembled or dies of natural causes. That's all that was done. I'm assuming no other steps were taken to preserve any other element of me because it was believed that uploading me means I wouldn't die. I'll call the original Epiphany and the copy I'll call Valorie.
Epiphany:
Death of body - Check. Brain was in it? Check.
Death of experience - Check. (See previous note about my brain.)
Death of genes - Check. Pregnancy is impossible while dead. Genes were not copied.
Death of influence - Check. Upload was not incarnated.
Death of self - No. There is a copy.
Valorie:
Death of body - No body. It's just a copy.
Death of experience - Doesn't experience, it isn't being run, it's just a copy.
Death of genes - Doesn't have genes, a copy of my "self" is being stored in some type of memory instead of a body.
Death of influence - Cannot influence anything as a copy, especially if it is not being run.
Death of self - No. It's preserved.
Conclusion:
I am dead.
Of course it's not hard to imagine other scenarios where everything possible is copied and the copy is incarnated, but Epiphany would still stop experiencing, which is unacceptable, so I would still call this "dead".
I'm perfectly willing to accept that if you get uploaded and then nobody ever runs the upload then that's death. But if you're trying to give the idea a fair chance, I'm not sure why you're assuming this.
There's one really important detail here. If you get uploaded, even if the copy is put into a body exactly like yours and your genes are fully preserved and everything goes right, you still stop experiencing as soon as you die.
Is that acceptable to you?
Okay, I was pretty sure that was your real point, so I just wanted to confirm that and separate away everything else.
But to be honest, I don't have a real answer. It's definitely not obvious to me that I will stop experiencing in any real way, but I have a hard time dismissing this as well. One traditional answer is that "you will stop experiencing" is incoherent, and that continuity of experience is an illusion based on being aware of what you were thinking about a split second ago, among other things.
The continuation of experience argument is compelling if you consider my transporter malfunction scenario.
That is one situation that would definitely result in a discontinuation of experience.
Others which I have discussed with Saturn and TheOtherDave (a wonderfully ironic handle for this discussion) have resulted in my considering other possibilities like being re-assembled with the exact same particles in the same or different locations and being transformed over time via neuron replacement or similar.
I decided that being transformed would probably maintain continuity of experience, and being re-assembled out of the same particles in the exact same locations would probably result in continuity of experience (because I can't see that as a second instance), but I am not sure about it (because the same particles in different locations might not qualify as the same instance, which brings into question whether same instance guarantees continuous experience) and I'm having a hard time thinking of a clarifying question or hypothetical scenario to use for working it out. (It's all in the link right there).
What's not incoherent, though, is looking forward to experiencing something in the future, yet knowing you're going to be disassembled by a transporter and a copy of you will experience it instead. That, in no uncertain terms, is death. We can tell ourselves all day that having a continuous experience relies on being able to connect your current thought to your previous thought, but the real question we need to ask is "Will I have any thoughts at all?" The connected-thoughts question is a red herring (it relates only to your second instance, not your first) and a poor clarifying test of whether you, the original, survived.
In coherent terms, what we should avoid is this:
Either way, only a copy of you will experience it, because the non-copy of you is trapped in the present and has no way to experience the future. The copy can be made artificially, using a transporter, or naturally as time passes. Why is there a difference?
So your definition of self stops at the physical body? Presumably mostly your brain? Would a partial brain prosthesis (say, to save someone's life after a head trauma) mimicking the function of the removed part make the recipient less of herself? Does it apply to the spinal cord? How about some of the limbic system? Maybe everything but the neocortex can be replaced without affecting "self"? Where do you put the boundary and why?
No. As I mentioned, "This (referring to Death of Body) is important if your brain isn't somewhere else when it happens but may not be important otherwise."
If you get into a good replacement body before the one you're in dies, you're fine.
If you want to live, a continuation of your experience is required. Not the creation of a new instance of the experience. But the continuation of my (this copy's) experience. That experience is happening in this brain, and if this brain goes away, this instance of the experience goes away, too. If there is a way to transfer this experience into something else (like by transforming it slowly, as Saturn and I got into) then Epiphany1's experience would be continued.
If Epiphany1's experience continues and my "self" is not significantly changed, no. That is not really a new instance. That's more like Epiphany1.2.
Not sure why these are relevant. Ok limbic system is sort of relevant. I'd still be me with a new spinal cord or limbic system, at least according to my understanding of them. Why do you ask? Maybe there's some complexity here I missed?
Hmmm. If my whole brain were replaced all at once, I'd definitely stop experiencing. If it were replaced one thing at a time, I may have a continuation of experience on Epiphany1, and my pattern may be preserved (there would be a transformation of the hardware that the pattern is in, but I expect my "self" to transform anyway, that pattern is not static).
I am not my hardware, but I am not my software either. I think we are both.
If my hardware were transformed over time such that my continuation of experience was not interrupted, then even if I were completely replaced with a different set of particles (or enhanced neurons or something) that as long as my "self pattern" wasn't damaged, I would not die.
I can't think of a way in which I could qualify that as "death". Losing my brain might be a cause of death, but just because something can cause something else doesn't mean it does in every instance. Heat applied to glass causes it to become brittle or melt and change form, destroying it. But we also apply heat to iron to get steel.
I'm trying to think of a metaphor that works for similar transformations... a larva turns into a butterfly. A zygote turns into a baby, and a baby into an adult. No physical parts are lost in those processes that I am aware of. I do vaguely remember something about a lot of neural connections being lost in early childhood... but I don't remember enough about that to go anywhere with it. The chemicals in my brain are probably replaced quite frequently, if the requirements for ingesting things like tryptophan are any indicator. Things like sugar, water, and nutrients are being taken in, and byproducts are being removed. But I don't know how much of the stuff in my skull is temporary. Hmm...
I want to challenge my theory in some way, but this is turning out to be difficult.
Maybe I will find something that invalidates this line of reasoning later.
You got anything?
So the "continuity of experience" is what you find essential for not-death? Presumably you would make exceptions for loss of consciousness and coma? Dreamless sleep? Anesthesia? Is it the loss of conscious experience that matters or what? Would a surgery (which requires putting you under) replacing some amount of your brain with prosthetics qualify as life-preserving? How much at once? Would "all of it" be too much?
Does the prosthetic part have to reside inside your brain, or can it be a machine (say, like a dialysis machine) that is wirelessly and seamlessly connected to the rest of your brain?
If it helps, Epiphany has implied elsewhere, I think, that when they talk about continuity of experience they don't mean to exclude experience interrupted by sleep, coma, and other periods of unconsciousness, as long as there's experience on the other end (and as long as the person doing that experiencing is the same person, rather than merely an identical person).
Right, it's her definition of "same" vs "identical" that I am trying to tease out. Well, the boundary between the two.
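For what it's worth, the "same" vs "identical" distinction being teased out here has an exact counterpart in programming: object identity versus structural equality. This is only an analogy, not an argument, but a minimal Python sketch (all names hypothetical) makes the boundary concrete:

```python
# Two objects can be "identical" (equal in content) without being
# the "same" (the same instance).
original = {"memories": ["1985", "2012"], "personality": "curious"}
copy = dict(original)  # a perfect structural copy

print(copy == original)  # True: "identical" - the same pattern
print(copy is original)  # False: not the "same" - a distinct instance

# Deleting the original leaves the copy's contents intact,
# but the instance that was bound to 'original' is gone.
del original
print(copy["memories"])  # the pattern survives in the copy
```

The pattern-identity view cares only about the `==` relation; the continuity-of-experience view insists that `is` is what matters, and that no fact about `==` can restore an instance once it is destroyed.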
It seems like you both die and live. It also seems like there come to be two different versions of you.
If the original is deleted immediately, I don't think you die.
I think there's no real mystery about pattern continuation. People just get confused when the word "identity" comes up. If you really worry about these things, think about normal cases, like you now versus you tomorrow, and look for a flaw in the argument.