I don't acknowledge an upload as "me" in any meaningful sense of the term; if I copied my brain to a computer and then my body was destroyed, I still think of that as death and would try to avoid it.
A thought struck me a few minutes ago that seems like it might get around that, though. Suppose that rather than copying my brain, I adjoined it to some external computer in a kind of reverse-Ebborian act, electrically connecting my synapses to a big block of computrons to which I can consciously perform I/O. Over the course of life and improving tech, that block expands until, as a percentage, most of my thought processes are going on in the machine-part of me. Eventually my meat brain dies -- but the silicon part of me lives on. I think I would probably still consider that "me" in a meaningful sense. Intuitively, I feel like I should treat it as the equivalent of minor brain damage.
Obviously, one could shorten the period of dual life arbitrarily, and I can't point to a specific line where expanded-then-contracted consciousness turns into copying-then-death. The line that immediately comes to mind is "whenever I start to feel like the technological expansion of my mind is no longer an external module but the main component," but that feels like unjustified punting.
I'm curious what other people think, particularly those who share my position on destructive uploads.
---
Edited to add:
Compare a destructive upload to a non-destructive one. Copy my mind to a machine non-destructively, and I still identify with meat-me. You could let machine-me run for a day, or a week, or a year, and only then kill off meat-me. I don't like that option and would be confused by someone who did. Destructive uploads feel like the limit of that case, where the time interval approaches zero and I am killed and copied in the same moment. As with the case outlined above, I don't see a line being crossed where it stops being death and starts being transition.
An expand-then-contract with an interval of zero is effectively a destructive upload; so is a copy-then-kill with an interval of zero. The two appear to be mirror images, with a discontinuity at the limit. Approach destructive uploads from the copy-then-kill side, and it feels clearly like death. Approach them from the expand-then-contract side, and it feels like continuous identity. Yet at the limit they turn into the same operation.
Mere stipulation secures very little, though. Consider the following scenario: I start wearing a medallion around my neck and stipulate that, so long as this medallion survives intact, I am to be considered alive, regardless of what befalls me. This is essentially equivalent to what you'd be doing in stipulating survival in the uploading scenario. You'd secure 'survival', perhaps, but the would-be uploader has a lot more work to do. You also need to stipulate that when the upload says "On my 6th birthday..." he's referring to your 6th birthday, and so on. I think this project will prove much more difficult. In general, these sorts of uploading scenarios rely on the notion of something being "transferred" from the person to the upload, and it's this that secures identity and hence reference. But if you're willing to concede that nothing is transferred - that identity isn't transferrable - then you've got a lot of work to do in order to make the uploading scenario consistent. You've got to introduce revised versions of the concepts of identity, memory, self-reference, etc. Doing so consistently is likely a formidable task.
I should have said this about the artificial brain transplant scenario too. While I think the scenario makes sense, it doesn't secure all the traditional science fiction consequences. So having an artificial brain doesn't automatically imply you can be "resleeved" if your body is destroyed, etc. Such scenarios tend to involve transferrable identity, which I'm denying. You can't migrate to a server and live a purely software existence; you're not now "in" the software.

You can see the problems of reference in this scenario. For example, say you had a robot on Mars with an artificial brain of the same specifications as your own. You want to visit Mars, so you figure you'll just transfer the software running on your artificial brain to the robot and wake up on Mars. But again, this assumes identity is transferrable in some sense, which it is not. You might think that this doesn't matter: you don't care whether it's you on Mars; you'll just send your software and bring it back, and then you'll have the memories of being on Mars. This is where problems of reference come in, because "When I was on Mars..." would be false. You'd have, at best, a set of false memories.

This might not seem like a problem; you'll just compartmentalise the memories, etc. But say the robot fell in love on Mars. Can you truly compartmentalise that? Memories aren't images you have stored away that you can examine dispassionately; they're bound up with who you are, what you do, etc. You would surely gain deeply confused feelings about another person, engage in irrational behaviour, and so on. This would be causing yourself a kind of harm; introducing a kind of mental illness.
Now, say you simply begin stipulating "by 'I' I mean...", etc., until you've consistently rejiggered the whole conceptual scheme to get the kind of outcome the uploader wants. Could you really do this without serious consequences for basic notions of welfare, value, etc.? I find this hard to believe. The fact that the Mars scenario abuts issues of value and welfare suggests that introducing new meanings here would also involve stipulating new meanings for those concepts. This leads to a potential contradiction: it might not be rationally possible to engage in this kind of revisionary task. That is, from your current position, performing such a radical revision would probably count as harmful, damaging to welfare, identity-destroying, etc. What does this say about the status of the revisionary project? Perhaps the revisionist would say, "From my revisionary perspective, nothing I have done is harmful." But to everyone else, he is quite mad. Although I don't have a knockdown argument against it, I wonder whether this sort of revisionary project is possible at all, given the strangeness of having two such unconnected bubbles of rationality.
No, and that is the point. There are serious drawbacks of the usual notions of welfare, at least in the high-tech future we are discussing, and they need serious correcting. Although, as I mentioned earlier, coining new words for the new concepts would probably facilitate comm...