I was recently arguing in /r/transhumanism on reddit about the viability of uploading/forking consciousness, and I realized I had no way of assessing where someone's beliefs actually lay, and therefore where I might need to move them from if I wanted to bring them around to my own view.
So I made an intuition ladder. Please correct me if I made any mistakes (that aren't by design), and let me know if you think there's anything past the final level.
Some instructions on how to use this: Read the first level. If you notice something definitely wrong with it, move to the next level. Repeat until you come to a level where your intuition about the entire level is either "This is true" or "I'm not sure." That is your level.
1. Clones and copies (where a copy is the result of a medical procedure that physically reproduces you exactly, including internal brain state) are the same thing. Every intuition I have about a clone, or an identical twin, applies one-to-one to copies as well, and vice versa. Because identical twins are completely different people on every level except genetically, copies are exactly the same way.
2. Clones and copies aren't the same thing: a copy had a brain and memories in common with me in the past, but for one of us those memories are false, and that one is just a copy, while my consciousness would remain with the privileged original.
3. Copies shared a brain and memories with me in the past, which makes them indistinguishable from each other in principle, so they believe they're me, and they're not wrong in any meaningful sense; but I don't anticipate waking up from any copying procedure in any body but the one I started in. As such, I would never participate in a procedure that claims to "teleport" me by making a copy at a new location and killing the source copy, because I would die.
4. Copies are indistinguishable from each other in principle, even from the inside, and thus I actually become both, and anticipate waking up as either. But once I am one or the other, my copy doesn't share an identity with me. Furthermore, if a copy is destroyed before I wake up from the procedure, I might die, or I might wake up as the copy that is still alive. As such, the fork-and-die teleport is a gamble with my life, and I would only attempt it if I were for some reason comfortable with the chance that I would die.
5. If a copy is destroyed during the procedure, I will wake up as the other one with near certainty, but this is a discrete consequence of how soon the destruction happens: if one copy were to die shortly after the procedure instead, I wouldn't be any less likely to wake up as that one. I am therefore willing to fork-and-die teleport as long as the procedure is flawless. Furthermore, if I were instead backed up and copied from the backup at a later date, I would certainly wake up immediately after the procedure, and would not anticipate waking up subjectively-immediately as the backup copy in the future.
6. My anticipation of waking up as a given copy is a continuous function: I anticipate with less likelihood waking up as a copy that will die soon after the procedure, or that for some other reason has a lower amplitude according to the Born rule (see the sketch just after this ladder). It is also entirely irrelevant to my anticipation of what I experience when the copy is instantiated, as long as the copy has the mind state I did when the procedure was done. However, consciousness can only transfer to copies made of me. Even in principle, I can never wake up as an identical mind state somewhere else in the universe, if such a thing were to exist, unless it was the result of copying.
7. Continuity of consciousness is entirely an artifact of mind state, including memory, and need not strictly require adjacency in spacetime at all. If, by some miraculous coincidence, a person exists in a galaxy far, far away at some time t' who is exactly identical to me at some time t in my life, in the way a copy made of me at t would be, then at the moment t I anticipate my consciousness transferring to that faraway not-copy with some probability. The only reason this doesn't happen is the sheer unlikelihood of an exact mind state, memories and all, being duplicated by happenstance anywhere in spacetime, even given the age of the universe from beginning to end. However, my consciousness can only be implemented on a human brain, or something that precisely mimics its internal structure.
8. Copies of me need not be, or even resemble, a human being. I am just an algorithm, and the hardware I am implemented on is irrelevant: whether it's a microchip or a human brain, any implementation of me is me. However, simulations aren't truly real, so an implementation of me in a simulated world, no matter how advanced, isn't actually me, or conscious to the extent I am in the reality I know.
9. Implementations of me can exist within simulations that are sufficiently advanced to implement me fully. If a superintelligence who is able to perfectly model human minds is using that ability to consider what I would do, their model of me is me. Indeed, the only way to model me perfectly is to implement me.
10. In progress, see Dacyn's comment below.
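Since level 6 appeals to the Born rule as a continuous weighting, here is a sketch of one way to write the amplitude part of that anticipation explicitly. The notation and setup are mine, not the ladder's: suppose a procedure (or ordinary decoherence) leaves n successors of my current mind state, with successor i carried by a branch of amplitude c_i. The Born rule assigns branch i a weight proportional to |c_i|^2, and level 6's claim is that I should anticipate waking up as successor i with probability

$$p_i = \frac{|c_i|^2}{\sum_{j=1}^{n} |c_j|^2},$$

a continuous function of the amplitudes, rather than the all-or-nothing anticipation of levels 3 through 5.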
Sure. Basically, this is the problem:
Now, I think reductionism is true. But suppose we encounter something we can’t reduce. (Of course your instinct—and mine, in a symmetric circumstance—would be to jump in with a correction: “can’t yet reduce”! I sympathize entirely with this—but in this case, that formulation would beg the question.) We should of course condition on our belief that reductionism is true, and conclude that we’ll be able to find a reduction. But, conversely, we should also condition on the fact that we haven’t found a reduction yet, and reduce our belief in reductionism! (And, as I mentioned in the linked comment thread, this depends on how much effort we’ve spent so far on looking for a reduction, etc.)
What this means is that we can’t simply say “consciousness is completely emergent-from-the-physical”. What we have to say is something like:
“We don’t currently know whether consciousness is completely emergent from the physical. Conditional on reductionism being true, consciousness has to be completely emergent from the physical. On the other hand, if consciousness turns out not to be completely emergent from the physical, then—clearly—reductionism is not true.”
In other words, whether reductionism is true is exactly at issue here! Again: I do think that it is; I would be very, very surprised if it were otherwise. But to assume it is to beg the question.
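To make the two conditioning steps concrete, here is a toy Bayes calculation; the numbers are made up purely for illustration, and nothing in the argument fixes them. Let R be “reductionism is true” and E be “we have not yet found a reduction of consciousness”. With a prior P(R) = 0.95, and supposing P(E | R) = 0.3 while P(E | ¬R) = 0.9, Bayes’ rule gives

$$P(R \mid E) = \frac{P(E \mid R)\,P(R)}{P(E \mid R)\,P(R) + P(E \mid \lnot R)\,P(\lnot R)} = \frac{0.3 \times 0.95}{0.3 \times 0.95 + 0.9 \times 0.05} \approx 0.86.$$

So conditioning on the failure to reduce lowers our credence in reductionism without coming close to overturning it, and the size of the drop depends on P(E | R), i.e. on how much reduction-finding effort we think has already gone unrewarded.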
Tangentially:
To the contrary: the implications of the phrase “has to be”, in claims of the form “[thing] has to be true”, are very different from the implications of the word “is” (in the corresponding claims). Any reasonable definition of “has to be” must match the usage, and the usage is fairly clear: you say that something “has to be true” when you don’t have any direct, clear evidence that it’s true, but have only concluded it from general principles.
Consider:
A: Is your husband at home right now?
B: He has to be; he left work over two hours ago, and his commute’s only 30 minutes long.
Here B doesn’t really know where her husband is. He could be stuck in traffic, he could’ve taken a detour to the bar for a few drinks with his buddies to celebrate that big sale, he could’ve been abducted by aliens—who knows? Imagine, after all, the alternative formulation (and let’s say that A is actually a police officer—lying to him is a crime):
A: Is your husband at home right now?
B: Yes, he is.
A: You know that he’s at home?
B: Well… no. But he has to be at home.
A: But you didn’t go home and check, did you? You didn’t call your house and talk to him?
B: No, I didn’t.
And so on. (I imagine you could easily come up with innumerable other examples.)