I was recently arguing in /r/transhumanism on reddit about the viability of uploading/forking consciousness, and I realized I didn't have any method of assessing where someone's beliefs actually lie - that is, where I might need to move them from if I wanted to convince them of my position.
So I made an intuition ladder. Please correct me if I made any mistakes (that aren't by design), and let me know if you think there's anything past the final level.
Some instructions on how to use this: Read the first level. If you notice something definitely wrong with it, move to the next level. Repeat until you come to a level where your intuition about the entire level is either "This is true" or "I'm not sure." That is your level.
1. Clones and copies (a copy being the result of a medical procedure that physically reproduces you exactly, including internal brain state) are the same thing. Every intuition I have about a clone, or an identical twin, applies one-to-one to copies as well, and vice versa. Just as identical twins are completely different people on every level except the genetic one, so are copies.
2. Clones and copies aren't the same thing, since a copy shared a brain and memories with me in the past. But for one of us those memories are false: that one is just a copy, while my consciousness would remain with the privileged original.
3. Copies had a brain and memories in common, which makes them indistinguishable from each other in principle, so they believe they're me, and they're not wrong in any meaningful sense - but I don't anticipate waking up from any copying procedure in any body but the one I started in. As such, I would never participate in a procedure that claims to "teleport" me by making a copy at a new location and killing the source copy, because I would die.
4. Copies are indistinguishable from each other in principle, even from the inside, and thus I actually become both, and anticipate waking up as either. But once I am one or the other, my copy doesn't share an identity with me. Furthermore, if a copy is destroyed before I wake up from the procedure, I might die, or I might wake up as the copy that is still alive. As such, the fork-and-die teleport is a gamble with my life, and I would only attempt it if I were for some reason comfortable with the chance that I will die.
5. If a copy is destroyed during the procedure, I will wake up as the other one with near certainty, but this is a discrete consequence of how soon the destruction happens: if one copy were to die shortly afterward instead, I wouldn't be any less likely to wake up as that one. I am therefore willing to fork-and-die teleport as long as the procedure is flawless. Furthermore, if I were instead backed up and copied from the backup at a later date, I would certainly wake up immediately after the procedure, and would not anticipate waking up subjectively-immediately as the backup copy in the future.
6. I anticipate waking up as a copy that will die soon after the procedure - or that for some other reason has a lower amplitude according to the Born rule - with proportionally lower likelihood, as a continuous function of that weight (a rough formalization follows the ladder). It is also entirely irrelevant to my anticipation when the copy is instantiated, as long as the copy has the mind state I had when the procedure was done. However, consciousness can only transfer to copies made of me: I can never wake up as an identical mind state somewhere in the universe if it wasn't a result of copying, even in principle, if such a mind state were to exist.
7. Continuity of consciousness is entirely an artifact of mind state, including memory, and need not strictly require adjacency in spacetime at all. If, by some miraculous coincidence, a person in a galaxy far far away exists at some time t' who is exactly identical to me at some time t in my life - identical in the way a copy made of me at t would be - then at the moment t I anticipate my consciousness transferring to that faraway not-copy with some probability. The only reason this doesn't happen is the sheer unlikelihood of an exact mind state being duplicated, memories and all, by happenstance, anywhere in spacetime, even given the age of the universe from beginning to end. However, my consciousness can only be implemented on a human brain, or something that precisely mimics its internal structure.
8. Copies of me need not be, or even resemble, a human being. I am just an algorithm, and the hardware I am implemented on is irrelevant: whether it runs on a microchip or a human brain, any implementation of me is me. However, simulations aren't truly real, so an implementation of me in a simulated world, no matter how advanced, isn't actually me, or conscious to the extent I am in the reality I know.
9. Implementations of me can exist within simulations that are sufficiently advanced to implement me fully. If a superintelligence who is able to perfectly model human minds is using that ability to consider what I would do, their model of me is me. Indeed, the only way to model me perfectly is to implement me.
10. In progress, see Dacyn's comment below.
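To pin down the Born-rule weighting mentioned in level 6, here is a rough sketch of how I intend it (my own illustration with made-up symbols, not a rigorous claim): if the successors who wake up with my mind state have amplitudes a_1, ..., a_n, then I anticipate waking up as successor i with probability |a_i|^2 / (|a_1|^2 + ... + |a_n|^2). A successor that dies soon after the procedure, or that otherwise carries less measure, simply contributes a smaller but nonzero weight - a continuous function, not a discrete cutoff.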
Okay, this is entirely fair, and I see your point and agree. I counter with two questions: What numerical strength would you give your belief that reductionism is true? And are you willing to extend that number, by the principles of conditional probability, to your beliefs at higher levels of the ladder that condition on it?
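(To make the conditional-probability step concrete, a toy calculation with made-up numbers: by the law of total probability, P(rung) >= P(rung | R) * P(R). So if your credence in reductionism is P(R) = 0.95, and you accept that a given rung of the ladder follows from reductionism, i.e. P(rung | R) is close to 1, then your credence in that rung should also be close to 0.95 or higher.)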
If your answers to those questions are "well above 50%" and "yes," why is it so difficult to answer the question:
?
It seems to me that you're separating (deductive and inductive) reasoning from empirical observation, which I agree is a reasonable separation. But there are different strengths of reasoning. Observe:
A: Is your husband at home right now?
B: He has to be; he left work over two hours ago, and his commute’s only 30 minutes long.
vs.
A: Is your husband at home right now?
B: He has to be; I put him in a straitjacket, in a locked room, submerged the house completely in a crater of concrete, watched it harden without him escaping, and left satisfied, two hours ago.
Neither of these is an "is", i.e. a direct, contemporaneous, empirical observation. They are both "has to be"s, i.e. chains of induction. But one assumes the best case at every opportunity, and the other at least attempts to eliminate every case that could result in the negation.
I submit that my "has to be" is of the latter type, but even more airtight.
I concede that this is all hypothesis, but it is of the same sort as "the Higgs boson exists, or else we're wrong about a lot of things"... before we found it.