Scott Aaronson touched on this issue in his speculative writeup The Ghost in the Quantum Turing Machine:
Suppose it were possible to “upload” a human brain to a computer, and thereafter predict the brain with unlimited accuracy. Who cares? Why should anyone even worry that that would create a problem for free will or personal identity?
[...]
If any of these technologies—brain-uploading, teleportation, the Newcomb predictor, etc.—were actually realized, then all sorts of “woolly metaphysical questions” about personal identity and free will would start to have practical consequences. Should you fax yourself to Mars or not? Sitting in the hospital room, should you bet that the coin landed heads or tails? Should you expect to “wake up” as one of your backup copies, or as a simulation being run by the Newcomb Predictor? These questions all seem “empirical,” yet one can’t answer them without taking an implicit stance on questions that many people would prefer to regard as outside the scope of science.
[...]
I’m against any irreversible destruction of knowledge, thoughts, perspectives, adaptations, or ideas, except possibly by their owner. Such destruction is worse the more valuable the thing destroyed, the longer it took to create, and the harder it is to replace. From this basic revulsion to irreplaceable loss, hatred of murder, genocide, the hunting of endangered species to extinction, and even (say) the burning of the Library of Alexandria can all be derived as consequences.
Now, what about the case of “deleting” an emulated human brain from a computer memory? The same revulsion applies in full force—if the copy deleted is the last copy in existence. If, however, there are other extant copies, then the deleted copy can always be “restored from backup,” so deleting it seems at worst like property damage. For biological brains, by contrast, whether such backup copies can be physically created is of course exactly what’s at issue, and the freebit picture conjectures a negative answer.
Although Eliezer has dealt with personal identity questions (in terms of ruling out the body theory), he has not actually, as far as I know, "solved" the problem of Personal Identity as it is understood in philosophy. Nor, as far as I know, has any other thinker broadly in the same school of thought (Robin Hanson, Yvain, etc.).
Why do I think it worth solving? One- LessWrong has a tradition of trying to solve all of philosophy by thinking better than philosophers do. Even when I don't agree with it, the result is often enlightening. Two- What counts as the 'same person' could easily have significant implications for a large number of ethical dilemmas, and thus for LessWrongian ethics.
Three- most importantly of all, the correct theory has practical implications for cryonics. I don't know enough to assert any theory as actually true, but if, say, Identity as Continuity of Form rather than of Matter were the true theory, it would mean that preserving only the mental data would not be enough. What kind of preservation is necessary also varies somewhat: the difference in requirements under a Continuity of Consciousness versus a Continuity of Psyche theory, for example, should be obvious.
I'm curious what people here think. What is the correct answer? No-self theory? Psyche theory? Derek Parfit's theory in some form? Or, if there is a correct way to dissolve the question, what is it?