Although Eliezer has dealt with questions of personal identity (in terms of ruling out the body theory), he has not actually, as far as I know, "solved" the problem of Personal Identity as it is understood in philosophy. Nor, as far as I know, has any thinker broadly in the same school of thought (Robin Hanson, Yvain, etc.).
Why do I think it worth solving? One: Lesswrong has a tradition of trying to solve all of philosophy by thinking better than philosophers do. Even when I don't agree with it, the result is often enlightening. Two: what counts as the 'same person' could easily have significant implications for a large number of ethical dilemmas, and thus for Lesswrongian ethics.
Three, and most importantly of all: the correct theory has practical implications for cryonics. I don't know enough to assert any theory as actually true, but if, say, Identity as Continuity of Form rather than of Matter were the true theory, it would mean that preserving only the mental data would not be enough. What kind of preservation is necessary also varies by theory: the difference in requirements between a Continuity of Consciousness theory and a Continuity of Psyche theory, for example, should be obvious.
I'm curious what people here think. What is the correct answer? No-self theory? Psyche theory? Derek Parfit's theory in some manner? Or if there is a correct way to dissolve the question, what is that correct way?
I have come to regard the core of personal identity as the portion of our utility function that dictates how desirable changes to the information and mental processes in our brains are (or, in a posthuman future, how desirable changes to whatever other substrate they are running on are). I think this conception can capture pretty much all of our intuitions about identity without contradicting reductionist accounts of how our minds work. You just need to think of yourself as an optimization process that changes in certain ways, and rank the desirability of those changes.
When asking whether someone is the "same person" as you, you can simply replace the question with "Is this optimization process still trying to optimize the same things as it was before?", "Would my utility function consider changing into this optimization process to be a positive thing?", and "Would my utility function consider changing into this process to be significantly less bad than being vaporized and replaced with a new optimization process created from scratch?"
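Here is a minimal sketch of that question-substitution in Python. Everything in it (the State snapshot, the toy desirability_of_change utility, the zero threshold) is a hypothetical placeholder for a real agent's utility function over changes, not anything specified in the post.

```python
# Hypothetical illustration: the three replacement questions as checks against
# a toy utility over changes. Not a real measure of identity or value.

from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    """A snapshot of an optimization process: what it remembers and what it values."""
    memories: frozenset
    values: frozenset

def same_optimization_target(before: State, after: State) -> bool:
    """Is this optimization process still trying to optimize the same things?"""
    return before.values == after.values

def desirability_of_change(before: State, after: State) -> float:
    """Toy utility over changes: +1.0 for perfect continuity, -1.0 for total loss."""
    kept_values = len(before.values & after.values) / max(len(before.values), 1)
    kept_memories = len(before.memories & after.memories) / max(len(before.memories), 1)
    return kept_values + kept_memories - 1.0

def counts_as_same_person(before: State, after: State) -> bool:
    """The three replacement questions, bundled into one check."""
    replacement_from_scratch = State(memories=frozenset(), values=frozenset())
    return (
        same_optimization_target(before, after)
        and desirability_of_change(before, after) > 0.0
        and desirability_of_change(before, after)
            > desirability_of_change(before, replacement_from_scratch)
    )
```

The point of the sketch is only that "same person" becomes a graded judgment made by the utility function, rather than a metaphysical fact to be discovered.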
For the issue of subjective experience, when asking the question "What shall happen to me in the future?" you can simply taboo words like "I" and "me" and replace them with "this optimization process" and "optimization-process-that-it-would-be-desirable-to-turn-into." Then rephrase the question as, "What will this optimization process turn into in the future, and what events will impact the optimization-process-that-it-would-be-desirable-to-turn-into?"
Similarly, the question "If I do this, what do I expect to experience as a result?" can translate into "If this process affects the world in some fashion, how will this affect what the process will change into in the future?" We do not need any ontologically fundamental sense of self to have subjective experience, as Kaj_Sotala seems to assert: "What will happen to the optimization-process-that-it-would-be-desirable-to-turn-into that this process will turn into?" yields the same result as "What will happen to me?"
Basing one's identity on the desirability of changes, rather than on the absence of change, allows us to get around the foolish statement that "you change all the time, so you're not the same person you were a second ago." What matters isn't the fact that we change; what matters is how desirable those changes are. Here are some examples (with a rough sketch of the ranking after the list):
Acquiring memories of positive experiences by means of those experiences happening to the optimization process: GOOD
Making small changes to the process' personality so that the process is better at achieving its values: VERY GOOD
Acquiring new knowledge: GOOD
Losing positive memories: BAD
Being completely disintegrated: VERY BAD
Having the process' memory, personality, and values radically changed: VERY BAD
Changing the process' personality so that it attempts to optimize for the opposite of its current values: EXTREMELY BAD
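As a rough sketch, the list above could be encoded as a hypothetical scoring of change types. The labels and numbers are illustrative only; the point is that identity tracks how desirable a change is, not whether any change happened at all.

```python
# Hypothetical desirability scores mirroring the list above; not a real metric.
CHANGE_DESIRABILITY = {
    "acquire_positive_memories": +2,                      # GOOD
    "small_personality_tweak_toward_own_values": +3,      # VERY GOOD
    "acquire_new_knowledge": +2,                          # GOOD
    "lose_positive_memories": -2,                         # BAD
    "complete_disintegration": -3,                        # VERY BAD
    "radical_rewrite_of_memory_personality_values": -3,   # VERY BAD
    "reverse_current_values": -4,                         # EXTREMELY BAD
}

def evaluate_changes(changes: list[str]) -> int:
    """Rank a sequence of changes by summing their desirability scores."""
    return sum(CHANGE_DESIRABILITY.get(change, 0) for change in changes)

# Ordinary living changes you constantly, but the changes are desirable ones:
print(evaluate_changes(["acquire_positive_memories", "acquire_new_knowledge"]))  # 4
print(evaluate_changes(["reverse_current_values"]))                              # -4
```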
Some people seem to believe that having a reductionist conception of personal identity somehow makes death seem less bad. I disagree with this completely. Death seems just as bad to me as it did before. The optimization process I refer to as "me" has certain ways that it wants to change in the future, and having all of its memories, personality, and values completely erased seems like a very bad thing.
And this is not a purely selfish preference. I would prefer that no other processes be changed in a fashion they do not desire either. Making sure that existing optimization processes change in ways that they find desirable is much more important to me than creating new optimization processes.
Or, to put it in less clunky terminology: Death is Bad, both for me and for other people. It is bad even if new people are created to replace the dead ones. Knowing that I am made of atoms instead of some ontologically fundamental "self" does not change that at all.
I suspect that the main reason people think a reductionist concept of personal identity makes personal identity less morally important is that describing things in a reductionist fashion can shut off our ability to make proper judgements about them. For instance, saying "Bob stimulated Alice's nociceptors by repeatedly parting her epidermis with a thin metallic plane" doesn't sound as bad as saying "Bob tortured Alice by cutting her with a knife." But surely no one would argue that torture is less bad because of that.