Although Eliezer has dealt with questions of personal identity (in terms of ruling out the body theory), he has not, as far as I know, "solved" the problem of personal identity as it is understood in philosophy. Nor, to my knowledge, has any thinker broadly in the same school of thought (Robin Hanson, Yvain, etc.).
Why do I think it worth solving? One: LessWrong has a tradition of trying to solve all of philosophy by thinking better than philosophers do. Even when I don't agree with the result, it is often enlightening. Two: what counts as the 'same person' could easily have significant implications for a large number of ethical dilemmas, and thus for LessWrongian ethics.
Three, most importantly of all: the correct theory has practical implications for cryonics. I don't know enough to assert any theory as actually true, but if, say, Identity as Continuity of Form rather than of Matter were the true theory, it would mean that preserving only the mental data would not be enough. What kind of preservation is necessary also varies somewhat: the difference in requirements between a Continuity of Consciousness theory and a Continuity of Psyche theory, for example, should be obvious.
I'm curious what people here think. What is the correct answer? No-self theory? Psyche theory? Derek Parfit's theory in some manner? Or if there is a correct way to dissolve the question, what is that correct way?
The utility functions will, almost by definition, differ. I intentionally did not address that, as it is an independent question and something that should be looked at in specific cases.
In the case where both utility functions point at the same answer, there is no conflict. In the case where they point at different answers, the two copies should exchange data until their utility functions agree on the topic at hand (rational agents with the same information available to them will make the same decisions).
If the two copies cannot get their utility functions to agree, you'd have to decide on a case-by-case basis. If they cannot agree on which copy should self-terminate, then you have a problem. If they cannot agree on what they ate for breakfast two weeks ago, then you can probably ignore the conflict instead of trying to resolve it, or resolve it with a quarter flip.
That is not even close to true. Rational agents with the same information will make the same predictions, but their decisions will also depend on their utility functions. Unlike probabilities, utility functions do not get updated when the agent gets new evidence.
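The distinction can be made concrete with a toy expected-utility calculation. In this sketch (all numbers and names are hypothetical, purely for illustration), two copies share identical probability estimates over outcomes but weigh losses differently, so maximizing expected utility leads them to different decisions:

```python
# Shared beliefs: both copies assign the same probabilities to each
# outcome of each action. Only the utility functions differ.
beliefs = {
    "risky": {"win": 0.5, "loss": 0.5},
    "safe":  {"small_win": 1.0},
}

def expected_utility(action, utility):
    """Sum of probability-weighted utilities over an action's outcomes."""
    return sum(p * utility[outcome] for outcome, p in beliefs[action].items())

def best_action(utility):
    """Pick the action with the highest expected utility."""
    return max(beliefs, key=lambda a: expected_utility(a, utility))

# Copy 1 is indifferent to losses; Copy 2 weighs them heavily.
u1 = {"win": 10, "loss": 0,   "small_win": 4}
u2 = {"win": 10, "loss": -10, "small_win": 4}

print(best_action(u1))  # risky (EU 5.0 vs 4.0)
print(best_action(u2))  # safe  (EU 0.0 vs 4.0)
```

Exchanging evidence would bring the agents' probabilities into agreement, but it leaves `u1` and `u2` untouched, so the disagreement over which action to take can persist indefinitely.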