WhySpace comments on Identity map - Less Wrong Discussion
That's fine, but we should phrase the question differently. Instead of asking "Should I add my expectation that I will find myself in a strange place?" we should ask "Will that future person, looking back, perceive that I (now) am its former self?"
As to the unrelated question of whether a poor or inexact model is sufficient to indicate preservation of identity, there is much to consider about which aspects matter (physical or psychological) and how much precision the model needs before we can deem it a preservation of identity -- but none of that has to do with the copy problem. The copy problem is orthogonal to the question of sufficient model precision.
The copy problem arises in the first place because we are posing the question literally backwards (*). Posed pastward instead of futureward, the copy problem simply vanishes; it becomes a misnomer, in fact.
The precision problem then remains an entirely valid area of unrelated inquiry, but should not be conflated with the copy problem. One has absolutely nothing to do with the other.
(*): As I said, I'm not sure it's even proper to contemplate the status of nonexistent things, such as future things. Does "having a status" require already existing? What we can do is consider what their status will be once they are in the present and can be compared to their own past (our present); but we can't ask whether their "current" future status has some value relative to our present, because they have no status to begin with.
The copy problem is also irrelevant for utilitarians, since all persons should be weighted equally under most utilitarian moral theories.
It's only an issue for self-interested actors. So even if spurs A and B both agree that A is C and that B is C, that still doesn't help: are the converse statements true? A selfish C will base its decisions on whether C is A.
I tend to view this as another nail in the coffin of ethical egoism. I lean toward just putting a certain value on each point in mind-space, with high value for human-like minds, and smaller or zero value on possible agents which don't pique our moral impulses.