Larks comments on The Lifespan Dilemma - Less Wrong
I'd argue that it's reasonable to place $0 of utility on my existence in other Everett branches; while I know theoretically that they exist, theoretically there is also something beyond the light barrier at the edge of the visible universe. Its existence is irrelevant, however, since I will never be able to interact with it.
Perhaps a different way of phrasing this: say I had a duplicating machine. I step into booth B, and then an exact duplicate is created in booths A and C, while the booth B body is vapourized. For reasons of technobabble, the booth can only recreate people, not gold bullion or tasty filet mignons. I then program the machine to 'dissolve' the booth C version into three vats of the base chemicals the human body is made up of, through an instantaneous and harmless process. I then sell these chemicals for $50 on eBay. (Anybody with enough geek-points will know that the Star Trek teleporters work on this principle.)
Keep in mind that the universe wouldn't have differentiated into two distinct universes, one where I'm alive and one where I'm dead, if I hadn't performed the experiment (technically it would still have differentiated, but the two results would be anthropically identical). Does my existence in another Everett branch have moral significance? Suffering is one thing, but existence? I'm not sure that it does.
Evidence to support your idea: whenever I make a choice, in another branch 'I' made the other decision, so if I cared equally about all future versions of myself, then I'd have no reason to choose one option over another.
If correct, this shows I don't care equally about currently parallel worlds, but not that I don't care equally about future sub-branches from this one.
Whenever I make a choice, there are branches that made another choice. But not all branches are equal. The closer my decision algorithm is to deterministic (on a macroscopic scale), the more asymmetric the distribution of measure among decision outcomes. (And the cases where my decision isn't close to deterministic are precisely the ones where I could just as easily have chosen the other way -- where I don't have any reason to pick one choice.)
Thus the thought experiment doesn't show that I don't care about all my branches, current and future, simply proportional to their measure.
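The measure-weighting point above can be sketched numerically. This is only an illustration, with invented measures and utilities: if my decision algorithm is nearly deterministic, almost all of the measure follows the choice I actually make, so caring about branches in proportion to their measure still gives me a reason to choose; only in genuine coin-flip cases does the weighted sum wash out.

```python
# Hypothetical sketch: valuing branches in proportion to their measure.
# The measures and utilities below are made-up numbers for illustration.

def branch_weighted_utility(branches):
    """Sum utility over branches, each weighted by its quantum measure."""
    return sum(measure * utility for measure, utility in branches)

# Near-deterministic decision: almost all measure follows the chosen action.
deterministic = [(0.999, 10.0), (0.001, -10.0)]

# Genuinely indifferent decision: measure split evenly between outcomes.
indifferent = [(0.5, 10.0), (0.5, -10.0)]

print(branch_weighted_utility(deterministic))  # 9.98 -- choosing still matters
print(branch_weighted_utility(indifferent))    # 0.0  -- no reason to prefer either
```

On these (invented) numbers, the asymmetric case strongly favours the high-measure outcome, while the symmetric case is exactly the one where there was no reason to pick either option in the first place.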