I
When preferences are selfless, anthropic problems are easily solved by a change of perspective. For example, if we do a Sleeping Beauty experiment for charity, all Sleeping Beauty has to do is follow the strategy that, from the charity's perspective, gets them the most money. This turns out to be an easy problem to solve, because the answer doesn't depend on Sleeping Beauty's subjective perception.
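To make "easy" concrete, here is a minimal sketch under an assumed payoff scheme that the post does not spell out: the charity receives $1 for every awakening at which Beauty guesses the coin correctly, with one awakening on heads and two on tails. The best policy falls out of ordinary expected value, with no anthropic reasoning required.

    # Minimal sketch of Sleeping Beauty played for charity.
    # Assumed rules (not from the post): a fair coin is flipped; heads means
    # one awakening, tails means two; the charity gets $1 for each awakening
    # at which Beauty guesses the flip correctly.

    def expected_donation(guess):
        """Expected dollars donated, averaged over the coin flip."""
        if_heads = 1 if guess == "heads" else 0   # 1 awakening on heads
        if_tails = 2 if guess == "tails" else 0   # 2 awakenings on tails
        return 0.5 * if_heads + 0.5 * if_tails

    for guess in ("heads", "tails"):
        print(guess, expected_donation(guess))
    # heads 0.5
    # tails 1.0 -- always guessing tails earns the charity the most,
    # and Beauty's subjective credence never entered the calculation.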
But selfish preferences - like being at a comfortable temperature, eating a candy bar, or going skydiving - are trickier, because they do depend on the agent's subjective experience. This trickiness really shines through when there are actions that can change the number of copies of the agent. For recent posts about these sorts of situations, see Pallas' sim game and Jan_Ryzmkowski's tropical paradise. I'm going to propose a model that makes answering such questions almost as easy as playing for charity.
To quote Jan's problem:
It's a cold cold winter. Radiators are hardly working, but it's not why you're sitting so anxiously in your chair. The real reason is that tomorrow is your assigned upload, and you just can't wait to leave your corporality behind. "Oh, I'm so sick of having a body, especially now. I'm freezing!" you think to yourself, "I wish I were already uploaded and could just pop myself off to a tropical island."
And now it strikes you. It's a weird solution, but it feels so appealing. You make a solemn oath (you'd say one in million chance you'd break it), that soon after upload you will simulate this exact scene a thousand times simultaneously and when the clock strikes 11 AM, you're gonna be transposed to a Hawaiian beach, with a fancy drink in your hand.
It's 10:59 on the clock. What's the probability that you'd be in a tropical paradise in one minute?
Just as computer programs or brains can split, they ought to be able to merge. If we imagine a version of the Ebborian species that computes digitally, so that the brains remain synchronized so long as they go on getting the same sensory inputs, then we ought to be able to put two brains back together along the thickness, after dividing them. In the case of computer programs, we should be able to perform an operation where we compare each two bits in the program, and if they are the same, copy them, and if they are different, delete the whole program. (This seems to establish an equal causal dependency of the final program on the two original programs that went into it. E.g., if you test the causal dependency via counterfactuals, then disturbing any bit of the two originals, results in the final program being completely different (namely deleted).)
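A sketch of that merge operation, assuming the two programs are equal-length bit strings (the names and representation here are only illustrative):

    # Merge two programs bit by bit: if every pair of corresponding bits
    # matches, the merged program is just their common value; if any pair
    # differs, the whole program is deleted (represented here as None).

    def merge(program_a, program_b):
        for bit_a, bit_b in zip(program_a, program_b):
            if bit_a != bit_b:
                return None              # one differing bit deletes everything
        return list(program_a)           # identical programs merge into one

    print(merge([1, 0, 1], [1, 0, 1]))   # [1, 0, 1]
    print(merge([1, 0, 1], [1, 1, 1]))   # None

This also makes the counterfactual test above visible: flipping any single bit of either original changes the result completely, from the shared program to deletion.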
Not if the copy doesn't anticipate dying. Perhaps all the copies go through a brief dim-witted phase of warm happiness (and the original expects this), in which all they can think is "yup, warm and happy, just like I expected", followed by some copies dying and others recovering full intellect and living on. Any of those copies is someone I'd "like to be" in the better-than-nothing sense. Or does your caveat use "like to be" in some stronger sense?
I'm confused - if agents don't value their past self, in what sense do they agree or disagree with what that past self was valuing? In any case, please reverse the order of the Methuselah valuing of time-slices.
Edit: Let me elaborate a story to motivate my some-copies-dying posit. I want to show that I'm not just "gaming the system" in the way your caveat was meant to rule out.
I'm in one spaceship of a fleet of fast unarmed robotic spaceships. As I feared but planned for, an enemy fleet shows up. This spaceship will be destroyed, but I can make copies of myself in anywhere from one to all of the many other ships. Each copy will spend 10 warm-and-fuzzy dim-witted minutes reviving from its construction. The space battle will last 5 minutes. The spaceship farthest from the enemy has about a 10% chance of survival. The next-farthest has a 9-point-something percent chance - and so on. The enemy uses an indeterministic algorithm to chase and target ships, so these survival probabilities are almost independent. If I copy myself to all the ships in the fleet, I have a very high probability of survival. But average expected utility is maximized by copying to just one ship.
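To see the arithmetic behind that last claim, here is a rough sketch under made-up numbers: ship i survives with chance 0.10 × 0.98^i (a stand-in for "10%, then 9-point-something, and so on"), the copies' fates are treated as independent, surviving is worth 1, and dying during the warm-and-fuzzy phase is worth 0.

    # Rough sketch of the spaceship trade-off. All numbers and the utility
    # scale are assumptions, not taken from the comment.

    def survival_chances(n_ships):
        # safest ship ~10%, each closer ship slightly worse
        return [0.10 * 0.98 ** i for i in range(n_ships)]

    def chance_some_copy_survives(chances):
        p_all_die = 1.0
        for p in chances:
            p_all_die *= 1 - p
        return 1 - p_all_die

    def average_expected_utility(chances):
        # average over copies of each copy's expected utility
        # (1 if it survives, 0 if it dies in the dim-witted phase)
        return sum(chances) / len(chances)

    for n in (1, 10, 100):
        chances = survival_chances(n)
        print(n,
              round(chance_some_copy_survives(chances), 2),
              round(average_expected_utility(chances), 3))
    # Copying widely makes "at least one of me survives" nearly certain,
    # but the per-copy average is highest with a single copy on the safest ship.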
I'm tapping out, sorry.