One might then make an argument about the decision question that goes like this: Before I swore this oath, my probability of going to a tropical island was very low. After, it was very high. Since I really like tropical islands, this is a great idea. In a nutshell, I have increased my expected utility by making this oath.
If it is indeed in your power to swear and execute such an oath, then "I will make an oath to simulate this event and make such-and-such changes" is a legitimate event that would impact any probability calculation. But even before you swore the oath, any such calculation already included the probability that you would swear and execute it in the future.
The probability of going to a tropical island given that the oath was made is likely higher than it was before the oath was made, but it would be significantly higher only if the prior probability of the oath being made was very low in the first place.
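A minimal numerical sketch of this point, using made-up probabilities: by the law of total probability, the prior P(island) is a weighted average of P(island | oath) and P(island | no oath), so observing the oath only carries real news when the oath itself was improbable.

```python
# Hypothetical numbers for illustration only.
def prior_island(p_oath, p_island_given_oath, p_island_given_no_oath):
    """Law of total probability: P(island) before observing the oath."""
    return (p_oath * p_island_given_oath
            + (1 - p_oath) * p_island_given_no_oath)

p_island_given_oath = 0.9
p_island_given_no_oath = 0.01

# If the oath was nearly certain anyway, the prior is already near 0.9:
high_prior = prior_island(0.95, p_island_given_oath, p_island_given_no_oath)
# If the oath was unlikely, observing it shifts the probability a lot:
low_prior = prior_island(0.05, p_island_given_oath, p_island_given_no_oath)

print(high_prior)  # 0.8555 - the oath barely changes anything
print(low_prior)   # 0.0545 - the oath carries real news
```

So swearing the oath only "raises" your probability of the island substantially to the extent that you were unlikely to swear it anyway.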
This is identical to the standard problem with causal decision theory, which goes: "If determinism is true, I'm already certain to make my decision, so how can I worry about its causal impacts?"
The answer is that you swear the oath because you calculated what would happen if (by causal surgery) your decision procedure output something else. This calculation gets done regardless of determinism - it's just how this decision procedure goes.
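The calculation described above can be sketched as a toy decision procedure: regardless of whether its output is determined, the procedure evaluates each possible output counterfactually ("causal surgery") and returns the best one. The actions and utilities below are hypothetical placeholders, not anything from the post.

```python
# Toy counterfactual evaluation: for each candidate output of the
# decision procedure, score the world where that output occurs,
# then return the highest-scoring action.
def decide(utility_of_action):
    """Pick the action whose counterfactual outcome scores highest."""
    return max(utility_of_action, key=utility_of_action.get)

# Made-up utilities for illustration.
utilities = {"swear_oath": 10.0, "do_nothing": 0.0}

print(decide(utilities))  # swear_oath
```

The point is that this loop runs the same way whether or not the world is deterministic; the counterfactuals are internal to the procedure, not claims about physics.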
I
When preferences are selfless, anthropic problems are easily solved by a change of perspective. For example, if we do a Sleeping Beauty experiment for charity, all Sleeping Beauty has to do is follow the strategy that, from the charity's perspective, gets them the most money. This turns out to be an easy problem to solve, because the answer doesn't depend on Sleeping Beauty's subjective perception.
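To make the charity version concrete, here is a hedged sketch with assumed stakes: suppose on each awakening Beauty bets $1 for charity at even odds on the coin, where heads means one awakening and tails means two. The betting rules are illustrative assumptions, not from the post, but they show why the problem is easy: the expected donation is computed entirely from the outside, with no reference to Beauty's subjective credence on waking.

```python
# Expected charity winnings per experiment, for a fixed guess made
# at every awakening.  Heads -> 1 awakening, Tails -> 2 awakenings;
# each awakening is a $1 bet at even odds (assumed stakes).
def expected_donation(guess):
    ev = 0.0
    for coin, awakenings in (("heads", 1), ("tails", 2)):
        win = 1 if guess == coin else -1
        ev += 0.5 * awakenings * win  # fair coin, $1 per awakening
    return ev

print(expected_donation("heads"))  # -0.5
print(expected_donation("tails"))  # 0.5
```

Always guessing tails maximizes the charity's expected money, and nothing in the calculation depended on what it feels like to be Sleeping Beauty.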
But selfish preferences - like being at a comfortable temperature, eating a candy bar, or going skydiving - are trickier, because they do depend on the agent's subjective experience. This trickiness really shines through when there are actions that can change the number of copies of the agent. For recent posts about these sorts of situations, see Pallas' sim game and Jan_Ryzmkowski's tropical paradise. I'm going to propose a model that makes answering these sorts of questions almost as easy as playing for charity.
To quote Jan's problem: