The question at hand is: is there some change to a problem that changes the anthropic probabilities but is guaranteed not to change ADT decisions?
Is there? It would require some sort of evidence that would change your own anthropic probabilities, but that would not change the opinion of any outside observer if they saw it.
For example, if my anthropic knowledge says that I'm an agent at a specific point in time, a change in how long Sleeping Beauty stays awake in different "worlds" will change how likely I am to find myself there overall.
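To make that concrete, here's a toy sketch (the one-hour and eight-hour awakening lengths are made-up numbers, not from any standard version of the problem): if "finding myself here now" means picking a random awake moment, then each world gets weighted by how long I'm awake in it.

```python
# Toy sketch: "which world am I in, given that I'm awake right now?", treating
# the current moment as a random awake-moment, so longer awakenings get
# proportionally more weight. The durations are made-up illustration values.

def p_heads_given_awake_now(p_heads=0.5, hours_if_heads=1.0, hours_if_tails=8.0):
    # Weight each world by (prior probability) x (total awake time in that world).
    w_heads = p_heads * hours_if_heads
    w_tails = (1 - p_heads) * hours_if_tails
    return w_heads / (w_heads + w_tails)

print(p_heads_given_awake_now())                     # ~0.111: the long tails awakening dominates
print(p_heads_given_awake_now(hours_if_tails=1.0))   # 0.5: equal durations give back the prior
```

Stretching or shrinking the tails awakening moves that number around, even though nothing about the coin or the number of awakenings has changed.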
Doesn't feel like that would work... if you remember how long you've been awake, that makes you into slightly different agents, and if the duration of the awakening gives you any extra info, it would show up in ADT too. And if you forget how long you've been awake, that's just Sleeping Beauty with more awakenings...
Define "individual impact" as the belief that your own actions have no correlations with those of your copies (the belief your decisions control all your copies is "total impact"). Then ADT basically has the following equivalences:
If those equivalences are true, it seems that we cannot vary the anthropic probabilities without varying the ADT decision.
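For a concrete feel of how the decision tracks the probabilities, here is a minimal sketch of the standard Sleeping Beauty bet (the bet itself and the cost values are assumed for illustration; I'm assuming total impact, i.e. all awakenings decide alike, and varying only the aggregation): each awakening is offered a bet costing c that pays 1 if the coin was tails. A total-utilitarian ADT agent accepts exactly when c < 2/3, the SIA/thirder betting threshold, while an average-utilitarian one accepts when c < 1/2, the SSA/halfer threshold.

```python
# Minimal sketch (assumed setup, not from the thread): the standard Sleeping Beauty
# bet, evaluated from the ADT god's-eye view. Each awakening is offered a bet that
# costs `cost` and pays 1 if the coin was tails; all awakenings decide the same way.

def adt_value_of_accepting(cost, aggregate):
    heads_payoffs = [-cost]                 # heads world: one awakening, the bet loses
    tails_payoffs = [1 - cost, 1 - cost]    # tails world: two awakenings, each bet wins
    return 0.5 * aggregate(heads_payoffs) + 0.5 * aggregate(tails_payoffs)

def total(payoffs):                          # total utilitarian: sum over copies
    return sum(payoffs)

def average(payoffs):                        # average utilitarian: mean over copies
    return sum(payoffs) / len(payoffs)

for cost in (0.4, 0.6, 0.8):
    print(cost,
          adt_value_of_accepting(cost, total) > 0,     # accepts iff cost < 2/3 (thirder/SIA odds)
          adt_value_of_accepting(cost, average) > 0)   # accepts iff cost < 1/2 (halfer/SSA odds)
```

Changing the aggregation changes which betting odds look fair, which is the sense in which the probability and the decision move together.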
EDIT: Expanded first point a bit.
if you remember how long you've been awake, that makes you into slightly different agents, and if the duration of the awakening gives you any extra info, it would show up in ADT too.
Hm. One could try and fix it by splitting each point in time into different "worlds," like you suggest below. But the updating from time (let's assume there's no clock to look at, so the curves are smooth) would rely on the subjective probabilities, which you are avoiding. The update ratio is P(feels like 4 hours | heads) / P(feels like 4 hours | tails).
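To see what that ratio depends on, here's a sketch with a completely made-up noise model (Gaussian "felt duration" around true lengths of two hours for heads and eight hours for tails; these numbers are pure assumptions): the ratio is just a ratio of subjective likelihoods of the felt duration, i.e. exactly the kind of quantity you're trying not to use.

```python
# Sketch of the update ratio with an assumed noise model: with no clock to look at,
# "feels like 4 hours" is noisy evidence about the true awakening length, here taken
# (purely for illustration) to be 2 hours under heads and 8 hours under tails.

from math import exp, pi, sqrt

def normal_pdf(x, mean, sd):
    return exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * sqrt(2 * pi))

felt_hours = 4.0
p_feel_given_heads = normal_pdf(felt_hours, mean=2.0, sd=1.5)
p_feel_given_tails = normal_pdf(felt_hours, mean=8.0, sd=1.5)

# Update ratio P(feels like 4 hours | heads) / P(feels like 4 hours | tails):
print(p_feel_given_heads / p_feel_given_tails)   # ~14.4 with these made-up curves
```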
A few weeks ago at a Seattle LW meetup, we were discussing the Sleeping Beauty problem and the Doomsday argument. We talked about how framing the Sleeping Beauty problem as a decision problem basically solves it, and then got the idea of using the same heuristic on the Doomsday problem. I think you would need to specify more about the Doomsday setup than is usually done to do this.
We didn't spend a lot of time on it, but it got me thinking: Are there papers on trying to gain insight into the Doomsday problem and other anthropic reasoning problems by framing them as decision problems? I'm surprised I haven't seen this approach talked about here before. The idea seems relatively simple, so perhaps there is some major problem that I'm not seeing.