A few weeks ago at a Seattle LW meetup, we were discussing the Sleeping Beauty problem and the Doomsday argument. We talked about how framing the Sleeping Beauty problem as a decision problem basically solves it, and then got the idea of applying the same heuristic to the Doomsday problem. I think you would need to specify more about the Doomsday setup than is usually done to do this.
We didn't spend a lot of time on it, but it got me thinking: Are there papers on trying to gain insight into the Doomsday problem and other anthropic reasoning problems by framing them as decision problems? I'm surprised I haven't seen this approach talked about here before. The idea seems relatively simple, so perhaps there is some major problem that I'm not seeing.
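To make the "decision problem" framing concrete, here is a minimal Monte Carlo sketch (my own illustration, not something from the meetup discussion). A $1 even-odds bet on the coin is settled at each awakening: heads means one awakening, tails means two. The payoffs show why the betting version of the problem has an unambiguous answer, whichever credence you favor.

```python
import random

def expected_payoffs(n_trials=100_000, seed=0):
    """Average per-experiment payoff of always guessing 'heads' vs.
    always guessing 'tails' in Sleeping Beauty, when a $1 even-odds
    bet is settled at each awakening (heads -> 1 awakening, tails -> 2)."""
    rng = random.Random(seed)
    totals = {"heads": 0, "tails": 0}
    for _ in range(n_trials):
        coin = "heads" if rng.random() < 0.5 else "tails"
        awakenings = 1 if coin == "heads" else 2
        for guess in totals:
            # +$1 per awakening if the guess matches the coin, else -$1
            totals[guess] += awakenings * (1 if guess == coin else -1)
    return {g: t / n_trials for g, t in totals.items()}

print(expected_payoffs())
```

Always guessing tails nets about +$0.50 per experiment and always guessing heads about -$0.50, because the tails world gets to settle the bet twice. The optimal betting policy falls straight out of the payoff structure, with no need to first settle what the "real" anthropic probability is.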
I intended: "In both of these problems there are two worlds, 'H' and 'T', which have equal 'no anthropics' probabilities of 0.5."
In retrospect, my example of evidence (stopping some of the experiments) wasn't actually what I wanted, since an outside observer would notice it. To alter the anthropic probabilities in isolation, you'd need to change the structure of the coinflips and people-creation.
But you can't mess with the probabilities in isolation. Suppose I were an SIA agent, for instance; then you couldn't change my anthropic probabilities without changing non-anthropic facts about the world.
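A small sketch of that last point (my own illustration, assuming the standard statement of SIA): SIA weights each world by its prior times the number of observers in it, so the only way to move an SIA agent's credence is to change the priors or the observer counts, both of which are non-anthropic facts about the world's structure.

```python
def sia_credence(prior_h=0.5, observers_h=1, observers_t=2):
    """SIA credence in world H: each world is weighted by
    (prior probability) * (number of observers it contains)."""
    w_h = prior_h * observers_h
    w_t = (1 - prior_h) * observers_t
    return w_h / (w_h + w_t)

# With 1 observer in H and 2 in T (the Sleeping Beauty structure):
print(sia_credence())                # -> 1/3
# Equalize the observer counts and the credence reverts to the prior:
print(sia_credence(observers_t=1))   # -> 1/2
```

The credence shifts from 1/3 back to 1/2 only because the people-creation structure changed, which is exactly the sense in which the anthropic probabilities can't be messed with "in isolation."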