A few weeks ago at a Seattle LW meetup, we were discussing the Sleeping Beauty problem and the Doomsday argument. We talked about how framing the Sleeping Beauty problem as a decision problem basically solves it, and then got the idea of applying the same heuristic to the Doomsday argument. I think you would need to specify more about the Doomsday setup than is usually done in order to do this.
We didn't spend a lot of time on it, but it got me thinking: are there papers that try to gain insight into the Doomsday argument and other anthropic reasoning problems by framing them as decision problems? I'm surprised I haven't seen this approach discussed here before. The idea seems relatively simple, so perhaps there is some major problem that I'm not seeing.
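To make the decision framing concrete, here is a minimal Monte Carlo sketch of the Sleeping Beauty case (my own toy setup, not anything we worked out at the meetup): Beauty bets on the coin at each awakening, and the "right" betting behavior falls out of whichever payoff structure you specify rather than from a freestanding probability. The `scoring` parameter and dollar amounts are assumptions I've added for illustration.

```python
import random

def simulate(n_trials=100_000, guess="tails", scoring="per_awakening"):
    """Estimate Beauty's average payoff for always guessing `guess`.

    Assumed setup (illustrative, not from the post): a fair coin is flipped;
    heads -> one awakening, tails -> two awakenings with memory erased in
    between. At each awakening Beauty guesses the coin's outcome and earns
    $1 per correct guess. `scoring` controls whether she is paid at every
    awakening or only once per experiment.
    """
    total = 0.0
    for _ in range(n_trials):
        coin = random.choice(["heads", "tails"])
        awakenings = 1 if coin == "heads" else 2
        correct = int(guess == coin)
        if scoring == "per_awakening":
            total += correct * awakenings  # paid at each awakening
        else:                              # "per_experiment"
            total += correct               # paid once per experiment
    return total / n_trials

if __name__ == "__main__":
    for scoring in ("per_awakening", "per_experiment"):
        print(scoring,
              "guess heads:", round(simulate(guess="heads", scoring=scoring), 3),
              "guess tails:", round(simulate(guess="tails", scoring=scoring), 3))
```

Under per-awakening scoring, always guessing tails pays about $1.00 per experiment versus about $0.50 for heads (the "thirder"-style betting behavior), while per-experiment scoring makes the two guesses equal (the "halfer"-style behavior). The point of the framing is that once the payoff structure is fixed, the dispute dissolves; the open question is what the analogous payoff structure would even be for the Doomsday setup.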
It is exactly one of those probabilities.
Can you spell out the full setup?