A few weeks ago at a Seattle LW meetup, we were discussing the Sleeping Beauty problem and the Doomsday argument. We talked about how framing the Sleeping Beauty problem as a decision problem basically solves it, and then got the idea of applying the same heuristic to the Doomsday argument. I think you would need to specify more about the Doomsday setup than is usually done in order to do this.
We didn't spend a lot of time on it, but it got me thinking: are there papers that try to gain insight into the Doomsday argument and other anthropic reasoning problems by framing them as decision problems? I'm surprised I haven't seen this approach talked about here before. The idea seems relatively simple, so perhaps there is some major problem that I'm not seeing.
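To make the decision framing concrete, here is a minimal sketch (my own toy formalization, not taken from any paper) of how Sleeping Beauty looks once you specify a payoff structure: the optimal betting policy depends on whether correct guesses are rewarded per awakening or per experiment, which is roughly where the halfer/thirder dispute was hiding.

```python
# Sleeping Beauty as a decision problem (toy sketch).
# Heads (prob 1/2): one awakening. Tails (prob 1/2): two awakenings.
# Beauty commits in advance to always guessing `guess` at every awakening.

def expected_payout(guess, per_awakening):
    """Expected dollars from the policy of always guessing `guess`.

    per_awakening=True : $1 for each awakening at which the guess is correct.
    per_awakening=False: $1 once per experiment if the guess is correct.
    """
    heads_world = 1 if guess == "heads" else 0          # 1 awakening, correct iff heads
    tails_awakenings = 2 if per_awakening else 1
    tails_world = tails_awakenings if guess == "tails" else 0
    return 0.5 * heads_world + 0.5 * tails_world

for per_awakening in (True, False):
    scheme = "per awakening " if per_awakening else "per experiment"
    print(scheme,
          "heads:", expected_payout("heads", per_awakening),
          "tails:", expected_payout("tails", per_awakening))

# per awakening : heads 0.5, tails 1.0 -> bet tails, i.e. act on 1/3 "odds"
# per experiment: heads 0.5, tails 0.5 -> indifferent, i.e. act on 1/2 "odds"
```

Once you have to name the payoff structure, the "what is the probability?" question stops doing any independent work, which is why the decision framing feels like it solves the problem.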
Stuart's anthropic decision theory uses utility-counting rather than probabilities, so it does not satisfy the axioms of a utility maximizer. There's therefore a big difference between a typical decision problem, where you are maximizing expected utility, and the decisions of anthropic decision theory, which are not made that way.
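To illustrate the contrast I have in mind (a rough toy version under my own reading, not code from the paper): a standard expected-utility maximizer needs a self-locating credence P(heads | awake), which is exactly the number halfers and thirders argue about, while the utility-counting rule just sums payoffs over all the awakenings that act on the chosen policy, weighted by the objective coin probability, and never assigns such a credence.

```python
# Toy contrast (my own framing, not from the paper), reusing the
# per-awakening $1 payoff from the Sleeping Beauty sketch above.

def credence_based_bet(p_heads_given_awake):
    """Standard expected utility: needs a self-locating credence for heads
    upon awakening -- precisely the number halfers and thirders dispute."""
    payoff = {"heads": p_heads_given_awake * 1,
              "tails": (1 - p_heads_given_awake) * 1}
    return max(payoff, key=payoff.get)

def utility_counting_bet():
    """Counting rule: sum $1 over every awakening at which the policy is
    correct, weighted only by the objective coin probability (1/2).
    No credence about 'which awakening am I in' is ever assigned."""
    payoff = {"heads": 0.5 * 1,   # heads world: 1 correct awakening
              "tails": 0.5 * 2}   # tails world: 2 correct awakenings
    return max(payoff, key=payoff.get)

print(credence_based_bet(1/3))   # 'tails'
print(credence_based_bet(1/2))   # tie at 0.5 each (max breaks it arbitrarily)
print(utility_counting_bet())    # 'tails', with no self-locating credence needed
```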
Is there an elaborated critique of this paper/idea somewhere?