A few weeks ago at a Seattle LW meetup, we were discussing the Sleeping Beauty problem and the Doomsday argument. We talked about how framing the Sleeping Beauty problem as a decision problem basically solves it, and then got the idea of applying the same heuristic to the Doomsday argument. I think you would need to specify more about the Doomsday setup than is usually done in order to do this.
We didn't spend a lot of time on it, but it got me thinking: are there papers that try to gain insight into the Doomsday argument and other anthropic reasoning problems by framing them as decision problems? I'm surprised I haven't seen this approach discussed here before. The idea seems relatively simple, so perhaps there is some major problem that I'm not seeing.
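To make the decision framing concrete, here is a rough sketch of the kind of betting setup I have in mind (the $1 tickets, the payouts, and the settlement rules are just one illustrative way to set it up, not anything canonical):

```python
import random

def expected_profit(heads_payout, settle_per_awakening, n=200_000, seed=1):
    """Monte Carlo a simple Sleeping Beauty betting game.

    Each experiment: flip a fair coin; Heads -> 1 awakening, Tails -> 2.
    At every awakening Beauty buys a $1 ticket that pays `heads_payout`
    if the coin landed Heads.  If settle_per_awakening is False, only one
    ticket per experiment is charged and honoured.
    Returns her average profit per experiment.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        heads = rng.random() < 0.5
        tickets = (1 if heads else 2) if settle_per_awakening else 1
        total += tickets * ((heads_payout if heads else 0.0) - 1.0)
    return total / n

# Fair odds depend on the settlement rule, not on a free-floating "credence":
# per-awakening settlement breaks even at 3:1 (the "thirder" price),
# per-experiment settlement breaks even at 2:1 (the "halfer" price).
for settle_per_awakening, payout in [(True, 3.0), (False, 2.0)]:
    rule = "per awakening" if settle_per_awakening else "per experiment"
    print(f"settled {rule}, payout {payout}: "
          f"expected profit ~ {expected_profit(payout, settle_per_awakening):+.3f}")
```

Under these rules the fair payout is 3:1 if tickets are settled per awakening and 2:1 if settled per experiment, so the "thirder" and "halfer" answers just correspond to different decision problems. Presumably the Doomsday case would need an analogous specification of what is being decided and how it pays off.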
I was simplifying when I said "didn't care". And if there's negative utility around, things are different (I was envisaging the doomsday scenario as something along the lines of painless universal sterility). But let's go with your model, and say that doomsday is something painful (a slow collapse of civilization, for instance). How will average and total altruists act?
Well, an average altruist would not accept an increase in the risk of doom in exchange for other gains. Doom is very bad and would mean a small population, so the suffering is not diluted and the average badness is large. Gains in the case where doom doesn't happen would be averaged over a very large population, so they count for little per person. The average altruist is therefore willing to sacrifice a lot to avoid doom (but note this argument needs doom = small population AND bad stuff).
What about the total altruist? Well, they still don't like the doom. But for them, the benefits in the "no doom" scenario are not diluted. They would be willing to accept a slight increase in the risk of doom in exchange for some benefit to a lot of people in the no-doom situation. They would turn on the reactor that could provide limitless free energy to the whole future of the human species, even if there were a small risk of catastrophic meltdown.
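To see the asymmetry with numbers, here is a quick sketch of that reactor decision (the populations, welfare levels, and the 1% meltdown risk are all made up for illustration): the average altruist maximizes expected per-capita welfare, the total altruist maximizes expected total welfare.

```python
# All numbers here are invented for illustration; the structure is the point.
P_DOOM_WITH_REACTOR = 0.01   # hypothetical meltdown risk added by the reactor
NO_DOOM_POP = 10**10         # huge future population if doom is avoided
NO_DOOM_WELFARE = 1.0        # per-capita welfare without the reactor
REACTOR_BONUS = 0.1          # per-capita benefit of free energy, no-doom world only
DOOM_POP = 10**8             # small total population if collapse happens
DOOM_WELFARE = -10.0         # painful collapse: strongly negative per-capita welfare

def worlds(reactor_on):
    """Return (probability, population, per-capita welfare) for each outcome."""
    p_doom = P_DOOM_WITH_REACTOR if reactor_on else 0.0
    bonus = REACTOR_BONUS if reactor_on else 0.0
    return [
        (1 - p_doom, NO_DOOM_POP, NO_DOOM_WELFARE + bonus),
        (p_doom, DOOM_POP, DOOM_WELFARE),
    ]

def expected_average(reactor_on):
    return sum(p * w for p, _, w in worlds(reactor_on))

def expected_total(reactor_on):
    return sum(p * n * w for p, n, w in worlds(reactor_on))

for label, value in [("average", expected_average), ("total", expected_total)]:
    on, off = value(True), value(False)
    choice = "turns the reactor on" if on > off else "leaves it off"
    print(f"{label} altruist: off={off:.4g}, on={on:.4g} -> {choice}")
# average altruist: off=1, on=0.989 -> leaves it off
# total altruist: off=1e+10, on=1.088e+10 -> turns the reactor on
```

With these numbers the average altruist leaves the reactor off while the total altruist turns it on. And the doom world's per-capita figure is only this negative because the sufferers make up a large share of a small population; if doom struck after a vast flourishing future, the suffering would be diluted in the average, which is one way to read the "small population AND bad stuff" caveat above.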
So the fact that these two would reason differently is not unexpected. But what I'm trying to get at is that there is no single, simple "doomsday argument" for ADT. There are many different scenarios (you need to specify the situation, the probabilities, the altruisms of the agents, and the decisions they are facing), and in some of them something resembling the classical doomsday argument pops up, and in others it doesn't.