A few weeks ago at a Seattle LW meetup, we were discussing the Sleeping Beauty problem and the Doomsday argument. We talked about how framing the Sleeping Beauty problem as a decision problem basically solves it, and then got the idea of applying the same heuristic to the Doomsday problem. I think you would need to specify more about the Doomsday setup than is usually done in order to do this.
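For concreteness, here's roughly the kind of decision framing we had in mind, as a toy sketch - the per-awakening even-odds bet on Tails is just one way of making it concrete, and the function names are mine:

```python
import random

def run_experiment(bet_tails, stake=1.0, payout=1.0):
    """One run of Sleeping Beauty as a betting problem.

    On each awakening Beauty may stake `stake` on Tails; each winning bet
    pays `payout` (per-awakening scoring). Returns total winnings for the
    whole experiment.
    """
    coin = random.choice(["H", "T"])
    awakenings = 1 if coin == "H" else 2
    total = 0.0
    for _ in range(awakenings):
        if bet_tails:
            total += payout if coin == "T" else -stake
    return total

def average(policy, trials=100_000):
    return sum(run_experiment(policy) for _ in range(trials)) / trials

# At even odds, always betting Tails has positive expected value per
# experiment, simply because Tails awakenings are counted twice:
print("bet Tails :", average(True))   # ~ +0.5 per experiment
print("no bet    :", average(False))  # 0
```

Once you specify how the bets pay out, the "right" behaviour falls out of the expected payoff calculation, without ever settling what Beauty's credence "really" is.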
We didn't spend a lot of time on it, but it got me thinking: Are there papers on trying to gain insight into the Doomsday problem and other anthropic reasoning problems by framing them as decision problems? I'm surprised I haven't seen this approach talked about here before. The idea seems relatively simple, so perhaps there is some major problem that I'm not seeing.
Many thanks for your comments; it's nice to have someone engaging with it.
That said, I have to disagree with you! You're right, the whole point was to avoid using "anthropic probabilities" (though not "subjective probabilities" in general; I may have misused the word "objective" in that context). But the terms "altruistic", "selfish" and so on do correspond to actual utility functions.
"Selfless" means your utility function has no hedonistic content, just an arbitrary utility function over world states that doesn't care about your own identity. "Altruistic" means your utility function is composed of some amalgam of the hedonistic utilities of you and others. And "selfish" means your utility function is equal to your own hedonistic utility.
The full picture is more complex - as always - and in some contexts it would be best to say "non-indexical" for selfless and "indexical" for selfish. But be that as it may, these are honest, actual utility functions that you are trying to maximise, not "A"'s over utility functions. Some might be "A"'s over hedonistic utility functions, but they are still genuine utilities: I am only altruistic if I actually want other people to be happy (or to achieve their goals, or similar); their happiness is a term in my utility function.
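To make that concrete, here is one toy way of writing the three types down - the welfare numbers and the particular non-hedonistic feature are placeholders of mine, not anything from the paper:

```python
# A "world" lists everyone's hedonistic welfare plus some impersonal feature;
# `me` is the index of the person whose utility we're evaluating.

def selfless_utility(world, me):
    # Non-indexical: an arbitrary function of the world state that
    # ignores which person "me" happens to be.
    return 10.0 if world["knowledge_preserved"] else 0.0

def altruistic_utility(world, me):
    # Some amalgam of everyone's hedonistic welfare (own included).
    return sum(world["welfare"])

def selfish_utility(world, me):
    # Indexical: equal to the agent's own hedonistic welfare only.
    return world["welfare"][me]

world = {"welfare": [3.0, 7.0], "knowledge_preserved": True}
for u in (selfless_utility, altruistic_utility, selfish_utility):
    print(u.__name__, u(world, me=0))
```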
Then ADT can be described (for the non-selfish agents) colloquially as: "before the universe was created, if I wanted to maximise U, what decision theory would I want any U-maximiser to follow?" (i.e., what decision theory maximises the expected U in this context?). So ADT is doubly a utility-maximising theory: first pick the utility-maximising decision theory, then the agents equipped with it will try to maximise utility in accordance with that theory.
(For selfish/indexical agents it's a bit trickier, and I have to use symmetry or "veil of ignorance" arguments; we can get back to that.)
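As a toy illustration of that "pre-universe" calculation - the incubator-style setup, the ticket, and the prices are my own choices for this sketch, not anything from the paper - suppose each awakened copy can buy a ticket that pays 1 if the coin landed Tails, and we evaluate per-awakening policies from before the coin is flipped:

```python
def expected_U(buy, ticket_price, aggregate):
    """Pre-universe expected utility of a per-awakening policy.

    One copy is awakened in the Heads world, two in the Tails world.
    Each copy either buys (or not) a ticket costing `ticket_price` that
    pays 1 if the coin landed Tails. `aggregate` says how the copies'
    hedonistic payoffs enter U (sum for a total-utilitarian altruist,
    average for an average-utilitarian, say).
    """
    outcomes = []
    for coin, copies in (("H", 1), ("T", 2)):
        payoff_per_copy = 0.0
        if buy:
            payoff_per_copy = (1.0 if coin == "T" else 0.0) - ticket_price
        outcomes.append(aggregate([payoff_per_copy] * copies))
    return 0.5 * outcomes[0] + 0.5 * outcomes[1]

def average(xs):
    return sum(xs) / len(xs)

for price in (0.55, 0.45):
    print(price,
          "total-utilitarian buys:",
          expected_U(True, price, sum) > expected_U(False, price, sum),
          "average-utilitarian buys:",
          expected_U(True, price, average) > expected_U(False, price, average))
```

The total-utilitarian ends up buying whenever the price is below 2/3, i.e. betting as if Tails had probability 2/3, while the average-utilitarian buys only below 1/2. The difference comes entirely from the utility function, not from any anthropic probability.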
Furthermore, ADT can deal perfectly well with any other type of uncertainty - such as whether you are or aren't in a Sleeping Beauty problem, or when you have partial evidence that it's Monday, or whatever. There's no need to restrict it to the simple cases. Admittedly, for the presumptuous philosopher I restricted to a simple setup, with simple binary altruistic/selfish utilities, but that was for illustrative purposes. Come up with a more complex problem, with more complicated utilities, and ADT will give a correspondingly more complex answer.
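For instance, here is one way the same calculation might look with partial evidence about the day - the noisy clue, its reliability, and the ticket price are all made up for illustration:

```python
from itertools import product

def expected_total(policy, price=0.6, reliability=0.8):
    """Pre-universe expected (summed) payoff of a signal-contingent policy.

    `policy` maps a noisy day-clue ("mon"/"tue") to a bet decision; the clue
    names the true day with probability `reliability`. Heads means one
    Monday awakening, Tails means Monday and Tuesday awakenings. Each bet
    costs `price` and pays 1 if the coin landed Tails.
    """
    total = 0.0
    for coin, prob in (("H", 0.5), ("T", 0.5)):
        days = ["mon"] if coin == "H" else ["mon", "tue"]
        for day in days:
            for signal in ("mon", "tue"):
                p_signal = reliability if signal == day else 1 - reliability
                if policy[signal]:
                    total += prob * p_signal * ((1.0 if coin == "T" else 0.0) - price)
    return total

# Enumerate the four deterministic signal-contingent policies and pick
# the one with the highest pre-universe expected payoff.
policies = [dict(zip(("mon", "tue"), bets)) for bets in product((False, True), repeat=2)]
best = max(policies, key=expected_total)
print(best, expected_total(best))
```

With these numbers the best policy is to bet only when the clue says "Tuesday" - a more complex answer, but produced by exactly the same machinery.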
Okay, let's look at the "selfish" anthropic preference laid out in your paper, in two different problems.
In both of these problems there are two worlds, "H" and "T," which have equal "no anthropics" probabilities of 0.5. There are two people you could be in T and one person you could be in H. Standard Sleeping Beauty so far.
However, because I like comparing things to utility, I'm going to specify two sets of probabilities. In Problem 1, the probability of being each person is 1/3. In Problem 2, the probability of...
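(For concreteness, here is how I'd encode that setup - my own notation, not the paper's - with Problem 2's assignment left to be filled in per the rest of the comparison:)

```python
# Two worlds with equal "no anthropics" probability; one person-slot in H,
# two in T.
worlds = {"H": {"prob": 0.5, "people": ["h1"]},
          "T": {"prob": 0.5, "people": ["t1", "t2"]}}

# Problem 1: the probability of being each person is 1/3.
problem1 = {("H", "h1"): 1/3, ("T", "t1"): 1/3, ("T", "t2"): 1/3}

# Problem 2's assignment goes here.
problem2 = {}

def expected_selfish_utility(being_probs, hedonic_payoffs):
    """Weight each person-slot's own hedonistic payoff by the probability
    of being that person, and sum."""
    return sum(p * hedonic_payoffs[slot] for slot, p in being_probs.items())
```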