A few weeks ago at a Seattle LW meetup, we were discussing the Sleeping Beauty problem and the Doomsday argument. We talked about how framing the Sleeping Beauty problem as a decision problem basically solves it, and then got the idea of applying the same heuristic to the Doomsday problem. I think you would need to specify more about the Doomsday setup than is usually done to do this.
We didn't spend a lot of time on it, but it got me thinking: Are there papers on trying to gain insight into the Doomsday problem and other anthropic reasoning problems by framing them as decision problems? I'm surprised I haven't seen this approach talked about here before. The idea seems relatively simple, so perhaps there is some major problem that I'm not seeing.
To translate this into ADT terms: in problem 2 the coin is fair; in problem 1 the coin is (1/3, 2/3) on (H, T) (or perhaps the coin was fair, but we got extra information that pushed the posterior odds to (1/3, 2/3)).
Then ADT (and SSA) says that selfish agents should bet up to 2/3 of a candy bar on Tails in problem 1, and up to 1/2 in problem 2. That is exactly the same as what you were saying, so I don't understand why you think that ADT would make identical choices in both problems.
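To make those break-even stakes concrete, here is a minimal sketch (my framing, not anything from the discussion above): for a ticket paying one candy bar on Tails, the expected value of paying a stake x is P(Tails) − x, so the largest acceptable stake equals P(Tails), i.e. 2/3 in problem 1 and 1/2 in problem 2. The function names are illustrative.

```python
def break_even_stake(p_tails: float, payout: float = 1.0) -> float:
    """Largest stake with non-negative expected value: E = p * payout - stake."""
    return p_tails * payout

def expected_value(stake: float, p_tails: float, payout: float = 1.0) -> float:
    """Expected candy bars gained by paying `stake` for a ticket that pays on Tails."""
    return p_tails * payout - stake

print(break_even_stake(2 / 3))     # problem 1: bet up to ~0.667 of a candy bar
print(break_even_stake(1 / 2))     # problem 2: bet up to 0.5 of a candy bar
print(expected_value(0.5, 2 / 3))  # ~0.167 > 0: a half-bar stake is still favorable in problem 1
```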
The reason that's "exactly as I was saying" is that you adjusted a free parameter to fit the problem, after you learned the subjective probabilities. The free parameter was which world to regard as "normal" and which one to apply a correction to. If you already know that the (1/2, 1/4, 1/4) problem is the "normal" one, then you have already solved the probability problem and should just maximize expected utility.
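To illustrate that last step (a toy sketch with made-up payoffs, not the commenter's setup): once the probabilities over the three worlds are fixed at (1/2, 1/4, 1/4), the remaining "decision" part is ordinary expected-utility maximization over the available actions.

```python
def best_action(world_probs, payoffs_by_action):
    """Return the action whose probability-weighted payoff is largest.

    payoffs_by_action maps an action name to one payoff per world,
    aligned with world_probs.
    """
    def eu(action):
        return sum(p * u for p, u in zip(world_probs, payoffs_by_action[action]))
    return max(payoffs_by_action, key=eu)

# Hypothetical payoffs: pay 0.4 of a candy bar for a ticket that pays 1
# in either of the two non-"normal" worlds, or decline the bet.
probs = (1 / 2, 1 / 4, 1 / 4)
payoffs = {"accept": (-0.4, 0.6, 0.6), "decline": (0.0, 0.0, 0.0)}
print(best_action(probs, payoffs))  # "accept": EU = 0.1 vs 0.0 for declining
```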