Is there an elaborated critique of this paper/idea somewhere?
I don't think so. But I can give you the critique here :)
SUMMARY: If you don't actually calculate expected utility, don't expect to automatically make choices that correspond to a relevant utility function. Also, don't name a specific function for a general feeling - you might accidentally start calling the function when you just mean the feeling, and then you get really wrong answers.
-
I've already made the obvious criticism - since Stuart's anthropic decision theory is basically a way to avoid using subjective probabilities (also known as "probabil...
A few weeks ago at a Seattle LW meetup, we were discussing the Sleeping Beauty problem and the Doomsday argument. We talked about how framing the Sleeping Beauty problem as a decision problem basically solves it, and then we got the idea of applying the same heuristic to the Doomsday problem. I think you would need to specify more about the Doomsday setup than is usually done to make this work.
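To make the decision framing concrete, here is a minimal sketch of one standard way to turn Sleeping Beauty into a bet. The payoff rules and the `expected_value_of_accepting` helper are my own illustration, not anything from the paper or the meetup: the point is just that once you fix whether bets settle per awakening or per experiment, the optimal action is pinned down.

```python
# Minimal sketch of Sleeping Beauty as a decision problem (hypothetical
# betting setup, for illustration only).
#
# At each awakening, Beauty may buy a ticket that pays 1 if the coin
# landed heads, at some price. Under heads she wakes once; under tails
# she wakes twice and, being a consistent agent, makes the same choice
# at each awakening.

def expected_value_of_accepting(price, per_awakening=True):
    """Expected net payoff of accepting the ticket at every awakening.

    per_awakening=True:  each awakening's bet is settled separately.
    per_awakening=False: only one bet counts per experiment.
    """
    if per_awakening:
        # Heads (prob 1/2): one awakening, ticket pays 1, costs `price`.
        # Tails (prob 1/2): two awakenings, two losing tickets.
        return 0.5 * (1 - price) + 0.5 * (2 * (0 - price))
    else:
        # One settled bet per experiment, regardless of awakenings.
        return 0.5 * (1 - price) + 0.5 * (0 - price)

# Find the break-even price under each payoff rule by bisection
# (the expected value is linear and decreasing in the price).
for per_awakening in (True, False):
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if expected_value_of_accepting(mid, per_awakening) > 0:
            lo = mid  # still profitable: break-even price is higher
        else:
            hi = mid
    label = "per-awakening" if per_awakening else "per-experiment"
    print(f"{label} payoffs: break-even price ~ {lo:.3f}")
# per-awakening  -> ~1/3 (the "thirder" bet)
# per-experiment -> ~1/2 (the "halfer" bet)
```

In this framing the "what is P(heads)?" dispute dissolves: the same agent correctly bets at 1/3 or at 1/2 depending only on the stipulated payoff structure, with no anthropic probability needed.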
We didn't spend a lot of time on it, but it got me thinking: are there papers that try to gain insight into the Doomsday problem and other anthropic reasoning problems by framing them as decision problems? I'm surprised I haven't seen this approach discussed here before. The idea seems relatively simple, so perhaps there's some major problem I'm not seeing.