A few weeks ago at a Seattle LW meetup, we were discussing the Sleeping Beauty problem and the Doomsday argument. We talked about how framing the Sleeping Beauty problem as a decision problem basically solves it, and then got the idea of applying the same heuristic to the Doomsday problem. I think you would need to specify the Doomsday setup in more detail than is usually done in order to do this.
We didn't spend a lot of time on it, but it got me thinking: are there papers that try to gain insight into the Doomsday problem and other anthropic reasoning problems by framing them as decision problems? I'm surprised I haven't seen this approach discussed here before. The idea seems relatively simple, so perhaps there is some major problem that I'm not seeing.
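For concreteness, here is a minimal sketch of what I mean by the decision framing (a toy Monte Carlo of the per-awakening betting setup; the betting setup itself is an assumption on my part, since the bare problem doesn't specify one):

```python
import random

def expected_profit_per_flip(ticket_price, trials=100_000):
    """Per-awakening betting version of Sleeping Beauty (toy Monte Carlo).

    On each awakening Beauty buys, for `ticket_price`, a ticket paying 1
    if the coin landed Heads.  Heads -> 1 awakening, Tails -> 2 awakenings.
    Returns her average total profit per coin flip if she always buys.
    """
    total = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2
        payoff = 1.0 if heads else 0.0
        total += awakenings * (payoff - ticket_price)
    return total / trials

# Buying at 1/3 is roughly break-even; buying at 1/2 loses about 0.25 per flip.
print(expected_profit_per_flip(1/3))
print(expected_profit_per_flip(1/2))
```

The break-even price is 1/3 here only because the bet pays off once per awakening; if it paid off once per coin flip, the break-even price would be 1/2. That's the sense in which specifying the decision setup dissolves the dispute, and why I suspect the Doomsday setup needs the same treatment.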
Yes. My paper goes into the Doomsday problem, and what I basically show is that it's very hard to phrase it in terms of anthropic decision theory. To over-simplify: if you're selfish, there may be a doomsday problem, but you wouldn't care about it; if you're selfless, you'd care about it, but you wouldn't have one. You can design utility functions that allow the doomsday argument to go through, but they are quite exotic.
Read the paper to get an idea of what I mean, then feel free to ask me any questions you want.
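Very roughly, here is a toy numerical sketch of that selfish/selfless asymmetry (my toy numbers and a crude SSA-style birth-rank update, not the setup actually used in the paper):

```python
# Toy sketch of the selfish/selfless asymmetry (toy numbers, not the
# paper's setup). Two hypotheses about the total number of people ever
# born, and an SSA-style update on my birth rank.

prior = {"small": 0.5, "large": 0.5}
total_people = {"small": 200, "large": 20_000}
my_birth_rank = 100                     # I exist under both hypotheses

# SSA-style update: P(world | my rank) is proportional to prior / total_people.
# This is the doomsday shift towards the small world.
unnorm = {w: prior[w] / total_people[w] for w in prior}
z = sum(unnorm.values())
posterior = {w: unnorm[w] / z for w in unnorm}
print(posterior)                        # ~0.99 on "small"

# Decision: pay a personal cost c to give every future person a benefit b.
c, b = 1.0, 0.01
future_people = {w: total_people[w] - my_birth_rank for w in total_people}

# Selfish utility: future people don't appear in it at all, so however
# strong the doomsday shift above is, it never changes this decision.
print("selfish EU of acting:", -c)

# Total-utilitarian utility: each world is weighted by its number of
# beneficiaries. The 1/total_people factor in the SSA update is offset by
# the proportionally larger number of future people, so the large world
# still dominates the expected benefit despite its tiny posterior.
selfless_eu = sum(posterior[w] * (future_people[w] * b - c) for w in posterior)
print("selfless EU of acting:", selfless_eu)
```

The point of the toy is only to show where the probability shift enters and where the choice of utility function can make it decision-irrelevant; the paper does this properly within anthropic decision theory.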
Stuart, thanks for contributing here. I've posted a couple of discussion threads myself on anthropic reasoning and the Doomsday Argument, and your paper was mentioned in them, which prompted me to read it.
I'm interested in how you would like to apply ADT in Big World cases, where e.g. there are infinitely many civilizations of observers. Some of them expand off their home planets, others don't, and we are trying to estimate the fraction (limiting frequency) of civilizations that expand, when conditioning on the indexical evidence that we're now living on ...