I'm currently reading some papers on various problems related to anthropics, and I'm planning to write up a post when I'm finished. To this end, it would be useful to know what positions people hold so that I know which arguments I need to address. So what is your position on anthropics?

Various anthropic problems include the Sleeping Beauty problem, the Absent-Minded Driver, the Dr Evil problem, the Doomsday Argument, the Presumptuous Philosopher, the Sailor's Child, the Fermi Paradox, and the Argument from Fine-Tuning.


It seems that the recent debate has made it fairly clear that LW is totally split on these questions.

Yeah, but I wanted to see where people stand after all these posts.

My position on anthropics is that anthropics is grounded in updateless decision theory, which AFAIK leads in practice to full non-indexical conditioning.

My position on anthropics is that anthropics is grounded in updateless decision theory,

Agreed.

which AFAIK leads in practice to full non-indexical conditioning.

It doesn't lead to that; what it leads to depends a lot on your utility function and how you value your copies: https://www.youtube.com/watch?v=aiGOGkBiWEo
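
To make that concrete, here's a toy Sleeping Beauty calculation (the 1:1 bet and the two ways of valuing copies below are illustrative assumptions on my part, not something taken from the linked video):

```python
# Toy Sleeping Beauty bet: at each awakening Beauty may take a 1:1 bet on tails.
# Heads -> 1 awakening, Tails -> 2 awakenings.
P_HEADS, P_TAILS = 0.5, 0.5
STAKE = 1.0  # win +1 on tails, lose 1 on heads, per accepted bet

def policy_value(aggregate):
    """Expected value of 'accept the bet at every awakening', where
    `aggregate` says how the winnings of the different copies are valued."""
    heads_payoffs = [-STAKE]          # one awakening, bet loses
    tails_payoffs = [+STAKE, +STAKE]  # two awakenings, bet wins twice
    return P_HEADS * aggregate(heads_payoffs) + P_TAILS * aggregate(tails_payoffs)

total = sum                             # add up every copy's winnings
average = lambda xs: sum(xs) / len(xs)  # value copies by their average

print(policy_value(total))    # 0.5 -> worth taking 1:1 odds (thirder-style betting)
print(policy_value(average))  # 0.0 -> indifferent at 1:1 odds (halfer-style betting)
```

Same decision problem, same evidence; only the way the copies' payoffs are aggregated changes, and the recommended odds change with it.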

Question: In your paper on Anthropic Decision Theory, you wrote "If the philosopher is selfish, ADT reduces to the SSA and the philosopher will bet at 1:1 odds" in reference to the Presumptuous Philosopher problem. I don't quite see how that follows: it seems to assume that the philosopher automatically exists in both universes. But if we assume there are more philosophers in the larger universe, then there must be at least some philosophers who lack a counterpart in the smaller universe. So it seems that ADT only reduces to SSA when the philosopher identifies with nothing more than his physical self in the current universe AND either his physical self in the alternate universe or, if he doesn't exist in the smaller universe, exactly one other self there.

I'll pre-emptively note that obviously he can never observe himself not existing. The point is that in order for the token to be worth $0.50 in the dollar, there must be a version of him buying a losing token in the counterfactual. We can't get from "If the actual universe is small, I exist" to "If the actual universe is large, I would also exist in the counterfactual". Probably the easiest way to understand this is to pretend you have a button that does nothing if the universe is small, but, if the universe is large, shrinks it, killing everyone who wouldn't have existed had the universe been small. There's no way to know that pressing the button won't kill you. On the other hand, if you do have that knowledge, then you have more knowledge than everyone else, who can't know the same thing about themselves.
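
To spell out the arithmetic behind that claim (the 50/50 prior over the two universe sizes and the $1 payoff are assumptions made purely for illustration):

```python
# Presumptuous Philosopher token: pays $1 if the universe is large, $0 if small.
P_SMALL, P_LARGE = 0.5, 0.5
PAYOFF_IF_LARGE = 1.0

# (a) A philosopher guaranteed a counterpart in both universes averages over
# both worlds, one of which contains a version of him holding a worthless token:
with_counterpart = P_SMALL * 0.0 + P_LARGE * PAYOFF_IF_LARGE
print(with_counterpart)  # 0.5 -> the token is worth $0.50, i.e. 1:1 odds

# (b) A philosopher with no counterpart in the small universe only ever buys
# the token in worlds where the universe is large, so there is no losing
# branch to average against:
without_counterpart = PAYOFF_IF_LARGE
print(without_counterpart)  # 1.0
```

The $0.50 valuation in (a) is exactly the step that presupposes a counterpart buying a losing token in the small universe; drop that, as in (b), and the 1:1 betting argument doesn't go through.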

I agree with this for scenarios where there is uncertainty about the number of copies, but FNIC is about scenarios where the number of copies is certain, e.g. the variation on Sleeping Beauty where you know it's Monday.

IMO, it's a pretty minor issue, which shows our confusion about identity more than about probability. The notion of copies (and of resets) breaks a lot of intuition, but careful identification of propositions and payoffs removes the confusion.
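
For instance (a toy example with numbers picked purely for illustration), the same 1:1 Sleeping Beauty bet comes out differently depending only on whether it is settled once per awakening or once per experiment:

```python
# Same 1:1 bet on tails, same coin; only the settlement rule changes.
P_HEADS, P_TAILS = 0.5, 0.5

# Settled once per awakening (heads -> 1 awakening, tails -> 2):
per_awakening = P_HEADS * (1 * -1.0) + P_TAILS * (2 * +1.0)
print(per_awakening)   # 0.5 -> accepting is profitable

# Settled once per experiment, regardless of how many awakenings occur:
per_experiment = P_HEADS * (-1.0) + P_TAILS * (+1.0)
print(per_experiment)  # 0.0 -> accepting is break-even
```

Once the proposition being bet on and the payoff rule are pinned down like this, the remaining disagreement about "the" probability doesn't change what you should do.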