Benja comments on Why (anthropic) probability isn't enough - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (21)
Doesn't the isomorphism between them only hold if your SSA reference class is exactly the set of agents responsible for your decision?
(This question is also for Stuart -- by the way, thanks for writing this, the exposition of the divided responsibility idea was useful!)
In the anthropic decision theory formalism (see the link I posted in answer to LukeASomers), SSA-like behaviour emerges from average utilitarianism (and also from selfish agents, but that case is more complicated). The whole reference-class complexity, in this context, is the complexity of deciding the class of agents that you average over.
Yes, I haven't studied the LW sequence in detail, but I've read the arxiv.org draft, so I'm familiar with the argument. :-) (Are there important things in the LW sequence that are not in the draft, so that I should read that too? I remember you did something where agents had both a selfish and a global component to their utility function, which wasn't in the draft...) But from the tech report I got the impression that you were talking about actual SSA-using agents, not about the emergence of SSA-like behavior from ADT; e.g. on the last page, you say
which sounds as if you're contrasting two different approaches in the tech report and in the draft, not as if they're both about the same thing?
[And sorry for misspelling you earlier -- corrected now, I don't know what happened there...]
What I really meant is that the things in the tech report are fine as far as they go, but the anthropic decision theory paper is where the real results are.
I agree with you that the isomorphism only holds if your reference class is suitable (and for selfish agents, you need to mess around with precommitments). The tech report does make some simplifying assumptions (as its point was not to find the full conditions for a rigorous isomorphism result, but to illustrate that anthropic probabilities are not enough on their own).
Thanks!