
Luke_A_Somers comments on Why (anthropic) probability isn't enough - Less Wrong Discussion

Post author: Stuart_Armstrong 13 December 2012 04:09PM




Comment author: Luke_A_Somers 13 December 2012 09:01:56PM 0 points

I drew the distinction earlier between subjective probability and betting behavior, with a tale rather like the non-anthropic Sleeping Beauty table presented here.

It seems to me like the only difference between SSA + total responsibility and SIA + divided responsibility is which of these you're talking about when you speak of probability. (SSA brings you to the subjective probability, which must then be corrected to get the proper bets; SIA gives you the right bets to make, which must be corrected to get the proper subjective probability.)
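The isomorphism being described can be checked numerically in the standard Sleeping Beauty bet. The sketch below is a toy illustration with assumed payoffs (pay a cost c at each awakening for a ticket worth 1 per awakening if the coin landed tails), not a calculation from the post itself:

```python
# Toy Sleeping Beauty bet: pay c at each awakening; each ticket pays 1 if tails.
# Illustrates the claimed isomorphism: SSA probabilities with "total"
# responsibility and SIA probabilities with "divided" responsibility
# endorse exactly the same bets.

def eu_ssa_total(c):
    # SSA: P(heads) = P(tails) = 1/2.
    # Total responsibility: under tails your one decision fixes both
    # awakenings, so you count the full 2 * (1 - c) payoff.
    return 0.5 * (-c) + 0.5 * 2 * (1 - c)

def eu_sia_divided(c):
    # SIA: P(heads) = 1/3, P(tails) = 2/3.
    # Divided responsibility: under tails you are one of two deciders, so
    # you count half of the total payoff 2 * (1 - c), i.e. (1 - c).
    return (1 / 3) * (-c) + (2 / 3) * (1 - c)

if __name__ == "__main__":
    # The two expected utilities differ in scale but never in sign,
    # so both rules accept exactly the same bets (those with c < 2/3).
    for k in range(11):
        c = 0.1 * k
        assert (eu_ssa_total(c) > 0) == (eu_sia_divided(c) > 0)
```

Both expressions are positive precisely when c < 2/3 (one is 1.5 times the other), which is the sense in which the two packages only disagree about which intermediate quantity gets called "probability".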

Comment author: Stuart_Armstrong 13 December 2012 11:22:32PM 1 point

To deal with these ideas correctly, you need to use anthropic decision theory.

The best current online version of this is on Less Wrong, split into six articles (I'm finishing up an improved version that I hope to publish):

http://lesswrong.com/search/results?cx=015839050583929870010%3A-802ptn4igi&cof=FORID%3A11&ie=UTF-8&q=anthropic+decision+theory&sa=Search&siteurl=lesswrong.com%2Flw%2Ffxb%2Fwhy_anthropic_probability_isnt_enough%2F81pw%3Fcontext%3D3&ref=lesswrong.com%2Fmessage%2Finbox%2F&ss=3562j568790j25

Comment author: Benja 13 December 2012 09:33:45PM 1 point

It seems to me like the only difference between SSA + total, and SIA + divided, is which of these you're talking about when you speak of probability

Doesn't the isomorphism between them only hold if your SSA reference class is exactly the set of agents responsible for your decision?

(This question is also for Stuart -- by the way, thanks for writing this, the exposition of the divided responsibility idea was useful!)

Comment author: Stuart_Armstrong 13 December 2012 11:26:30PM 0 points

In the anthropic decision theory formalism (see the link I posted in answer to Luke_A_Somers), SSA-like behaviour emerges from average utilitarianism (and also from selfish agents, but that's more complicated). The whole reference class complexity, in this context, is the complexity of deciding the class of agents that you average over.
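A minimal sketch of the claim that utility functions, rather than anthropic probabilities, drive the behaviour. Using the same toy Sleeping Beauty bet as above (assumed payoffs: pay c per awakening, win 1 per awakening if tails), a total utilitarian ends up betting at SIA-like ("thirder") odds while an average utilitarian bets at SSA-like ("halfer") odds:

```python
# Toy comparison: the same bet evaluated by a total utilitarian and by an
# average utilitarian, each deciding for all awakenings at once.
# Bet: pay c per awakening; each ticket pays 1 if the coin landed tails.

def eu_total_utilitarian(c):
    # Sums welfare across awakenings: under tails there are two awakenings,
    # each netting (1 - c), so the summed gain is 2 * (1 - c).
    return 0.5 * (-c) + 0.5 * 2 * (1 - c)

def eu_average_utilitarian(c):
    # Averages welfare over the awakenings in each world: under tails the
    # average gain per awakening is just (1 - c).
    return 0.5 * (-c) + 0.5 * (1 - c)

if __name__ == "__main__":
    # Total utilitarian accepts iff c < 2/3 (the SIA/thirder fair price);
    # average utilitarian accepts iff c < 1/2 (the SSA/halfer fair price).
    print(eu_total_utilitarian(0.6), eu_average_utilitarian(0.6))
```

At a price like c = 0.6 the two agents disagree: the total utilitarian accepts while the average utilitarian refuses, which is the SIA-like versus SSA-like behaviour Stuart refers to.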

Comment author: Benja 14 December 2012 12:10:49AM 1 point

Yes, I haven't studied the LW sequence in detail, but I've read the arxiv.org draft, so I'm familiar with the argument. :-) (Are there important things in the LW sequence that are not in the draft, so that I should read that too? I remember you did something where agents had both a selfish and a global component to their utility function, that wasn't in the draft...) But from the techreport I got the impression that you were talking about actual SSA-using agents, not about the emergence of SSA-like behavior from ADT; e.g. on the last page, you say

Finally, it should be noted that a lot of anthropic decision problems can be solved without needing to work out the anthropic probabilities and impact responsibility at all (see for instance the approach in (Armstrong, 2012)).

which sounds as if you're contrasting two different approaches in the techreport and in the draft, not as if they're both about the same thing?

[And sorry for misspelling you earlier -- corrected now, I don't know what happened there...]

Comment author: Stuart_Armstrong 14 December 2012 12:24:00AM 2 points

What I really meant is: the things in the tech report are fine as far as they go, but the anthropic decision theory paper is where the real results are.

I agree with you that the isomorphism only holds if your reference class is suitable (and for selfish agents, you need to mess around with precommitments). The tech report does make some simplifying assumptions (as its point was not to find the full conditions for a rigorous isomorphism result, but to illustrate that anthropic probabilities are not enough on their own).

Comment author: Benja 14 December 2012 01:56:34AM 0 points

Thanks!