Manfred comments on Anthropic decision theory for selfish agents - Less Wrong Discussion

Post author: Beluga, 21 October 2014 03:56PM

Comment author: Manfred, 21 October 2014 08:48:09PM

Needless to say that all the bold statements I'm about to make are based on an "inside view". [...]

Spare us :P Not only are Stuart's advantages not really that big, but it's worthwhile to discuss things here. Something something title of this subreddit.

The consensus view on LW seems to be that much of the SSA vs. SIA debate is confused and due to discussing probabilities detached from decision problems of agents with specific utility functions.

Hm, this makes me sad, because it means I've been unsuccessful. I've been trying to hammer on the fact that an agent's probability assignments are determined by the information it has. Since SSA and SIA describe different pieces of information ("the worlds I could be in are mutually exclusive and exhaustive events" versus "the people I could be are mutually exclusive and exhaustive events"), they quite naturally lead to assigning different probabilities. If you specify what information your agent is supposed to have, that answers the question of which probability distribution to use.
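
To make the contrast concrete, here is a minimal sketch (mine, not something from the original comment) of how the two pieces of information yield different distributions in the standard Sleeping Beauty setup; the world priors are the usual toy numbers, and the dictionary layout and variable names are illustrative assumptions only:

```python
# Sleeping Beauty: heads -> one awakening, tails -> two awakenings.
worlds = {
    "heads": {"prior": 0.5, "observers": ["Monday"]},
    "tails": {"prior": 0.5, "observers": ["Monday", "Tuesday"]},
}

def ssa(worlds):
    # SSA-style information: "which world I'm in" partitions first, so each
    # world keeps its prior and splits that mass evenly among its observers.
    probs = {}
    for name, w in worlds.items():
        for obs in w["observers"]:
            probs[(name, obs)] = w["prior"] / len(w["observers"])
    return probs

def sia(worlds):
    # SIA-style information: every possible observer is its own event,
    # weighted by its world's prior, then normalised over all observers.
    weights = {
        (name, obs): w["prior"]
        for name, w in worlds.items()
        for obs in w["observers"]
    }
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

for rule in (ssa, sia):
    dist = rule(worlds)
    p_heads = sum(p for (world, _), p in dist.items() if world == "heads")
    print(f"{rule.__name__.upper()}: P(heads) = {p_heads:.3f}")
```

Running this gives P(heads) = 0.500 under the SSA-style partition and 0.333 under the SIA-style one: the familiar halfer/thirder split, falling straight out of which events the agent treats as mutually exclusive and exhaustive.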

Comment author: Stuart_Armstrong, 22 October 2014 01:04:16PM

Not only are Stuart's advantages not really that big

My advantages might be bigger than you think... oops, I've just been informed that this is not actually a penis-measuring competition, but an attempt to get at a truth. ^_^ Please continue.