lackofcheese comments on "Solving" selfishness for UDT - Less Wrong

Post author: Stuart_Armstrong 27 October 2014 05:51PM




Comment author: lackofcheese 04 November 2014 12:06:57AM 1 point

I think there are some rather significant assumptions underlying the idea that they are "non-relevant". At the very least, if the agents were distinguishable, I think you should indeed be willing to pay to make n higher. On the other hand, if they're indistinguishable then it's a more difficult question, but the anthropic averaging I suggested in my previous comments leads to absurd results.

What's your proposal here?

Comment author: Stuart_Armstrong 04 November 2014 10:21:09AM 1 point

> the anthropic averaging I suggested in my previous comments leads to absurd results.

The anthropic averaging leads to absurd results only because it isn't a utility function over states of the world. Under heads, it ranked 50% Roger + 50% Jack differently from the average utility of those two worlds.
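As a small numeric illustration of the point (the utility numbers and the stand-in averaging rule below are hypothetical, since the original rule from the earlier comments isn't quoted here): a valuation that is a genuine utility function over world-states must value a 50/50 mixture of two worlds at the average of their utilities; any rule that is nonlinear in the probabilities will disagree with that, which is the failure mode described above.

```python
# Hypothetical utilities for the two possible worlds under heads.
u_roger = 4.0
u_jack = 10.0

# A utility function over states of the world is linear in probabilities,
# so the 50% Roger + 50% Jack mixture is valued at the plain average:
expected = 0.5 * u_roger + 0.5 * u_jack  # 7.0

# A stand-in "anthropic averaging" rule that is nonlinear in the
# probabilities (here, a harmonic-style average, chosen purely for
# illustration) values the same mixture differently:
anthropic = 2.0 / (1.0 / u_roger + 1.0 / u_jack)  # 40/7 ≈ 5.71

# The two valuations disagree, so the anthropic rule cannot be a
# utility function over world-states.
assert anthropic != expected
```

The disagreement (7.0 vs. ≈5.71) is exactly the sense in which such a rule ranks the mixture differently from the average utility of the two worlds.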