lackofcheese comments on "Solving" selfishness for UDT - Less Wrong

Post author: Stuart_Armstrong 27 October 2014 05:51PM


Comment author: lackofcheese 31 October 2014 02:25:01PM * 1 point

I don't think that's entirely correct; SSA, for example, is a halfer position and it does exclude worlds where you don't exist, as do many other anthropic approaches.

Personally I'm generally skeptical of averaging over agents in any utility function.

Comment author: Stuart_Armstrong 04 November 2014 10:23:32AM 1 point

SSA, for example, is

Which is why I don't use anthropic probability: it leads to exactly these kinds of absurdities. The halfer position is defined in the top post (as is the thirder position), and your setup mixes aspects of both approaches. If that mixture is incoherent, then SSA is incoherent, which I have no problem with. SSA != halfer.

Comment author: Stuart_Armstrong 03 November 2014 05:06:22PM 1 point

Averaging makes a lot of sense if the number of agents is going to be increased and decreased in non-relevant ways.

E.g.: you are an upload. Soon, you are going to experience eating a chocolate bar, then stubbing your toe, then playing a tough but intriguing game. During this time, you will be simulated on n computers, all running exactly the same program of you having these experiences, without any deviations. But n may vary from moment to moment. Should you be willing to pay to make n higher during pleasant experiences, or lower during unpleasant ones, given that you will never detect the change?
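The upload example can be sketched numerically. Under average utilitarianism, n identical copies of the same experience have the same average utility as one copy, so paying to vary n buys nothing; under total utilitarianism, utility scales with n. The specific utility numbers below are assumptions for illustration only.

```python
# Sketch of the upload example: an experience stream simulated
# identically on n computers, valued under total vs. average
# utilitarianism. Utility numbers are illustrative assumptions.

def total_utility(per_copy_utility, n):
    # Total utilitarianism: each of the n identical copies counts.
    return per_copy_utility * n

def average_utility(per_copy_utility, n):
    # Average utilitarianism: identical copies leave the average
    # unchanged, so the result is independent of n.
    return per_copy_utility

chocolate, toe_stub = 5.0, -3.0  # assumed utilities for the experiences

# A totaller pays to raise n during the chocolate bar and lower it
# during the toe stub; an averager is indifferent to n throughout.
print(total_utility(chocolate, 3))    # 15.0
print(total_utility(toe_stub, 3))     # -9.0
print(average_utility(chocolate, 3))  # 5.0, same as with n = 1
```

This is why averaging makes the variation in n "non-relevant": the average over indistinguishable copies simply cancels n out.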

Comment author: lackofcheese 04 November 2014 12:06:57AM 1 point

I think there are some rather significant assumptions underlying the idea that they are "non-relevant". At the very least, if the agents were distinguishable, I think you should indeed be willing to pay to make n higher. On the other hand, if they're indistinguishable then it's a more difficult question, but the anthropic averaging I suggested in my previous comments leads to absurd results.

What's your proposal here?

Comment author: Stuart_Armstrong 04 November 2014 10:21:09AM 1 point

the anthropic averaging I suggested in my previous comments leads to absurd results.

The anthropic averaging leads to absurd results only because it wasn't a utility function over states of the world. Under heads, it ranked 50% Roger + 50% Jack differently from the average utility of those two worlds.
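The distinction can be made concrete. A genuine utility function over world-states must score a 50/50 lottery between two worlds as the mean of their utilities, whereas anthropic averaging weights each agent's personal utility by anthropic credences (e.g. SSA-style credences after excluding worlds where "I" don't exist), which need not be 50/50. All numbers below are assumed for illustration; "Roger" and "Jack" follow the names used in this thread, and the 0.75/0.25 credences are a hypothetical choice to exhibit the disagreement.

```python
# Minimal sketch of the contrast: expected utility over world-states
# vs. anthropic averaging over "who I might be". All utilities and
# credences are illustrative assumptions, not values from the post.

# World A: only Roger exists; world B: only Jack exists.
world_utils = {"A": 4.0, "B": 2.0}

def world_state_value(utils, lottery):
    # Expected utility over worlds: a function of world-states only.
    return sum(p * utils[w] for w, p in lottery)

def anthropic_average(per_agent_utils, credences):
    # Average personal utility over the agents I might be, weighted
    # by anthropic credences rather than by objective world chances.
    return sum(c * per_agent_utils[a] for a, c in credences)

# Fair-coin lottery over the two worlds: mean of the world utilities.
print(world_state_value(world_utils, [("A", 0.5), ("B", 0.5)]))  # 3.0

# Hypothetical anthropic credences favouring Roger's world (0.75/0.25):
print(anthropic_average({"Roger": 4.0, "Jack": 2.0},
                        [("Roger", 0.75), ("Jack", 0.25)]))      # 3.5
```

Because the second value depends on the credences and not just on the two world-states and their chances, anthropic averaging fails to be a utility function over states of the world, which is the source of the absurdity Stuart points to.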