endoself comments on Your existence is informative - Less Wrong

Post author: KatjaGrace 30 June 2012 02:46PM




Comment author: endoself 12 July 2012 09:53:00PM  2 points

Simpler in that you don't need to transform it before it is useful here.

Standard expected utility maximization requires a probability distribution, but the problem is that in anthropic scenarios it is not obvious what the correct distribution is or how to correctly update it. ADT uses the prior distribution from before 'observing one's own existence', so it circumvents the need to perform anthropic updates.

I'm not sure which solution to your candybar problem you think is correct, because I'm not sure which probability distribution you think is correct; but all the solutions in the paper that disagree with yours are in fact what you would want to precommit to given the associated utility function, and are therefore correct.

Comment author: Manfred 13 July 2012 05:33:20AM  1 point

> Standard expected utility maximization requires a probability distribution, but the problem is that in anthropic scenarios it is not obvious what the correct distribution is and how to correctly update it.

If it was solved in a way that made it obvious for, say, the Sleeping Beauty problem, would that then be the right way to do it?

> all the solutions in the paper that disagree with yours actually are what you would want to precommit to given the associated utility function and are therefore correct.

I think you're just making up utility functions here - is a real utility function (that is, a function of the state of the world) ever calculated in the paper, other than the use of the individual utility function? And if we're talking about regular ol' utility functions, why are ADT's decisions necessarily invariant under changing time-like uncertainty (normal Sleeping Beauty problem) to space-like uncertainty (Sleeping Beauty problem with duplicates)?

Comment author: endoself 24 July 2012 09:02:11PM  1 point

> If it was solved in a way that made it obvious for, say, the Sleeping Beauty problem, would that then be the right way to do it?

I would tentatively agree. To some extent the problem is one of choosing what it means for a distribution to be correct. I think that this is what Stuart's ADT does (though I don't think it's a full solution to this).

You would also still need to account for acausal influence. Just picking a satisfactory probability distribution doesn't ensure that you will one box on Newcomb's problem, for example.
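To make the Newcomb point concrete, here is a hedged sketch (not from the thread) of why picking a probability distribution alone doesn't settle one-boxing: the answer depends on whether your expectation tracks the acausal correlation with the predictor. The payoffs and the predictor accuracy below are assumed for illustration.

```python
# Assumed setup: opaque box holds $1,000,000 iff the predictor
# foresaw one-boxing; the transparent box always holds $1,000.
ACCURACY = 0.99  # assumed predictor accuracy

# Acausal-style expectation: your choice is correlated with
# (is evidence about) the prediction.
ev_one_box = ACCURACY * 1_000_000
ev_two_box = ACCURACY * 1_000 + (1 - ACCURACY) * (1_000_000 + 1_000)

# Causal-style expectation: box contents are fixed at decision time,
# so for ANY prior p over the contents, two-boxing nets $1,000 more.
p = 0.5  # an arbitrary prior over "money already in the opaque box"
cdt_one_box = p * 1_000_000
cdt_two_box = p * 1_000_000 + 1_000

print(ev_one_box > ev_two_box)    # acausal reasoning favors one-boxing
print(cdt_two_box > cdt_one_box)  # causal reasoning favors two-boxing
```

Both calculations use the same prior; they disagree because only the first accounts for the acausal influence of the decision on the prediction.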

> I think you're just making up utility functions here - is a real utility function (that is, a function of the state of the world) ever calculated in the paper, other than the use of the individual utility function?

Is this quote what you had in mind? It seems like calculating a utility function to me, but I'm not sure what you mean by "other than the use of the individual utility function".

> In the tails world, future copies of myself will be offered the same deal twice. Any profit they make will be dedicated to hugging orphans/drowning kittens, so from my perspective, profits (and losses) will be doubled in the tails world. If my future copies will buy the coupon for £x, there would be an expected £0.5(2 × (−x + 1) + 1 × (−x + 0)) = £(1 − 3/2x) going towards my goal. Hence I would want my copies to buy whenever x < 2/3.

That is from page 7 of the paper.
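The quoted calculation can be checked directly; a minimal sketch, keeping the quote's assumptions (prior probability 0.5 for each world, two copies offered the deal in the tails world, one in the heads world, coupon worth £1 only in tails):

```python
def expected_value(x):
    """Expected contribution toward the goal if copies buy at price x."""
    tails = 2 * (-x + 1)   # two copies each pay x for a coupon worth 1
    heads = 1 * (-x + 0)   # one copy pays x for a worthless coupon
    return 0.5 * (tails + heads)  # = 1 - (3/2) * x

# The expectation is positive exactly when x < 2/3, matching the quote.
print(expected_value(0.5) > 0)  # True
print(expected_value(0.7) > 0)  # False
```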

> And if we're talking about regular ol' utility functions, why are ADT's decisions necessarily invariant under changing time-like uncertainty (normal sleeping beauty problem) to space-like uncertainty (sleeping beauty problem with duplicates)?

They're not necessarily invariant under such changes. All the examples in the paper were, but that's because they all used rather simple utility functions.

Comment author: Manfred 25 July 2012 12:15:21AM  1 point

> And if we're talking about regular ol' utility functions, why are ADT's decisions necessarily invariant under changing time-like uncertainty (normal sleeping beauty problem) to space-like uncertainty (sleeping beauty problem with duplicates)?

> They're not necessarily invariant under such changes. All the examples in the paper were, but that's because they all used rather simple utility functions.

Hm, yes, you're right about that.

Anyhow, I'm done here - I think you've gotten enough repetitions of my claim that if you're not using probabilities, you're not doing expected utility :) (okay, that was an oversimplification)