Morendil comments on What is Wei Dai's Updateless Decision Theory? - Less Wrong

37 Post author: AlephNeil 19 May 2010 10:16AM


Comment author: AlephNeil 19 May 2010 01:59:05PM  3 points

The Sleeping Beauty scenario is problematic to discuss because it's posed as a question about probabilities rather than utilities. Let's consider Parfit's Hitchhiker instead. If you'd like some concrete numbers, suppose you get 0 utility if you're left in the desert, 10 if you're taken back to civilisation, but then lose 1 if you have to pay. So the utilities in the 'Util' boxes on my diagram are 9, 10, 0, in that order.

Now, if you have an opportunity to act at all, then you can say with certainty where you are in the tree-diagram: you're at the one-and-only Player node. This corresponds to "I've already been taken to my destination, and now I need to decide whether to pay the driver." Conditional upon being at that node, it's obvious that you maximise your utility by not paying (10 instead of 9).

However, if you make no assumptions about 'the state of the world' (i.e. whether or not you were offered a ride) and ask "Which of the two strategies maximises my expected utility at the outset?" then the strategy where you pay up will get utility 9, and the one that doesn't will get 0.
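The comparison above can be sketched numerically. This is a hypothetical illustration, not part of the original comment; it assumes (as the scenario does) a perfectly accurate driver who offers a ride only if he predicts you will pay.

```python
# Utilities from the comment: 9 (ride, then pay), 10 (ride, no pay), 0 (desert).
UTIL_RIDE_AND_PAY = 9
UTIL_RIDE_NO_PAY = 10
UTIL_DESERT = 0

def utility_at_player_node(pay: bool) -> int:
    """Conditional on already having been driven to town."""
    return UTIL_RIDE_AND_PAY if pay else UTIL_RIDE_NO_PAY

def expected_utility_at_outset(strategy_pays: bool) -> int:
    """Before the driver predicts your strategy (perfect predictor assumed)."""
    if strategy_pays:
        return UTIL_RIDE_AND_PAY   # driver predicts payment, gives the ride
    return UTIL_DESERT             # driver predicts refusal, drives off

# Conditional on reaching the Player node, not paying wins (10 vs 9);
# unconditionally, the paying strategy wins (9 vs 0).
```

The two functions make the divergence explicit: the same choice ("pay or not") is evaluated against different reference points, which is exactly the tension the comment describes.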

So looking at the unconditional expected utility basically means that you deliberately 'forget' the information you have about where you are in the game and just look for "a strategy for the blue box" that will maximize your utility over many start-to-finish iterations of the game.

Comment author: Morendil 19 May 2010 04:12:41PM  0 points

Let's consider Parfit's Hitchhiker instead.

I don't know where the probabilities are supposed to be in that graphical model, so I don't know how to apply my understanding of "expectation". I'm not even sure what I'm supposed to be uncertain about, so I'm not sure how to apply my understanding of "probability".

I don't know what the semantics of nodes and arrows are, either. Labeling the arrows and the "Util" boxes would help.

The Sleeping Beauty scenario is problematic to discuss

That might justify removing it from the OP, or at least moving it out of the critical path across the inferential distance.

because it's posed as a question about probabilities rather than utilities

It isn't clear to me how you can discuss expectations without discussing probabilities.

In the case of Newcomb's Problem - if Omega is only assumed to have some finite accuracy, say 0.9 - I can at least start to see how to make it about probabilities and expectations. I'll take a shot at it sometime.