Normal_Anomaly comments on Rationality is Systematized Winning - Less Wrong
I agree with your two problems, but the trouble with your alternative, like so many others presented here, is that it doesn't speak as strongly to the distinction EY means to draw: between wanting to be seen to have followed the forms of maximising expected utility, and actually seeking to maximise expected utility.
Also, of course, an agent who at each moment makes the decision that maximises expected future utility defects against Clippy in both the Prisoner's Dilemma and Parfit's Hitchhiker scenarios, and arguably two-boxes against Omega; by EY's definition that counts as "not winning", because of the negative consequences of Clippy/Omega knowing that that's what we do.
I think I'm misunderstanding you here because this looks like a contradiction. Why does making the decision that maximizes expected utility necessarily have negative consequences? It sounds like you're working under a decision theory that involves preference reversals.
I'm talking about the difference between CDT, which stiffs the lift-giver in Parfit's Hitchhiker and so never gets a lift, and other decision theories.
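The CDT failure here can be sketched in a few lines of toy code. This is only an illustration under assumed payoffs (the function names and numbers are mine, not from the thread): the driver reliably predicts whether the hitchhiker will pay once in town, and a purely causal reasoner, deciding afresh in town, sees no causal benefit to paying and so is predictably left in the desert.

```python
# Toy model of Parfit's Hitchhiker. Payoffs and names are illustrative:
# paying the driver costs 100; being stranded in the desert costs 1_000_000.

PAY_COST = 100
DESERT_COST = 1_000_000

def cdt_pays_in_town() -> bool:
    # Once in town, the ride has already happened; paying has no causal
    # benefit, so a CDT agent refuses to pay.
    return False

def commitment_agent_pays_in_town() -> bool:
    # An agent that honours the policy it would have wanted to commit to
    # pays, because being predictably a payer is what earns it the ride.
    return True

def outcome(pays_in_town) -> int:
    # The driver is modelled as a reliable predictor: they simulate the
    # hitchhiker's in-town decision and only give a ride to a predicted payer.
    if pays_in_town():
        return -PAY_COST     # gets the ride, then pays in town
    return -DESERT_COST      # left in the desert

print(outcome(cdt_pays_in_town))               # -1000000: CDT is stranded
print(outcome(commitment_agent_pays_in_town))  # -100: pays, gets the lift
```

The point of the sketch is just that the CDT agent's in-town refusal, visible to the predictor, is exactly what prevents it from ever getting the lift.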
Oh, I see. I thought you were saying an optimal decision theory stiffed the lift-giver.
I hope I've become clearer in the four years since I wrote that!
. . . did not notice the date-stamp. Good thing thread necros are allowed here.