Vladimir_Nesov comments on Rationalists lose when others choose - Less Wrong

-10 Post author: PhilGoetz 16 June 2009 05:50PM




Comment author: orthonormal 16 June 2009 08:02:20PM *  17 points

It's truly amazing just how much of the posts and discussions on LW you repeatedly ignore, Phil. There is a plurality opinion here that it can be rational to execute a strategy which includes actions that don't maximize utility when considered as one-shot actions, but such that the overall strategy does better.

I can genuinely understand disagreement on this proposal, but could you at least acknowledge that the rest of us exist and say things like "first-order rationality finds revenge irrational" or "altruistic sacrifices that violate causal decision theory" instead?

Comment author: Vladimir_Nesov 17 June 2009 03:55:51PM *  0 points

"first-order rationality finds revenge irrational"

I'm not sure what you mean by "first-order rationality". But whatever the definition, it seems that it's not first-order rationality itself that finds revenge irrational, but your own value judgment, which depends on preferences. An agent may well like hurting people who previously hurt it (people who have the property of having previously hurt it).

Comment author: orthonormal 17 June 2009 04:26:41PM *  1 point

Huh, a Google search returns muddled results. I had understood first-order (instrumental) rationality to mean something like causal decision theory: given a utility function, you extrapolate the probable consequences of each of your immediate options and pick the one that maximizes expected utility. The problem with this is that it doesn't take into account the consequences of being modeled by others, and thus leaves you open to being exploited (Newcomblike problems, Chicken) or losing out in other ways (known-duration Prisoner's Dilemma).
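[Editor's note: the "extrapolate and maximize" procedure described above can be sketched as follows. This is a minimal illustration; the function names and the example scenario are hypothetical, not part of the thread.]

```python
def cdt_choose(options, outcome_probs, utility):
    """Pick the option maximizing expected utility over its own causal outcomes.

    options:       list of immediate actions available to the agent
    outcome_probs: action -> list of (outcome, probability) pairs
    utility:       outcome -> float (the agent's utility function)
    """
    def expected_utility(action):
        return sum(p * utility(o) for o, p in outcome_probs(action))
    # First-order / causal reasoning: evaluate each option only by the
    # consequences it directly causes, ignoring how being predictable
    # to other agents affects which situations you end up in at all.
    return max(options, key=expected_utility)
```

The gap orthonormal points to is that this ranking never considers how others' models of `cdt_choose` itself change the options and outcomes the agent will be offered.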

I was also taking for granted what I assumed to be the setup with the revenge scenario: that the act of revenge would be a significant net loss to you (by your utility function) as well as to your target. (E.g. you're the President, and the Russians just nuked New York but promised to stop there if you don't retaliate; do you launch your nukes at Russia?)

Phil's right that a known irrational disposition towards revenge (which evolved in us for this reason) could have deterred the Russians from nuking NYC in the first place, whereas they know they can get away with it if you are known to be a causal decision theorist. But the form of decision process I'm considering (optimizing over strategies rather than individual actions, while taking into account others' likely decision algorithms given a known strategy for me) also knowably avenges New York, and thus deters the Russians.
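[Editor's note: a toy payoff model of this deterrence point, with illustrative made-up numbers. It assumes the attacker knows the defender's decision procedure and best-responds to it; the payoff values are not from the comment.]

```python
# Defender's payoffs (assumed for illustration):
NO_ATTACK = 0          # deterrence works, nothing happens
ATTACKED = -100        # New York is lost, no retaliation follows
MUTUAL_RUIN = -1000    # retaliation after an attack: worse for everyone

def attacker_attacks(defender_known_to_retaliate):
    # The attacker strikes only if it expects to get away with it.
    return not defender_known_to_retaliate

def defender_payoff(commits_to_retaliate):
    if attacker_attacks(commits_to_retaliate):
        # A causal decision theorist, once attacked, won't retaliate:
        # at that point retaliation only moves -100 to -1000.
        return ATTACKED
    return NO_ATTACK

# The knowably-vengeful strategy commits to the locally worse action
# (retaliation), and precisely because of that it never has to use it.
```

Here `defender_payoff(True)` beats `defender_payoff(False)`: the strategy that includes a one-shot-irrational action does better overall, which is the plurality view orthonormal describes upthread.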

EDIT: First paragraph was a reply to Vladimir's un-edited comment, in which he also asked what definition of first-order rationality I meant.