James_Miller comments on Rationality is Systematized Winning - Less Wrong

Post author: Eliezer_Yudkowsky | 03 April 2009 02:41PM | 48 points


Comment author: James_Miller 03 April 2009 04:29:14PM *  4 points [-]

If humans are imperfect actors, then in situations (such as a game of chicken) in which it is better to (1) be irrational and seen as irrational than it is to (2) be rational and seen as rational, the rational actor will lose.

Of course, holding constant everyone else's beliefs about you, you always gain by being more rational.
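
A minimal sketch of the payoff logic in Python (the payoff numbers here are illustrative assumptions; the thread fixes none of them): a rational best-responder who is certain the other driver will never swerve does best by swerving, and so loses.

    # Chicken with toy payoffs (illustrative assumptions, not from the thread).
    # Each entry is (row player's payoff, column player's payoff).
    PAYOFFS = {
        ("swerve",   "swerve"):   (0, 0),      # both save face
        ("swerve",   "straight"): (-1, 1),     # row chickens out, column wins
        ("straight", "swerve"):   (1, -1),     # row wins, column chickens out
        ("straight", "straight"): (-10, -10),  # crash: worst outcome for both
    }

    def best_response(opponent_action):
        """Row player's best reply, given a firm belief about the opponent."""
        return max(("swerve", "straight"),
                   key=lambda a: PAYOFFS[(a, opponent_action)][0])

    # Facing someone seen as irrational (certain to drive straight),
    # the rational player's best response is to swerve, and thus lose.
    assert best_response("straight") == "swerve"
    assert best_response("swerve") == "straight"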

Comment author: Eliezer_Yudkowsky 03 April 2009 06:00:23PM 3 points [-]

Given that I one-box on Newcomb's Problem and keep my word as Parfit's Hitchhiker, it would seem that the rational course of action is to not steer your car even if it crashes (if for some reason winning that game of chicken is the most important thing in the universe).

Comment author: James_Miller 03 April 2009 07:12:53PM 4 points [-]

You are playing chicken with your irrational twin. Both of you would rather survive than win. Your twin, however, doesn't understand that it's possible to die when playing chicken. In the game, your twin both survives and wins, whereas you survive but lose.

Comment author: Aurini 03 April 2009 08:06:39PM *  1 point [-]

Then you murder the twin prior to the game of chicken, and fake his suicide. Or you intimidate the twin, using your advanced rational skills to determine exactly how best to fill him with fear and doubt.

But before murdering or risking an uncertain intimidation feint, there's another question you need to ask yourself: how certain are you that the twin is irrational? The Cold War was (probably) a perceptual error; neither side realized that they were in a prisoner's dilemma. Both assumed that the other side preferred "unbalanced armament" over "mutual armament" over "mutual disarmament"; in reality, the last two should have been switched.

Worst case scenario? You die playing chicken, because the stakes were worth it. The Rational path isn't always nice.

(There are some ethical premises implicit in this argument, premises which I plan to argue are natural derivatives from Game Theory... but I'm still working on that article.)

Comment author: rwallace 03 April 2009 07:21:48PM 0 points [-]

My answer to that one is that I don't play chicken in the first place unless the stake is something I'm prepared to die for.

Comment author: James_Miller 03 April 2009 07:27:07PM 5 points [-]

There are lots of chicken-like games that don't involve death. For example, your boss wants some task done, and either you or a co-worker can do it. The worst outcome for both you and the co-worker is for the task not to get done. The best is for the other person to do it.

Comment author: rwallace 03 April 2009 07:30:37PM 2 points [-]

My answer still applies - I'm not going to make a song and dance about who does it, unless the other guy has been systematically not pulling his weight and it's got to the point where that matters more to me than this task getting done.

Comment author: Jonathan_Graehl 03 April 2009 09:24:31PM 3 points [-]

For Newcomb's Problem, is it fair to say that if you believe the given information, the crux is whether you believe it's possible (for Omega) to have a 99%+ correct prediction of your decision based on the givens? Refusal to accept that seems to me the only justification for two-boxing. Perhaps that's a sign that I'm less tied to a fixed set of "rationalist" procedures than a perfect rationalist would be, but I would feel like I were pretending to say otherwise.
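
To make that crux quantitative, a quick expected-value sketch, assuming the conventional payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box iff Omega predicted one-boxing) and treating Omega's accuracy p as the probability that the prediction matches your actual choice; none of these figures appear in the comment itself.

    # Newcomb's Problem: expected value of each choice vs. Omega's accuracy p.
    # Conventional payoffs assumed: $1,000 always in box A; $1,000,000 in
    # box B iff Omega predicted you would take only box B.
    A, B = 1_000, 1_000_000

    def ev_one_box(p):
        # With probability p, Omega correctly foresaw one-boxing and filled B.
        return p * B

    def ev_two_box(p):
        # You always get A; B is full only if Omega wrongly predicted one-boxing.
        return A + (1 - p) * B

    for p in (0.5, 0.5005, 0.99):
        print(f"p={p}: one-box {ev_one_box(p):,.0f}, two-box {ev_two_box(p):,.0f}")
    # At p = 0.99, one-boxing expects $990,000 vs. $11,000 for two-boxing;
    # the crossover sits at p = (A + B) / (2 * B) = 0.5005.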

I also wonder if the many public affirmations I've heard of "I would one-box Newcomb's Problem" are attempts at convincing Omega to believe us in the unlikely event of actually encountering the Problem. It does give a similar sort of thrill to "God will rapture me to heaven."

Comment author: rwallace 03 April 2009 06:39:43PM 1 point [-]

+1 for "Rationalists win". What is Parfit's Hitchhiker? I couldn't find an answer on Google.

Comment author: grobstein 03 April 2009 07:05:48PM 3 points [-]

It's a test case for rationality as pure self-interest (really it's like an altruistic version of the game of Chicken).

Suppose I'm purely selfish and stranded on a road at night. A motorist pulls over and offers to take me home for $100, which is a good deal for me. I only have money at home, so I will be able to get home iff I can credibly promise to pay $100 when I get home.

But when I get home, the marginal benefit of paying the $100 is zero (under the assumption of pure selfishness). Therefore, if I behave rationally at the margin when I get home, I cannot keep my promise.

I am better off overall if I can commit in advance to keeping my promise. In other words, I am better off overall if I have a disposition which sometimes causes me to behave irrationally at the margin. Under the self-interest notion of rationality, then, it is rational, at the margin of choosing your disposition, to choose a disposition which is not rational under the self-interest notion of rationality. (This is what Parfit describes as an "indirectly self-defeating" result; note that being indirectly self-defeating is not a knockdown argument against a position.)
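
The same structure in miniature, as a sketch with assumed numbers: getting home is valued at $1,000 (the model only requires that it exceed the $100 fare), and the driver is assumed able to tell whether the promise is sincere.

    # Parfit's Hitchhiker in miniature. The value of getting home is an
    # assumed figure; the model only needs it to exceed the $100 fare.
    HOME_VALUE = 1_000
    FARE = 100

    def payoff(keeps_promises):
        """Hitchhiker's payoff by disposition, assuming the driver gives
        the ride only if the promise to pay is credible."""
        gets_ride = keeps_promises      # the disposition is visible to the driver
        if not gets_ride:
            return 0                    # stranded on the road
        pays = keeps_promises           # once home, you act on your disposition
        return HOME_VALUE - (FARE if pays else 0)

    print(payoff(keeps_promises=True))   # 900: ride home, minus the fare
    print(payoff(keeps_promises=False))  # 0: no credible promise, no ride
    # At the margin, once home, paying looks like a pure $100 loss (900 < 1,000),
    # but only the promise-keeping disposition ever gets you home at all.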

Comment author: rwallace 03 April 2009 07:19:33PM 2 points [-]

Ah, thanks. I'm of the school of thought that says it is rational both to promise to pay the $100, and to have a policy of keeping promises.

Comment author: GuySrinivasan 03 April 2009 08:22:58PM 1 point [-]

I think it is both right and expected-utility-maximizing to promise to pay the $100, right to pay the $100, and not expected-utility-maximizing to pay the $100, under the standard assumption that you'll never see the driver again or whatnot.

Comment author: thomblake 03 April 2009 08:31:48PM 1 point [-]

You're assuming it does no damage to oneself to break one's own promises. Virtue theorists would disagree.

Breaking one's promises damages one's integrity - whether you consider that a trait of character or merely a valuable fact about yourself, you will lose something by breaking your promise even if you never see the fellow again.

Comment author: grobstein 03 April 2009 08:39:51PM 1 point [-]

Your argument is equivalent to, "But what if your utility function rates keeping promises higher than a million orgasms, what then?"

The hypo is meant to be a very simple model, because simple models are useful. It includes two goods: getting home, and having $100. Any other speculative values that a real person might or might not have are distractions.

Comment author: rwallace 03 April 2009 11:44:51PM 2 points [-]

Simple models are fine as long as we don't forget they are only approximations. Rationalists should win in the real world.

Comment author: thomblake 03 April 2009 08:43:00PM 2 points [-]

Except that you mention both persons and promises in the hypothetical example, so both things factor into the correct decision. If you said that it's not a person making the decision, or that there's no promising involved, then you could discount integrity.

Comment author: grobstein 03 April 2009 07:29:55PM 1 point [-]

Yes, this seems unimpeachable. The missing piece is, rational at what margin? Once you are home, it is not rational at the margin to pay the $100 you promised.

Comment author: randallsquared 03 April 2009 08:08:43PM 2 points [-]

This also assumes no one can ever find out you didn't pay. In general, though, it seems better to assume that everything will eventually be found out by everyone; that alone seems like reason enough to keep promises and avoid most lies.
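
One way to make that intuition concrete, with invented numbers (the comment supplies none): breaking the promise saves the fare but risks a reputational loss once discovered, so it only pays when discovery is sufficiently unlikely.

    # A toy reputation model of the point above. Both numbers are invented:
    # d is the chance the broken promise is eventually found out, and
    # REPUTATION_COST is the value of future cooperation lost if it is.
    FARE = 100
    REPUTATION_COST = 5_000

    def ev_break_promise(d):
        """Expected value of pocketing the fare, given discovery chance d."""
        return FARE - d * REPUTATION_COST

    for d in (0.0, 0.02, 1.0):
        print(f"d={d}: EV of breaking the promise = {ev_break_promise(d):,.0f}")
    # Break-even is at d = FARE / REPUTATION_COST = 0.02; under "everything
    # will eventually be found out" (d near 1), promise-keeping clearly wins.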

Comment author: grobstein 03 April 2009 08:09:55PM 1 point [-]

Right. The question of course is, "better" for what purpose? Which model is better depends on what you're trying to figure out.

Comment author: ciphergoth 03 April 2009 08:02:46PM *  1 point [-]

Thank you, I too was curious.

We need names for these positions; I'd use "hyper-rationalist", but I think that's slightly different. Perhaps a consequentialist does whatever has the maximum expected utility at any given moment, while a meta-consequentialist is a machine, built by a consequentialist, that is expected to achieve the maximum overall utility at least in part by being trustworthy enough to keep commitments a pure consequentialist could not keep.

I guess I'm not sure why people are so interested in this class of problems. If you substitute Clippy for my lift, and up the stakes to a billion lives lost later in return for two billion saved now, there you have a problem, but when it's human beings on a human scale there are good ordinary consequentialist reasons to honour such bargains, and those reasons are enough for the driver to trust my commitment. Does anyone really anticipate a version of this situation arising in which only a meta-consequentialist wins, and if so can you describe it?

Comment author: grobstein 03 April 2009 08:05:42PM *  2 points [-]

I very much recommend Reasons and Persons, by the way. A friend stole my copy and I miss it all the time.

Comment author: ciphergoth 04 April 2009 08:38:38AM 3 points [-]

OK, thanks!

Your friend stole a book on moral philosophy? That's pretty special!

Comment author: gjm 03 April 2009 11:35:44PM 1 point [-]

It's still in print and readily available. If you really miss it all the time, why haven't you bought another copy?

Comment author: grobstein 03 April 2009 11:37:27PM 0 points [-]

It's $45 from Amazon. At that price, I'm going to scheme to steal it back first.

OR MAYBE IT'S BECAUSE I'M CRAAAZY AND DON'T ACT FOR REASONS!

Comment author: gjm 04 April 2009 12:35:49AM 2 points [-]

Gosh. It's only £17 in the UK.

(I wasn't meaning to suggest that you're crazy, but I did wonder about ... hmm, not sure whether there's a standard name for it. Being less prepared to spend X to get Y on account of having done so before and then lost Y. A sort of converse to the endowment effect.)

Comment author: Nick_Tarleton 04 April 2009 06:51:48AM 2 points [-]

Mental accounting has that effect in the short run, but seems unlikely to apply here.

Comment author: grobstein 03 April 2009 08:07:48PM 1 point [-]

I do think these problems are mostly useful for purposes of understanding and (more so) defining rationality ("rationality"), which is perhaps a somewhat dubious use. But look how much time we're spending on it.

Comment author: grobstein 03 April 2009 06:14:43PM *  1 point [-]

Why don't you accept his distinction between acting rationally at a given moment and having the disposition which it is rational to have, integrated over all time?

EDIT: er, Parfit's, that is.

Comment author: grobstein 03 April 2009 04:43:11PM 3 points [-]

This is a classic point and clearer than the related argument I'm making above. In addition to being part of the accumulated game theory learning, it's one of the types of arguments that shows up frequently in Derek Parfit's discussion of what-is-rationality, in Ch. 1 of Reasons and Persons.

I feel like there are difficulties here that EY is not attempting to tackle.

Comment author: abigailgem 03 April 2009 05:33:32PM 1 point [-]

James, when you say, "be rational", I think this shows a misunderstanding.

It may be really important to impress people with a certain kind of reckless courage. Then it is Rational to play chicken as bravely as you can. This Wins in the sense of being better than the alternative open to you.

Normally, I do not want to take the risk of being knocked down by a car. In that case it is not rational to play chicken, because not playing achieves what I want.

I do not see why a rationalist should be less courageous, less able to estimate distances and speeds, and so less likely to win at Chicken.

Comment author: grobstein 03 April 2009 06:33:21PM 3 points [-]

No. The point is that you actually want to survive more than you want to win, so if you are rational about Chicken you will sometimes lose (consult your model for details). Given your preferences, there will always be some distance ε before the cliff where it is rational for you to give up.

Therefore, under these assumptions, the strategy "win or die trying" seemingly requires you to be irrational. However, if you can credibly commit to this strategy -- be the kind of person who will win or die trying -- you will beat a rational player every time.

This is a case where it is rational to have an irrational disposition, a disposition other than doing what is rational at every margin.
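
A sketch of that disposition comparison, reusing the illustrative Chicken payoffs from the earlier snippet (the game is symmetric, so one best-response function serves either seat):

    # Disposition choice in Chicken, with the same illustrative payoffs as above.
    PAYOFFS = {
        ("swerve",   "swerve"):   (0, 0),
        ("swerve",   "straight"): (-1, 1),
        ("straight", "swerve"):   (1, -1),
        ("straight", "straight"): (-10, -10),
    }

    def best_response(opponent_action):
        return max(("swerve", "straight"),
                   key=lambda a: PAYOFFS[(a, opponent_action)][0])

    # A credibly committed "win or die trying" player always drives straight;
    # a rational opponent who believes the commitment swerves, so the
    # committed player wins every time:
    print(PAYOFFS[("straight", best_response("straight"))])  # (1, -1)

    # The catch: two such committed players get the crash.
    print(PAYOFFS[("straight", "straight")])                 # (-10, -10)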

Comment author: Annoyance 03 April 2009 06:40:09PM -1 points [-]

But a person who truly cares more about winning than surviving can be utterly rational in choosing that strategy.

Comment author: Technologos 03 April 2009 07:04:16PM 1 point [-]

Agreed. In fact, the classic game-theoretic model of chicken requires that the players vastly prefer losing their pride to losing their lives. If winning/losing > losing/dying, then in a situation with imperfect information, we would assign a positive probability to playing aggressively.

And technically speaking, it is most rational, in the game-theoretic sense, to disable your steering ostentatiously before the other player can do the same. In that case, you've won the game before it begins, and there is no actual risk.
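
For the imperfect-information point in the first paragraph, here is the standard mixed-equilibrium calculation, using the same illustrative payoffs as the sketches above (so the 10% figure is an artifact of those invented numbers):

    # Symmetric mixed equilibrium of Chicken with the toy payoffs used above:
    #   swerve/swerve (0,0), swerve/straight (-1,1),
    #   straight/swerve (1,-1), straight/straight (-10,-10).
    # If the opponent drives straight with probability q, indifference requires
    #   EV(swerve)   = -1*q  + 0*(1-q) = -q
    #   EV(straight) = -10*q + 1*(1-q) = 1 - 11q
    # Setting -q = 1 - 11q gives q = 0.1: each player "plays aggressively"
    # with positive probability, exactly as the comment says.
    q = 1 / 10
    ev_swerve = -1 * q + 0 * (1 - q)
    ev_straight = -10 * q + 1 * (1 - q)
    assert abs(ev_swerve - ev_straight) < 1e-12  # indifference check
    print(q, ev_swerve, ev_straight)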

Comment author: James_Miller 03 April 2009 07:17:07PM 1 point [-]

No: if you are rational, the best action is to convince your opponent that you have disabled your steering when in fact you have not.

Comment author: Technologos 04 April 2009 06:33:30PM 1 point [-]

Either a) your opponent truly does believe that you've disabled your steering, in which case the outcomes are identical and the actions are equally rational, or b) we account for the (small?) chance that your opponent can determine that you actually have not disabled your steering, in which case he ostentatiously disables his and wins. Only by setting up what is in effect a doomsday device can you ensure that he will not be tempted to information-gathering brinksmanship.