William comments on Rationality is Systematized Winning - Less Wrong

48 Post author: Eliezer_Yudkowsky 03 April 2009 02:41PM




Comment author: grobstein 03 April 2009 11:26:07PM -1 points [-]

Eliezer's argument, if I understand it, is that any decision-making algorithm that results in two-boxing is by definition irrational due to giving a predictably bad outcome.

So he's assuming the conclusion that you get a bad outcome? Golly.

Comment author: William 03 April 2009 11:31:02PM 1 point [-]

The result of two-boxing is a thousand dollars. The result of one-boxing is a million dollars. By definition, a mind that always one-boxes receives a better payout than one that always two-boxes, and therefore one-boxing is more rational, by definition.

Comment author: Furcas 03 April 2009 11:41:32PM *  1 point [-]

The result of two-boxing is a thousand dollars more than you would have gotten otherwise. The result of one-boxing is a thousand dollars less than you would have gotten otherwise. Therefore two-boxing is more rational, by definition.

What determines whether you'll be in a 1M/1M+1K situation or in a 0/1K situation is the kind of mind you have, but in Newcomb's problem you're not given the opportunity to affect what kind of mind you have (by pre-committing to one-boxing, for example); you can only decide whether to get X or X+1K, regardless of X's value.

Comment author: GuySrinivasan 04 April 2009 12:48:42AM 2 points [-]

Suppose for a moment that one-boxing is the Foo thing to do. Two-boxing is the expected-utility-maximizing thing to do. Omega decided to try to reward those minds which it predicts will choose to do the Foo thing with a decision between doing the Foo thing and gaining $1000000, and doing the unFoo thing and gaining $1001000, while giving those minds which will choose to do the unFoo thing a decision between doing the Foo thing and gaining $0 and doing the unFoo thing and gaining $1000.
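The payoff structure described here can be written out explicitly. A minimal sketch (the function name is mine; the dollar values are the ones given in the comment, and Omega is assumed to be a perfect predictor):

```python
def newcomb_payoff(predicted_one_box: bool, chooses_one_box: bool) -> int:
    """Payoff in dollars, given Omega's prediction and the agent's choice."""
    box_a = 1_000_000 if predicted_one_box else 0  # opaque box, filled per prediction
    box_b = 1_000                                  # transparent box, always present
    return box_a if chooses_one_box else box_a + box_b

# With a perfect predictor, prediction matches choice:
assert newcomb_payoff(True, True) == 1_000_000     # one-boxer's payoff
assert newcomb_payoff(False, False) == 1_000       # two-boxer's payoff

# The dominance argument instead holds the prediction fixed:
assert newcomb_payoff(True, False) == 1_001_000    # two-boxing gains $1000...
assert newcomb_payoff(False, True) == 0            # ...whatever Omega predicted
```

The four assertions locate the disagreement: comparing along the diagonal (prediction matches choice) favors one-boxing; comparing within a fixed prediction favors two-boxing by $1000.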

The relevant question is whether there is a generalization of the computation Foo which we can implement that doesn't screw us over on all sorts of non-Newcomb problems. Drescher, for instance, claims that acting ethically implies, among other things, doing the Foo thing, even when it is obviously not the expected-utility-maximizing thing.

Comment author: orthonormal 04 April 2009 01:36:58AM 1 point [-]

See Arguing "By Definition". It's particularly problematic when the definition of "rational" is precisely what's in dispute.