
XiXiDu comments on [SEQ RERUN] New Improved Lottery - Less Wrong Discussion

3 Post author: badger 31 May 2011 01:09PM




Comment author: XiXiDu 31 May 2011 05:21:34PM 3 points [-]

I'm just not getting it. Do you really think people are never just stupid?

No, people very often are just stupid. But I don't approve of the way the posts on lotteries have been written. Playing the lottery is not intrinsically irrational; it is irrational only if one is confused about what it means to play the lottery. What Eliezer is basically saying is "you ought not to play lotteries because it is stupid." That's a pretty weak argument.
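The usual case against lotteries is an expected-value one; a minimal sketch with made-up illustrative numbers (the ticket price, jackpot, and odds below are assumptions, not any real lottery):

```python
# Illustrative expected monetary value of one lottery ticket.
# All figures are hypothetical, chosen only to show the shape of the argument.
ticket_price = 2.0           # assumed cost of a ticket
jackpot = 10_000_000.0       # assumed prize
win_probability = 1 / 50_000_000  # assumed odds of winning

expected_value = win_probability * jackpot - ticket_price
print(expected_value)  # negative: on average, money is lost per ticket
```

A negative expected monetary value is the "stupid" part of the argument; whether buying the ticket is irrational for a given person still depends on what that person actually values.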

..."change the weight of 'saving beings' to 'super duper important'". (Either that or you're a serious jerk.)

I haven't read the metaethics sequence, so maybe he has figured out some objective right that makes caring about beings more important than caring about oneself. I doubt it though, and I don't think that it has been proven that humans are not selfish.

For me winning means to do what I want, the way I want it, when I want it. And I never regret anything, because at one time it was exactly what I wanted.

Eliezer's point is that if you think you should play the lottery then you are wrong about your own values...

I don't buy the general point of being wrong about one's own values. I am not the same person as one who was smarter, knew more, and had unlimited resources to think about decisions.

If I adopt game- and decision-theoretic models, I discard my current values and replace them with some sort of equilibrium between me and other agents who have adopted the same strategies. But I don't want to play that game; I don't care. I care about my current values, not about what I would do if I were able to run models of all other agents and extrapolate their values and strategies.

If you asked a Cro-Magnon man about its goals and desires it would likely mention mating and hunting. Sure, if the Cro-Magnon was smarter and knew more, it would maybe care how to turn a sphere inside out. And if it knew even more? Where does this line of reasoning lead us?

Rationality is instrumental, not a goal in and of itself. Rationality can't tell me what to value or how to allocate utility to my goals. If I want to cooperate when faced with the Prisoner's dilemma, then that is what I want.
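The Prisoner's Dilemma point can be made concrete with the standard payoff structure; a minimal sketch with the textbook illustrative numbers (the payoffs themselves are assumptions for the example):

```python
# Standard Prisoner's Dilemma payoffs (illustrative numbers):
# payoffs[(my_move, their_move)] = my payoff; "C" = cooperate, "D" = defect.
payoffs = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

# Defection strictly dominates in these payoff numbers...
assert payoffs[("D", "C")] > payoffs[("C", "C")]
assert payoffs[("D", "D")] > payoffs[("C", "D")]
# ...but the matrix only ranks outcomes by the numbers it was given;
# it cannot say those numbers are what an agent must value.
```

This is the sense in which game theory is instrumental: given the payoffs, it recommends a move, but it is silent on whether the payoffs capture what you care about.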

Comment author: [deleted] 13 May 2012 12:48:15PM 0 points [-]

If you asked a Cro-Magnon man about its goals and desires it would likely mention mating and hunting. Sure, if the Cro-Magnon was smarter and knew more, it would maybe care how to turn a sphere inside out. And if it knew even more? Where does this line of reasoning lead us?

Why “it” rather than “he”? That's confusing, for me at least.

Comment author: MixedNuts 31 May 2011 08:14:15PM 0 points [-]

What. This goes completely against what I thought was a common human experience of following the stern moral obligation of kicking out your bisexual son even though it tears your heart out, discovering to your own surprise and horror that Leviticus shouldn't dictate your values, and following the stern moral obligation of helping your son's boyfriend when he's left in the lurch even though it disgusts you and costs you your church's love. (And, yes, keeping this up even if you never derive the slightest satisfaction, if it kills you, and if you forget all about it the instant you've chosen.) Either you're mistaken, or you're very weird, or I and some disproportionately famous people are weird.