Nominull comments on So you say you're an altruist... - Less Wrong

11 Post author: John_Maxwell_IV 12 March 2009 10:15PM




Comment author: Nominull 13 March 2009 01:48:26AM 10 points

The thing is, I could just as easily be one of the ten as the eleventh (actually, ten times as easily), so it's in my interests to support a norm where the eleventh sacrifices for the good of the ten. I am in very little danger of starving to death in Africa.

It's not pleasant, but it is true.

Comment author: Lawliet 13 March 2009 03:03:52AM 13 points

Teach everyone else to cooperate, then defect.

Comment author: Eliezer_Yudkowsky 13 March 2009 05:23:00AM 11 points

Congratulations, you've written the most horrifying sentence I've read all day.

Comment author: Lawliet 13 March 2009 05:55:23AM 5 points

> Congratulations, you've written the most horrifying sentence I've read all day.

Tricking the other player is never justified? Did I miss something?

Comment author: [deleted] 13 March 2009 01:37:53PM 9 points

This site is supposed to be about rationality, but it's covertly about altruism.

Comment author: Eliezer_Yudkowsky 13 March 2009 05:31:30PM 11 points

Comment author: John_Maxwell_IV 13 March 2009 05:22:18PM 6 points

Which is just the opposite of what you'd expect: if I recall correctly, students who took game-theory-oriented economics classes became less altruistic, not more.

Comment author: Larks 09 January 2012 09:58:23AM 0 points

Possibly not the case: the studies you're probably thinking of used charities that did things like lobbying for lower tuition, exactly the sort of thing you'd expect altruistic economists to oppose.

See, for example, Steven Landsburg on the subject.

Comment author: Vladimir_Nesov 14 March 2009 11:23:02PM 2 points

You also have to deceive them into believing that you, personally, won't defect. Since humans almost never face truly one-off decision problems, your strategy shouldn't work, for two reasons: other people shouldn't cooperate for high stakes without strong knowledge that their opponent will cooperate given that they cooperate (some kind of publicly announced, externally enforced commitment), and you get too few shots at defecting before you acquire a bad reputation.
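The reputation point can be made concrete with a toy simulation (a minimal sketch, not from the thread; the strategy names `sneaky`, `grim`, and `cooperator` and the standard payoff values are my illustrative assumptions). Against an opponent who remembers defections, "teach cooperation, then defect" earns its one-time temptation payoff and then forfeits the cooperation surplus for every remaining round:

```python
# Minimal iterated Prisoner's Dilemma sketch. Payoffs follow the
# standard ordering T(5) > R(3) > P(1) > S(0).
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds):
    """Run an iterated game; each strategy sees the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def grim(opponent_history):
    """Reputation tracker: cooperate until the opponent defects once."""
    return "D" if "D" in opponent_history else "C"

def sneaky(opponent_history):
    """'Teach' cooperation for 5 rounds, then defect forever."""
    return "C" if len(opponent_history) < 5 else "D"

def cooperator(opponent_history):
    """Always cooperate."""
    return "C"

sneaky_score, _ = play(sneaky, grim, 20)       # 15 + 5 + 14*1 = 34
honest_score, _ = play(cooperator, grim, 20)   # 20*3 = 60
```

Over 20 rounds the sneaky strategy nets 34 to the honest cooperator's 60: one temptation payoff, then mutual punishment for the rest of the game, which is the "too few shots at defecting" point in miniature.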