AllanCrossman comments on Ethics as a black box function - Less Wrong

11 Post author: Kaj_Sotala 22 September 2009 05:25PM




Comment author: SilasBarta 22 September 2009 06:41:08PM *  2 points [-]

For instance, we might think that maximizing total welfare is always for the best, but then realize that we don't actually want to maximize total welfare if the people we consider our friends would be hurt.

Well, you have to understand what such a decision would actually look like. For a decision to truly maximize total welfare over all people even as it "stabs your friends in the back", the welfare gain would have to be large enough to at least cancel out the degradation it causes in the value of friendship itself.

That is, if I expect my friendship with someone not to mean that they weight me more highly than a random person in their utility function, friendship becomes less valuable, and an entire set of socially beneficial activities enabled by friendship (e.g. lower cost of monitoring for cheating) contracts.

I think your hypothetical here has the same problem as presenting the true Prisoner's Dilemma: in the true PD, it's hard to intuitively imagine a circumstance where the utilities in the payoff matrix already account for my compassion for my accomplice. Just the same, in the tradeoff you presented, it's hard to intuitively understand what kind of social gain could outweigh a general degradation of friendship.

ETA: Okay, it's not that hard, but like with the true PD, such situations are rare: for example, if I were presented with the choice of "My twenty closest friends/loved ones die" vs. "All of humanity except me and my twenty closest die". But even then, if e.g. my friends have children not in the set of 20, it's still not clear that all of the twenty would prefer the second option!
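To make the "true PD" point concrete, here is a minimal sketch with entirely made-up payoff numbers: the stipulation is that each entry already folds my compassion for my accomplice into my own utility, and yet defection still strictly dominates. (Nothing here is from the post; the numbers are illustrative assumptions.)

```python
# Illustrative "true PD" payoffs (hypothetical numbers). Each entry is
# (my utility, accomplice's utility), where my utility is stipulated to
# ALREADY include my compassion for my accomplice.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def my_best_response(their_move):
    # Pick the move maximizing my (already compassion-adjusted) utility.
    return max(["C", "D"], key=lambda m: payoffs[(m, their_move)][0])

# Defection dominates whatever the accomplice does:
assert my_best_response("C") == "D"
assert my_best_response("D") == "D"
```

The hard intuitive part is exactly the stipulation in the comments: imagining that these numbers are the *final* utilities, with no residual compassion left outside the matrix.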

Comment author: AllanCrossman 22 September 2009 06:57:30PM *  1 point [-]

I'm not sure I'm understanding properly. You talk as if my action would drastically affect society's views of friendship. I doubt this is true for any action I could take.

Comment author: SilasBarta 22 September 2009 07:04:58PM *  1 point [-]

Well, all my point really requires is that my action moves society in that direction. The fraction of "total elimination of friendship" that my decision causes must be weighed against the supposed net social gain (other people's gain minus my friends' loss), and it's not at all obvious when one is greater than the other.
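The weighing described above can be sketched as a back-of-envelope calculation. Every number below is a hypothetical assumption, not anything from the discussion; the point is only that a tiny fraction of a very large institutional value can swamp a modest one-off gain.

```python
# Hypothetical back-of-envelope weighing (all figures are assumptions):
# betraying friends yields some net social gain, but also nudges society
# a small fraction toward "friendship is worthless", whose total value
# is large.
net_social_gain = 100.0                         # others' gain minus my friends' loss
value_of_friendship_institution = 1_000_000.0   # total social value of friendship
fraction_degraded = 0.001                       # how far my decision moves society

expected_loss = fraction_degraded * value_of_friendship_institution
decision_is_net_positive = net_social_gain > expected_loss

# With these assumed numbers, the expected loss (1000.0) exceeds the
# gain (100.0), so the betrayal is net negative.
```

Of course, the real disagreement is over the empirical inputs: how large `fraction_degraded` actually is for any single decision.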

Plus, Eliezer_Yudkowsky's Timeless Decision Theory assumes that your decisions do have implications for everyone else's decisions!