Wei_Dai comments on The Preference Utilitarian’s Time Inconsistency Problem - Less Wrong

25 Post author: Wei_Dai 15 January 2010 12:26AM


Comments (104)


Comment author: Wei_Dai 15 January 2010 07:20:24PM 6 points [-]

Robin, I don't understand why you refer to it as "dealism". The word "deal" makes it sound as if your moral philosophy is more about cooperation than altruism, but in that case why would you give any weight to the preferences of animals and people with low IQ (for example), since they have little to offer you in return?

Comment author: RobinHanson 15 January 2010 08:08:20PM 3 points [-]

Deals can be lopsided. If they have little to offer, they may get little in return.

Comment author: mattnewport 15 January 2010 09:24:53PM 3 points [-]

This seems to provide an answer to the question you posed above.

What other principle can you use to draw this line between creatures who count and those who don't?

Chickens have very little to offer me other than their tasty flesh, and essentially no capacity to meaningfully threaten me, which is why I don't take their preferences into account. If you're happy with lopsided deals, then that's how you draw the line.

This seems like a perfectly reasonable position to take but it doesn't sound anything like utilitarianism to me.

Comment author: RobinHanson 15 January 2010 10:32:27PM 0 points [-]

Turns out, the best deals look a lot like maximizing weighted averages of the utilities of affected parties.

Comment author: mattnewport 15 January 2010 10:58:28PM *  6 points [-]

Well, the weighting is really the crux of the issue. If you are proposing that the weighting should reflect both what the affected parties can offer and what they can credibly threaten, then I still don't think this sounds much like utilitarianism as usually defined. It sounds more like realpolitik / might-is-right.

Comment author: Wei_Dai 15 January 2010 11:55:03PM 3 points [-]

Turns out, the best deals look a lot like maximizing weighted averages of the utilities of affected parties.

I disagree. Certainly there are examples where the best deals do not look like maximizing weighted averages of the utilities of affected parties, and I gave one here. Are you aware of some argument that these kinds of situations are not likely in real life?

I also agree with mattnewport's point, BTW.

Comment author: Wei_Dai 15 January 2010 08:54:23PM 1 point [-]

Ok, I didn't realize that you would weigh others' preferences by how much they can offer you. My follow-up question: you seem willing to give weight to other people's preferences unilaterally, without requiring that they do the same for you, which is again more like altruism than cooperation. (For example, you don't want to ignore animals, but they can't really reciprocate your attempt at cooperation.) Is that also a misunderstanding on my part?

Comment author: RobinHanson 15 January 2010 09:21:03PM 1 point [-]

Creatures get weight in a deal both because they have things to offer, and because others who have things to offer care about them.

Comment author: denisbider 25 January 2010 08:40:07PM 0 points [-]

But post-FAI, how does anyone except the FAI have anything to offer? No one has anything to offer, nor anything to threaten with. The FAI decides all, does all, rules all. The question is: how should it rule? Since no creature besides the FAI has anything to offer, weighting drops out of the equation, and every present, past, and potential creature's utilities should count the same.

Comment author: Wei_Dai 25 January 2010 08:56:56PM 1 point [-]

I think an FAI's values would reflect its programmers' values (unless it turns out there is Objective Morality, or something else unexpected). My understanding now is that if Robin were the FAI's programmer, the weights he would give to other people in its utility function would depend on how much they helped him create the FAI (and, for people who didn't help, on how much the helpers care about them).

Comment author: denisbider 25 January 2010 09:04:45PM *  1 point [-]

Sounds plenty selfish to me. Indeed, no different than might-is-right.

Comment author: Wei_Dai 27 January 2010 01:11:46AM 3 points [-]

Sounds plenty selfish to me. Indeed, no different than might-is-right.

Instead of might-is-right, I'd summarize it as "might-and-the-ability-to-provide-services-to-others-in-exchange-for-what-you-want-is-right", and Robin would presumably emphasize the second part of that.

Comment author: Vladimir_Nesov 26 January 2010 10:17:36PM *  3 points [-]

You can care a lot about other people no matter how much they help you, but you should help those who help you even more, for game-theoretic reasons. This doesn't at all imply "selfishness".