roystgnr comments on Another question about utilitarianism and selfishness - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (22)
Right.
Not necessarily right. And fortunately not: "change your utility function" is typically contra-utility for your existing utility function, and it would be hard to convince others to behave morally if your thesis always entailed "you should do things that will make the world worse by your own current preferences".
Utilitarian preferences with aggregated utility functions can result from negotiation, not just from remodeling your brain. Where this model applies, your utility function doesn't change and your partners' utility functions don't change, but you all find that each of those utility functions ends up better satisfied if you all try to optimize some weighted combination of them, because the cost of identifying and punishing defectors is still less than the cost of tolerating them. Presumably you and your partners already assign some terminal value to one another, but the negotiation process doesn't have to increase that terminal value; it only has to add instrumental value.
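A minimal sketch of the point, using hypothetical prisoner's-dilemma-style payoff numbers (the numbers and function names are illustrative, not from the comment): each agent's own utility function stays fixed, but when both agents best-respond to an equal-weight combination of the two utilities, they land in an outcome that each unchanged utility function rates higher than the outcome of purely selfish play.

```python
# Hypothetical payoffs for a one-shot interaction between two agents.
# Each entry maps (my_move, opponent_move) -> (my_utility, their_utility).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
MOVES = ["cooperate", "defect"]

def best_response(opponent_move, objective):
    """Return the move maximizing objective(my_utility, their_utility)."""
    return max(MOVES, key=lambda m: objective(*PAYOFFS[(m, opponent_move)]))

# Selfish objective: each agent optimizes only its own utility.
selfish = lambda mine, theirs: mine
# Negotiated objective: an equal-weight combination of both utilities.
# Note: neither agent's underlying utility function has been rewritten.
negotiated = lambda mine, theirs: 0.5 * mine + 0.5 * theirs

# Under the selfish objective, defection dominates...
print(best_response("cooperate", selfish))   # defect
print(best_response("defect", selfish))      # defect
# ...so selfish play converges on (defect, defect), worth 1 to each.

# Under the negotiated objective, cooperation is the best response...
print(best_response("cooperate", negotiated))  # cooperate
# ...so both cooperate, and each agent's ORIGINAL utility is 3 > 1.
print(PAYOFFS[("cooperate", "cooperate")][0])  # 3
```

The weights here need not be equal; whatever weighting the negotiation settles on, the point is that following the combined objective is instrumentally better for each party's own unchanged preferences than mutual defection.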