SaidAchmiz comments on Why Eat Less Meat? - LessWrong

Post author: peter_hurford 23 July 2013 09:30PM




Comment author: Lukas_Gloor 24 July 2013 03:31:08PM

The morality you suggest is what Derek Parfit calls collectively self-defeating: if everyone were to follow it perfectly, there could be empirical situations in which your actual goals, namely the well-being of those closest to you, are achieved less well than they would be if everyone followed a different moral view. For instance, there could be situations in which each person has more influence over the well-being of strangers' families than over their own; if everyone nonetheless favored their own relatives, everyone would end up worse off, despite everyone acting perfectly morally. Personally, I want a world in which everyone acts perfectly morally to be as close to paradise as is empirically possible, but whether this is something you are concerned about is a different question (it depends on what question you're seeking to answer by coming up with a coherent moral view).
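
To see the structure concretely, here is a toy model (the two-agent setup and the payoff numbers are illustrative assumptions, not anything specified in the discussion): each agent can help either their own family or the other agent's family, and each happens to have more influence over the other's family.

```python
# A minimal sketch of a "collectively self-defeating" morality.
# Two agents; each can help their OWN family (small effect) or the
# OTHER agent's family (large effect, because each agent happens to
# have more influence over the stranger's family). The payoff
# numbers are illustrative assumptions.

OWN_EFFECT = 1    # benefit an agent can confer on their own family
CROSS_EFFECT = 3  # benefit an agent can confer on the other family

def family_outcomes(a_helps_own: bool, b_helps_own: bool) -> tuple[int, int]:
    """Return (well-being of A's family, well-being of B's family)."""
    a_family = (OWN_EFFECT if a_helps_own else 0) + (0 if b_helps_own else CROSS_EFFECT)
    b_family = (OWN_EFFECT if b_helps_own else 0) + (0 if a_helps_own else CROSS_EFFECT)
    return a_family, b_family

# Everyone follows the agent-relative morality: favor your own family.
print(family_outcomes(True, True))    # (1, 1)

# Everyone follows an impartial, cooperative morality instead.
print(family_outcomes(False, False))  # (3, 3): both families better off
```

When both agents act on the agent-relative morality, each family ends up with 1 instead of the 3 it would get under the other view, which is exactly the sense in which the morality is collectively self-defeating.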

Comment author: SaidAchmiz 25 July 2013 02:30:41AM

This seems nonsensical; a utility function does not prescribe actions. If I care about my family most, but acting in a certain way will cause them to be worse off, then I won't act that way. In other words, if everyone acting perfectly morally causes everyone to end up worse off, then by definition at least some people were not acting perfectly morally.
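
The distinction can be illustrated with a toy example (the names, weights, and outcome numbers below are hypothetical): a utility function only scores outcomes, so which action it recommends depends entirely on what the available actions would actually cause.

```python
# Sketch of the utility-function/action distinction. The function
# scores outcomes; the chosen action falls out of the causal facts.
# Names and numbers are illustrative assumptions.

def utility(family_wellbeing: float, stranger_wellbeing: float) -> float:
    # Cares about family most, but strangers still count a little.
    return 2.0 * family_wellbeing + 0.5 * stranger_wellbeing

# Causal facts: what each available action would produce,
# as (family well-being, stranger well-being).
outcomes = {
    "favor own family directly": (1.0, 0.0),
    "act impartially":           (3.0, 4.0),  # happens to help family MORE
}

best_action = max(outcomes, key=lambda a: utility(*outcomes[a]))
print(best_action)  # 'act impartially'
```

Caring most about one's family fixes the utility function, not the action; if the causal facts make the impartial action better for the family, that is the action the family-centric agent takes.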

Comment author: Lukas_Gloor 25 July 2013 02:53:30AM

The problem is not with your actions, but with the actions of all the others (who follow the same general kind of utility function, but, because the utility function is agent-relative, with different variables plugged in: they care primarily about their own family and friends as opposed to yours). However, I was in fact wondering whether this problem disappears if we make the agents timeless (or whatever does the job), so that they would cooperate with each other to avoid the suboptimal outcome. This seems fair enough, since acting "perfectly morally" seems to imply using the best decision theory.

Does this solve the problem? I think not; we can tweak the thought experiment further to account for it: imagine that, due to empirical circumstances, such cooperation is blocked. Let's assume the agents lack the knowledge that the other agents are timeless. Is this an unfair addendum to the scenario? I don't see why: given the empirical situation the agents find themselves in (which seems perfectly logically possible), the moral algorithm they collectively follow may still lead to results that are suboptimal for everyone concerned.
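
The tweak can be sketched as follows (a toy model; the payoffs and predicate names are illustrative assumptions): both agents run a decision procedure that would cooperate under common knowledge that the other is timeless, i.e. that their choices are correlated, but the scenario denies them that knowledge.

```python
# Sketch of the tweaked scenario. Payoffs are illustrative
# assumptions, matching the toy model above.

OWN_EFFECT = 1    # helping your own family
CROSS_EFFECT = 3  # helping the other agent's family

def choose(knows_other_is_timeless: bool) -> str:
    if knows_other_is_timeless:
        # My choice and the other agent's choice match, so compare
        # mutual cooperation (3 each) with mutual defection (1 each).
        return "cooperate" if CROSS_EFFECT > OWN_EFFECT else "defect"
    # Without that knowledge, the other's action looks independent
    # of mine, and helping my own family strictly dominates:
    # 1 + (other's contribution) beats 0 + (other's contribution).
    return "defect"

print(choose(True))   # 'cooperate'
print(choose(False))  # 'defect': both agents reason this way, so both
                      # families end up with 1 instead of 3.
```

So even if every agent runs a cooperation-capable algorithm, removing common knowledge of that fact is enough to push them all to the outcome that is worse for everyone.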

Comment author: SaidAchmiz 25 July 2013 02:58:51AM

You don't follow a utility function. Utility functions don't prescribe actions.

... are you suggesting that we solve prisoner's dilemmas and similar problems by modifying our utility function?

Comment author: Lukas_Gloor 25 July 2013 03:08:28AM

OK, bad choice of words.

No, but you need some decision theory to go with your utility function, and I was considering the possibility that Parfit merely pointed out a flaw of CDT and not a flaw of common-sense morality. However, given that we can still think of situations where common-sense morality, executed by everyone and no matter the decision theory, does predictably worse for everyone concerned than some other theory would, Parfit's objection still stands.

(Incidentally, I suspect that there could be situations where modifying your utility function is a way to solve a prisoner's dilemma, but that wasn't what I meant here.)