rationalnoodles comments on Open Thread for February 11 - 17 - Less Wrong

Post author: Coscott 11 February 2014 06:08PM

You are viewing a single comment's thread.

Comment author: [deleted] 11 February 2014 07:21:48PM 4 points [-]

Are there any reasons for becoming utilitarian, other than to satisfy one's empathy?

Comment author: Squark 16 February 2014 08:26:51PM 0 points [-]

By utilitarian you mean:

  1. Caring about all people equally

  2. Hedonism, i.e. caring about pleasure/pain

  3. Both of the above (=Bentham's classical utilitarianism)?

In any case, what answer do you expect? What would constitute a valid reason? What are the assumptions from which you want to derive this?

Comment author: [deleted] 17 February 2014 05:19:59PM *  0 points [-]

Both of the above (=Bentham's classical utilitarianism)

I mean this.

In any case, what answer do you expect?

I do not expect any specific answer.

What would constitute a valid reason?

For me personally, probably nothing, since, apparently, I neither really care about people (I guess I overintellectualized my empathy) nor about pleasure and suffering. The question, however, was asked mostly to better understand other people.

What are the assumptions from which you want to derive this?

I don't know any.

Comment author: Coscott 11 February 2014 07:29:02PM 0 points [-]

You can band together lots of people to work together towards the same utilitarianism.

Comment author: [deleted] 11 February 2014 07:41:11PM 0 points [-]

i.e. change happiness-suffering to something else?

Comment author: Coscott 11 February 2014 07:47:06PM 0 points [-]

I don't know how to parse that question.

I am claiming that people with no empathy at all can agree to work towards utilitarianism, for the same reason they can agree to cooperate in the repeated prisoner's dilemma.
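As an illustrative sketch of that claim (the strategy names and payoff values below are standard textbook assumptions, not anything specified in this thread), two purely self-interested agents in an iterated prisoner's dilemma both score higher under sustained cooperation than under mutual defection:

```python
# Iterated prisoner's dilemma with the standard payoffs
# (temptation=5, reward=3, punishment=1, sucker=0).
# Payoffs are (row player, column player) for moves (mine, theirs).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Play `rounds` rounds and return the two total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Two self-interested reciprocators end up cooperating every round
# and each outscore a pair of unconditional defectors.
print(play(tit_for_tat, tit_for_tat))        # (300, 300)
print(play(always_defect, always_defect))    # (100, 100)
```

Neither agent needs to care about the other's payoff; reciprocity alone makes the cooperative outcome the self-interested one over repeated play.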

Comment author: Lumifer 11 February 2014 08:04:53PM *  2 points [-]

I am claiming that people with no empathy at all can agree to work towards utilitarianism, for the same reason they can agree to cooperate in the repeated prisoner's dilemma.

I don't understand why this is an argument in favor of utilitarianism.

A bunch of people can agree to work towards pretty much anything, for example getting rid of the unclean/heretics/untermenschen/etc.

Comment author: Coscott 11 February 2014 08:09:49PM 0 points [-]

I think you are taking this sentence out of context. I am not trying to present an argument in favor of utilitarianism. I was trying to explain why empathy is not necessary for utilitarianism.

I interpreted the question as "Why (other than my empathy) should I try to maximize other people's utility?"

Comment author: Lumifer 11 February 2014 08:24:46PM 2 points [-]

I interpreted the question as "Why (other than my empathy) should I try to maximize other people's utility?"

Right, and here is your answer:

You can band together lots of people to work together towards the same utilitarianism.

I don't understand why this is a reason "to maximize other people's utility".

Comment author: Coscott 11 February 2014 08:28:34PM 0 points [-]

You can entangle your own utility with others' utility, so that what maximizes your utility also maximizes theirs and vice versa. Your terminal value does not change to maximizing other people's utility, but maximizing their utility becomes a side effect.

Comment author: Lumifer 11 February 2014 08:31:36PM 2 points [-]

So you are basically saying that sometimes it is in your own self-interest ("own utility") to cooperate with other people. Sure, that's a pretty obvious observation. I still don't see how it leads to utilitarianism.

If your terminal value is still self-interest but it so happens that there is a side effect of increasing other people's utility -- that doesn't look like utilitarianism to me.

Comment author: Coscott 11 February 2014 08:54:12PM 0 points [-]

I was only trying to make the obvious observation.

Just trying to satisfy your empathy does not really look like pure utilitarianism either.

Comment author: [deleted] 11 February 2014 08:16:02PM 0 points [-]

There's no need to parse it anymore, I didn't get your comment initially.

for the same reason they can agree to cooperate in the repeated prisoner's dilemma.

I agree theoretically, but I doubt that utilitarianism can bring more value to an egoistic agent than being egoistic without regard for other humans' happiness.

Comment author: Coscott 11 February 2014 08:22:16PM 2 points [-]

I agree in the short term, but many of my long term goals (e.g. not dying) require lots of cooperation.

Comment author: Viliam_Bur 11 February 2014 08:16:18PM *  -2 points [-]

I guess the reason is maximizing one's utility function, in general. Empathy is just one component of the utility function (for those agents who feel it).

If multiple agents share the same utility function, and they know it, it should make their cooperation easier, because they only have to agree on facts and models of the world; they don't have to "fight" against each other.

Comment author: [deleted] 12 February 2014 09:17:01PM 1 point [-]

Apparently, we mean different things by "utilitarianism". I meant a moral system whose terminal goal is to maximize pleasure and minimize suffering in the whole world, while you're talking about an agent's utility function, which may have no regard for pleasure and suffering.

I agree, though, that it makes sense to try to maximize one's utility function, but to me that's just egoism.

Comment author: VAuroch 11 February 2014 10:53:48PM *  0 points [-]

I am interested in this, or possibly a different closely-related thing.

I accept the logical arguments underlying utilitarianism ("This is the morally right thing to do.") but not the actionable consequences. ("Therefore, I should do this thing.") I 'protect' only my social circle, and have never seen any reason why I should extend that.

Comment author: blacktrance 11 February 2014 10:58:33PM 3 points [-]

What does "the morally right thing to do" mean if not "the thing you should do"?

Comment author: VAuroch 11 February 2014 11:02:58PM *  1 point [-]

To rephrase: I accept that utilitarianism is the correct way to extrapolate our moral intuitions into a coherent generalizable framework. I feel no 'should' about it -- no need to apply that framework to myself -- and feel no cognitive dissonance when I recognize that an action I wish to perform is immoral, if it hurts only people I don't care about.

Comment author: mwengler 12 February 2014 12:59:54AM 0 points [-]

Ultimately I think that is the way all utilitarianism works. You define an in-group of people who are important: effectively equally important to each other, and possibly equally important to yourself.

For most modern utilitarians, the in-group is all humans. Some modern utilitarians put mammals with relatively complex nervous systems in the group, and for the most part become vegetarians. Others put everything with a nervous system in there, and for the most part become vegans. Very darn few put all life forms in there, as they would starve. Implicit in this is that all life forms would place negative utility on being killed and eaten, which may be reasonable or may be a projection of human values onto non-human entities.

But logically it makes as much sense to shrink the group you are utilitarian about as to expand it. "Only Americans" seems like a popular one in the US when discussing immigration policy. "Only my friends and family" has a following. "Only LA Raiders fans" or "only Manchester United fans" also seems to gather proponents.

Around here, I think you find people trying to put all thinking things, even mechanical, in the in-group, perhaps only all conscious thinking things. Maybe the way to create a friendly AI would be to make sure the AI never values its own life more than it values its own death, then we would always be able to turn it off without it fighting back.

Also, I suspect in reality you have a sliding scale of acceptance, that you would not be morally neutral about killing a stranger on the road and taking their money if you thought you could get away with it. But you certainly won't accord the stranger the full benefit of your concern, just a partial benefit.

Comment author: VAuroch 12 February 2014 01:30:00AM 0 points [-]

Also, I suspect in reality you have a sliding scale of acceptance, that you would not be morally neutral about killing a stranger on the road and taking their money if you thought you could get away with it. But you certainly won't accord the stranger the full benefit of your concern, just a partial benefit.

Oh, there are definitely gradations. I probably wouldn't do this, even if I could get away with it. I don't care enough about strangers to go out of my way to save them, but neither do I want to kill them. On the other hand, if it was a person I had an active dislike for, I probably would. All of which is basically irrelevant, since it presupposes the incredibly unlikely "if I thought I could get away with it".

Comment author: deskglass 12 February 2014 07:18:51PM 1 point [-]

I used to think I thought that way, but then I had some opportunities to casually steal from people I didn't know (and easily get away with it), but I didn't. With that said, I pirate things all the time despite believing that doing so frequently harms the content owners a little.

Comment author: VAuroch 12 February 2014 10:16:14PM -1 points [-]

I have taken that precise action against someone who mildly annoyed me. I remember it (and the perceived slight that motivated it), but feel no guilt over it.