cata comments on Stupid Questions Thread - January 2014 - Less Wrong

Post author: RomeoStevens 13 January 2014 02:31AM




Comment author: cata 13 January 2014 04:31:17AM 1 point

Sure, Eve did a good thing.

Comment author: solipsist 13 January 2014 04:26:05PM 2 points

Does that mean we should spend more of our altruistic energies on encouraging happy productive people to have more happy productive children?

Comment author: cata 13 January 2014 08:40:23PM 0 points

Maybe. I think the realistic problem with this strategy is that if you take an existing human and help him in some obvious way, then it's easy to see and measure the good you're doing. It sounds pretty hard to figure out how effectively or reliably you can encourage people to have happy productive children. In your thought experiment, you kill the hermit with 100% certainty, but creating a longer, happier life that didn't detract from others' was a complicated conjunction of things that worked out well.

Comment author: Calvin 13 January 2014 05:01:29AM 1 point

I am going to assume that the opinion of the suffering hermit is irrelevant to this utility calculation.

Comment author: RowanE 13 January 2014 11:13:36AM 0 points

It's specified that he was killed painlessly.

Comment author: Calvin 13 January 2014 11:24:04AM 1 point

True, I wasn't specific enough, but I wanted to emphasize the opinion part; the suffering part was meant to describe his living conditions.

He was, presumably, killed without his consent, which is why the whole affair seems so morally icky from a non-utilitarian perspective.

If your utility function does not penalize doing bad things as long as the net result is positive, you are likely to end up in a world full of utility monsters.

Comment author: Chrysophylax 13 January 2014 03:04:03PM 1 point

We live in a world full of utility monsters. We call them humans.

Comment author: Calvin 13 January 2014 04:25:37PM 3 points

I am assuming that all the old sad hermits are of this world are being systematically chopped for spare parts granted to deserving and happy young people, while good meaning utilitarians hide this sad truth from us, so that I don't become upset about those atrocities that are currently being committed in my name?

We are not even close to being utility monsters, and personally I know very few people whom I would consider actual utilitarians.

Comment author: Chrysophylax 13 January 2014 08:47:48PM 0 points

No, but cows, pigs, hens and so on are being systematically chopped up for the gustatory pleasure of people who could get their protein elsewhere. For free-range, humanely slaughtered livestock you could make an argument that this is a net utility gain for them, since they wouldn't exist otherwise, but the same cannot be said for battery animals.

Comment author: Gunnar_Zarncke 15 January 2014 12:12:15PM 1 point

But driving this reasoning to its logical conclusion yields a lot of strange results.

The premise is that humans differ from animals in that they know they inflict suffering and are thus able to change it - and, according to some ethics, have to.

Actually, this would be a kind of disadvantage of knowledge. There was a game-theoretic post here a while back about situations where, if you know more, you have to choose probabilistically to win on average, whereas those who don't know will always defect and thus reap a higher benefit than you - unless there are too many of them.

So either

  • You need to construct a world without animals, since animals make each other suffer, humans know this, and humans can modify the world to get rid of it.

  • Humans could alter themselves not to know that they inflict harm (or to consider harm unimportant, or to restrict empathy to humans...) and thus avoid the problem.

The key point, I think, is that a concept resting on some aspect of being human is singled out and taken to its 'logical conclusion' out of context, without regard to the fact that this concept is itself an evolved feature.

As there is no intrinsic moral fabric of the universe, we effectively force our evolved values on our environment and make it conform to them.

In this sense, excessive empathy (an aggregate driver behind ethics) is not much different from excessive greed, which also affects our environment - only we have already learned that the latter might be a bad idea.

The conclusion is that you also have to balance extreme empathy with reality.

ADDED: Just found this relevant link: http://lesswrong.com/lw/69w/utility_maximization_and_complex_values/

Comment author: Chrysophylax 15 January 2014 05:02:10PM -1 points

Robert Nozick:

Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose . . . the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility.

My point is that humans mostly act as though they are utility monsters with respect to non-humans (and possibly humans they don't identify with); they act as though the utility of a non-sapient animal is vastly smaller than the utility of a human, and so making the humans happy is always the best option. Some people put a much higher value on animal welfare than others, but there are few environmentalists willing to say that there is some number of hamsters (or whatever you assign minimal moral value to) worth killing a child to protect.

Comment author: Gunnar_Zarncke 15 January 2014 05:15:43PM 0 points

That is how it looks. And this is probably part of being human.

I'd like to rephrase your answer as follows, to drive home that ethics is mostly driven by empathy:

Humans mostly act as though they are utility monsters with respect to entities they have empathy with; they act as though the utility of entities they have no empathy toward is vastly smaller than the utility of those they relate to, and so caring for the latter is always the best option.

Comment author: Calvin 13 January 2014 09:22:46PM 0 points

In this case, I concur that your argument may be true if you include animals in your utility calculations.

While I do have reservations against causing suffering in humans, I don't explicitly include animals in my utility calculations. And while I don't support causing suffering for the sake of suffering, I have no ethical qualms about products made with animal fur, animal testing, or factory farming - so with regard to pigs, cows and chickens, I am a utility monster.

Comment author: solipsist 13 January 2014 04:32:21PM -1 points

I didn't mean for the hermit to be sad, just less happy than the child.

Comment author: Calvin 13 January 2014 04:40:00PM 0 points

Ah, I must have misread your description - English is not my first language, so sorry about that.

I guess if I were a particularly well-organized, ruthlessly effective utilitarian, as some people here are, I could now note down in my notebook that he is happier than I previously thought, and that it is moral to kill him if, and only if, the couple gives birth to three, not two, happy children.