PeterisP comments on Giving What We Can, 80,000 Hours, and Meta-Charity - Less Wrong

44 Post author: wdmacaskill 15 November 2012 08:34PM




Comment author: PeterisP 26 November 2012 11:33:35PM 0 points [-]

What would be objective grounds for such a multiplier? Not all suffering is valued equally. Excluding self-suffering (which is subjectively very different) from the discussion, I would value the suffering of my child as more important than the suffering of your child. And vice versa.

So, for any valuation that would make sense to me (so that I would actually use that method to make decisions), there should be some difference between the multipliers for various beings - if the average homo sapiens were evaluated with a coefficient of 1, then some people (like your close relatives or friends) would be >1, and some would be <1. Animals (to me) would clearly be <1, as illustrated by a simple dilemma - if I had to choose between killing a cow to save a random man and killing a random man to save a cow, I'd favor the man in all cases without much hesitation.
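A minimal sketch of the weighting scheme described above, for concreteness. All coefficient values are made-up placeholders (the whole point of the comment is that nobody knows what they should be), and the function name is my own:

```python
# Sketch of the multiplier idea: each being's suffering is scaled by a
# subjective coefficient before being aggregated. Coefficients are
# illustrative assumptions, not claims about correct values.

def weighted_disvalue(sufferings, coefficients):
    """Total subjective disvalue: sum of suffering * per-being coefficient.

    Beings without an explicit coefficient default to the baseline of 1.0
    (the "average homo sapiens" in the comment's framing).
    """
    return sum(s * coefficients.get(being, 1.0)
               for being, s in sufferings.items())

coefficients = {
    "my_child": 100.0,  # >1: close relatives weighted far above baseline
    "stranger": 1.0,    # the baseline coefficient
    "cow": 0.01,        # <1: the open question is what this number should be
}

# Equal raw suffering, very unequal weighted disvalue:
total = weighted_disvalue(
    {"my_child": 1.0, "stranger": 1.0, "cow": 1.0}, coefficients
)
```

The dilemma in the comment then reduces to comparing `weighted_disvalue` across outcomes; the hard part is that nothing in the formalism tells you how to pick the coefficients.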

So an important question is, what would be a reasonable basis for quantitatively comparing a human life versus (as an example) cow lives - one to ten? One to a thousand? One to all the cows in the world? Frankly, I've got no idea. I've given it some thought, but I can't see how to arrive at an order-of-magnitude estimate that would feel reasonable to me.

Comment author: MTGandP 27 November 2012 01:08:47AM 1 point [-]

I wouldn't try to estimate the value of a particular species' suffering by intuition. Intuition is, in a lot of situations, a pretty bad moral compass. Instead, I would start from the simple assumption that if two beings suffer equally, their suffering is equally significant. I don't know how to back up this claim other than this: if two beings experience some unpleasant feeling in exactly the same way, it is unfair to say that one of their experiences carries more moral weight than the other.

Then all we have to do is determine how much different beings suffer. We can't know this for certain until we solve the hard problem of consciousness, but we can make some reasonable assumptions. A lot of people assume that a chicken feels less physical pain than a human because it is stupider. But neurologically speaking, there does not appear to be any reason why intelligence would enhance the capacity to feel pain. Hence, the physical pain that a chicken feels is roughly comparable to the pain that a human feels. It should be possible to use neuroscience to provide a more precise comparison, but I don't know enough about that to say more.

Top animal-welfare charities such as The Humane League probably prevent about 100 days of suffering per dollar. The suffering that animals experience in factory farms is probably far worse (by an order of magnitude or more) than the suffering of any group of humans that is targeted by a charity. If you doubt this claim, watch some footage of what goes on in factory farms.
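The arithmetic implied here is straightforward; a quick sketch, using only the comment's own hedged figures (the 100 days/dollar estimate and the "order of magnitude" severity multiplier are both the commenter's estimates, not established numbers):

```python
# The comment's own cost-effectiveness estimate, spelled out.
# Both inputs are the commenter's hedged guesses, used purely for illustration.

days_prevented_per_dollar = 100  # estimated days of animal suffering per $1
severity_multiplier = 10         # "an order of magnitude or more" worse

# Suffering averted per dollar, expressed in human-charity-equivalent days
# under the assumption that equal suffering counts equally:
human_equivalent_days = days_prevented_per_dollar * severity_multiplier
```

Under those two assumptions, a dollar would avert suffering comparable to roughly 1,000 days of the suffering targeted by human charities, which is the force of the comparison being made.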

As a side note, you mentioned comparing the value of a cow versus a human. I don't think this is a very useful comparison to make. A better comparison is the suffering of a cow versus a human. A life's value depends on how much happiness and suffering it contains.

Comment author: MugaSofer 27 November 2012 01:23:01AM *  1 point [-]

A life's value depends on how much happiness and suffering it contains.

I personally treat lives as valuable in and of themselves. It's why I don't kill sad people; I try to make them happier.

The suffering that animals experience in factory farms is probably far worse (by an order of magnitude or more) than the suffering of any group of humans that is targeted by a charity. If you doubt this claim, watch some footage of what goes on in factory farms.

Most people would argue that animals are less capable of experiencing suffering, and thus the same amount of pain is worth less in an animal than in a human.

EDIT:

Then all we have to do is determine how much different beings suffer. We can't know this for certain until we solve the hard problem of consciousness, but we can make some reasonable assumptions. A lot of people assume that a chicken feels less physical pain than a human because it is stupider. But neurologically speaking, there does not appear to be any reason why intelligence would enhance the capacity to feel pain.

Do you also support tiling the universe with orgasmium? Genuinely curious.

Comment author: MTGandP 27 November 2012 03:16:59AM 1 point [-]

I personally treat lives as valuable in and of themselves.

Why? What sort of life has value? Does the life of a bacterium have inherent value? How about a chicken? Does a life have finite inherent value? How do you compare the inherent value of different lives?

It's why I don't kill sad people, I try to make them happier.

Killing people leaves them with 0 happiness (in practice, it actually reduces the total happiness in the world by quite a bit, because killing someone has a lot of side effects). Making people happy gives them positive happiness. Positive happiness is better than 0 happiness.

Most people would argue that animals are less capable of experiencing suffering, and thus the same amount of pain is worth less in an animal than in a human.

I don't care what most people think. The majority is wrong about a lot of things. I believe that non-human animals [1] experience pain in roughly the same way that humans do because that's where the evidence seems to point. What most people think about it does not come into the equation.

Do you also support tiling the universe with orgasmium?

Probably. I'm reluctant to make a change of that magnitude without considering it really, really carefully, no matter how sure I may be right now that it's a good thing. If I found myself with the capacity to do this, I would probably recruit an army of the world's best thinkers to decide if it's worth doing. But right now I'm inclined to say that it is.

[1] Here I'm talking about animals like pigs and chickens, not animals like sea sponges.

Comment author: MugaSofer 27 November 2012 03:35:25AM *  0 points [-]

I personally treat lives as valuable in and of themselves.

Why? What sort of life has value? Does the life of a bacterium have inherent value? How about a chicken? Does a life have finite inherent value? How do you compare the inherent value of different lives?

I must admit I am a tad confused here, but intelligence or whatever seems a good rule of thumb.

It's why I don't kill sad people, I try to make them happier.

Killing people leaves them with 0 happiness (in practice, it actually reduces the total happiness in the world by quite a bit, because killing someone has a lot of side effects). Making people happy gives them positive happiness. Positive happiness is better than 0 happiness.

Oh, yes. Nevertheless, even if it would increase net happiness, I don't kill people. Not for the sake of happiness alone and all that.

Most people would argue that animals are less capable of experiencing suffering, and thus the same amount of pain is worth less in an animal than in a human.

I don't care what most people think. The majority is wrong about a lot of things. I believe that non-human animals [1] experience pain in roughly the same way that humans do because that's where the evidence seems to point. What most people think about it does not come into the equation.

The same way, sure. But introspection suggests I don't value it as much depending on how conscious they are (probably the same as intelligence.)

Do you also support tiling the universe with orgasmium?

Probably. I'm reluctant to make a change of that magnitude without considering it really, really carefully, no matter how sure I may be right now that it's a good thing. If I found myself with the capacity to do this, I would probably recruit an army of the world's best thinkers to decide if it's worth doing. But right now I'm inclined to say that it is.

Have you read "Not for the Sake of Happiness (Alone)"? Human values are complicated.

Comment author: MTGandP 27 November 2012 04:56:13AM 1 point [-]

I must admit I am a tad confused here, but intelligence or whatever seems a good rule of thumb.

  1. I was asking questions to try to better understand where you're coming from. Do you mean the questions were confusing?

  2. Are you saying that moral worth is directly proportional to intelligence? If so, why do you think this is true?

But introspection suggests I don't value it as much depending on how conscious they are (probably the same as intelligence.)

Why not? Do you have a good reason, or are you just going off of intuition?

Have you read "Not for the Sake of Happiness (Alone)"?

Yes, I've read it. I'm not entirely convinced that all values reduce to happiness, but I've never seen any value that can't be reduced to happiness. That's one of the areas in ethics where I'm the most uncertain. In practice, it doesn't come up much because in almost every situation, happiness and preference satisfaction amount to the same thing.

I'm inclined to believe that not all preferences reduce to happiness, but all CEV preferences do reduce to happiness. As I said before, I'm fairly uncertain about this and I don't have much evidence.

Comment author: nshepperd 27 November 2012 06:22:38AM 3 points [-]

Yes, I've read it. I'm not entirely convinced that all values reduce to happiness, but I've never seen any value that can't be reduced to happiness. That's one of the areas in ethics where I'm the most uncertain. In practice, it doesn't come up much because in almost every situation, happiness and preference satisfaction amount to the same thing.

You can probably think of a happiness-based justification for any value someone throws at you. But that's probably only because you're coming from the privileged position of being a human who already knows those values are good, and hence wants to find a reason happiness justifies them. I suspect an AI designed only to maximise happiness would probably find a different way that would produce more happiness while disregarding almost all values we think we have.

Comment author: MTGandP 28 November 2012 06:40:02AM 1 point [-]

It's difficult for me to say because this sort of introspection is difficult, but I believe that I generally reject values when I find that they don't promote happiness.

You can probably think of a happiness-based justification for any value someone throws at you.

But some justifications are legitimate and some are rationalizations. With the examples of discovery and creativity, I think it's obvious that they increase happiness by a lot. It's not like I came up with some ad hoc justification for why they maybe provide a little bit of happiness. After all, discovery is responsible for almost all of the increases in quality of life that have taken place over the past several thousand years.

I suspect an AI designed only to maximise happiness would probably find a different way that would produce more happiness while disregarding almost all values we think we have.

I think a lot of our values do a very good job of increasing happiness, and I welcome an AI that can point out which values don't.

Comment author: nshepperd 28 November 2012 08:35:34AM *  3 points [-]

With the examples of discovery and creativity, I think it's obvious that they increase happiness by a lot.

The point is that's not sufficient. It's like saying "all good is complexity, because for example a mother's love for her child is really complex". Yes, it's complex compared to some boring things like carving identical chair legs out of wood over and over for eternity, but compared to, say, tiling the universe with the digits of Chaitin's omega or something, it's nothing. And tiling the universe with Chaitin's omega would be a very boring and stupid thing to do.

You need to show that the value in question is the best way of generating happiness. Not just that it results in more than the status quo. It has to generate more happiness than, say, putting everyone on heroin forever. Because otherwise someone who really cared about happiness would just do that.

I think a lot of our values do a very good job of increasing happiness, and I welcome an AI that can point out which values don't.

And the other point is that values aren't supposed to do a job. They're meant to describe what job you would like done! If you care about something that doesn't increase happiness, then self-modifying to lose that so as to make more happiness would be a mistake.

Comment author: MTGandP 29 November 2012 12:05:04AM *  0 points [-]

You need to show that the value in question is the best way of generating happiness.

You're absolutely correct. Discovery may not always be the best way of generating happiness; and if it's not, you should do something else.

And the other point is that values aren't supposed to do a job.

Not all values are terminal values. Some people value coffee because it wakes them up; they don't value coffee in itself. If they discover that coffee in fact doesn't wake them up, they should stop valuing coffee.

With the examples of discovery and creativity, I think it's obvious that they increase happiness by a lot.

The point is that's not sufficient.

What is sufficient is demonstrating that if discovery does not promote happiness then it is not valuable. As I explained in my sorting sand example, discovery that does not in any way promote happiness is not worthwhile.

Comment author: MugaSofer 27 November 2012 06:45:32AM 0 points [-]

Well, orgasmium, for a start.

Comment author: MugaSofer 27 November 2012 05:33:13AM 0 points [-]

I must admit I am a tad confused here, but intelligence or whatever seems a good rule of thumb.

I was asking questions to try to better understand where you're coming from. Do you mean the questions were confusing?

No, I mean I am unsure as to what my CEV would answer.

Are you saying that moral worth is directly proportional to intelligence? If so, why do you think this is true?

Because I'll kill a bug to save a chicken, a chicken to save a cat, a cat to save an ape, and an ape to save a human. The part of me responsible for morality clearly has some sort of criteria for moral worth that seems roughly equivalent to intelligence.

But introspection suggests I don't value it as much depending on how conscious they are (probably the same as intelligence.)

Why not? Do you have a good reason, or are you just going off of intuition?

... both?

Have you read "Not for the Sake of Happiness (Alone)"?

Yes, I've read it. I'm not entirely convinced that all values reduce to happiness, but I've never seen any value that can't be reduced to happiness. That's one of the areas in ethics where I'm the most uncertain. In practice, it doesn't come up much because in almost every situation, happiness and preference satisfaction amount to the same thing.

Fair enough. Unfortunately, the area of ethics where I'm the most uncertain is weighting creatures with different intelligence levels.

Thing like discovery and creativity seem like good examples of preferences that don't reduce to happiness IIRC, although it's been a while since I thought everything reduced to happiness so I don't recall very well.

I'm inclined to believe that not all preferences reduce to happiness, but all CEV preferences do reduce to happiness. As I said before, I'm fairly uncertain about this and I don't have much evidence.

Not sure what this means.

Comment author: MTGandP 27 November 2012 05:59:20AM 1 point [-]

Are you saying that moral worth is directly proportional to intelligence? If so, why do you think this is true?

Because I'll kill a bug to save a chicken, a chicken to save a cat, a cat to save an ape, and an ape to save a human. The part of me responsible for morality clearly has some sort of criteria for moral worth that seems roughly equivalent to intelligence.

But why is intelligence important? I don't see its connection to morality. I know it's commonly believed that intelligence is morally relevant, and my best guess as to why is that it conveniently places humans at the top and thus justifies mistreating non-human animals.

If intelligence is morally significant, then it's not really that bad to torture a mentally handicapped person.

I believe this is false: a mentally handicapped person suffers physical pain to the same extent that I do, so his suffering is just as morally significant. The same reasoning applies to many species of non-human animal. What matters is not intelligence but the capacity to experience happiness and suffering.

... both?

So then what is your good reason that's not directly based on intuition?

Thing like discovery and creativity seem like good examples of preferences that don't reduce to happiness IIRC, although it's been a while since I thought everything reduced to happiness so I don't recall very well.

Discovery leads to the invention of new things. In general, new things lead to increased happiness. It also leads to a better understanding of the universe, which allows us to better increase happiness. If the process of discovery brought no pleasure in itself and also didn't make it easier for us to increase happiness, I think it would be useless. The same reasoning applies to creativity.

Not sure what this means.

You mentioned CEV in your previous comment, so I assume you're familiar with it. I mean that I think if you took people's coherent extrapolated volitions, they would exclusively value happiness.

Comment author: MugaSofer 27 November 2012 06:43:35AM 0 points [-]

I'll kill a bug to save a chicken, a chicken to save a cat, a cat to save an ape, and an ape to save a human. The part of me responsible for morality clearly has some sort of criteria for moral worth that seems roughly equivalent to intelligence.

But why is intelligence important? I don't see its connection to morality. I know it's commonly believed that intelligence is morally relevant, and my best guess as to why is that it conveniently places humans at the top and thus justifies mistreating non-human animals.

Well, why is pain important? I suspect empathy is mixed up here somewhere, but honestly, it doesn't feel like it reduces - bugs just are worth less. Besides, where do you draw the line if you lack a sliding scale - I assume you don't care about rocks, or sponges, or germs.

If intelligence is morally significant, then it's not really that bad to torture a mentally handicapped person.

Well ... not as bad as torturing, say, Bob, the Entirely Average Person, no. But it's risky to distinguish between humans like this because it lets in all sorts of nasty biases, so I try not to except in exceptional cases.

I believe this is false: a mentally handicapped person suffers physical pain to the same extent that I do, so his suffering is just as morally significant. The same reasoning applies to many species of non-human animal. What matters is not intelligence but the capacity to experience happiness and suffering.

I know you do. Of course, unless they're really handicapped, most animals are still much lower; and, of course, there's the worry that the intelligence is there and they just can't express it in everyday life (idiot savants and so on.)

So then what is your good reason that's not directly based on intuition?

Well, it's morality, it does ultimately come down to intuition no matter what. I can come up with all sorts of reasons, but remember that they aren't my true rejection - my true rejection is the mental image of killing a man to save some cockroaches.

Discovery leads to the invention of new things. In general, new things lead to increased happiness. It also leads to a better understanding of the universe, which allows us to better increase happiness. If the process of discovery brought no pleasure in itself and also didn't make it easier for us to increase happiness, I think it would be useless. The same reasoning applies to creativity.

And yet, a world without them sounds bleak and lacking in utility.

You mentioned CEV in your previous comment, so I assume you're familiar with it. I mean that I think if you took people's coherent extrapolated volitions, they would exclusively value happiness

Oh, right.

Ah ... not sure what I can say to convince you if NFTSOH(A) didn't.

Comment author: MTGandP 28 November 2012 06:31:26AM 1 point [-]

Well, why is pain important?

It's really abstract and difficult to explain, so I probably won't do a very good job. Peter Singer explains it pretty well in "All Animals Are Equal." Basically, we should give equal consideration to the interests of all beings. Any being capable of suffering has an interest in avoiding suffering. A more intelligent being does not have a greater interest in avoiding suffering [1]; hence, intelligence is not morally relevant.

Besides, where do you draw the line if you lack a sliding scale - I assume you don't care about rocks, or sponges, or germs.

There is a sliding scale. More capacity to feel happiness and suffering = more moral worth. Rocks, sponges, and germs have no capacity to feel happiness and suffering.

And yet, a world without [discovery] sounds bleak and lacking in utility.

Well yeah. That's because discovery tends to increase happiness. But if it didn't, it would be pointless. For example, suppose you are tasked with sifting through a pile of sand to find which grain is the whitest. When you finish, you will have discovered something new. But the process is really boring and it doesn't benefit anyone, so what's the point? Discovery is only worthwhile if it increases happiness in some way.

I'm not saying that it's impossible to come up with an example of something that's not reducible to happiness, but I don't think discovery is such a thing.

[1] Unless it is capable of greater suffering, but that's not a trait inherent to intelligence. I think it may be true in some respects that more intelligent beings are capable of greater suffering; but what matters is the capacity to suffer, not the intelligence itself.

Comment author: MugaSofer 27 November 2012 12:32:22AM 0 points [-]

I would value the suffering of my child as more important than the suffering of your child. And vice versa.

To be clear, you are arguing that this is a bias to be overcome, yes?

I've given it some thought but I can't imagine a way how to get to an order of magnitude estimate that would feel reasonable to me.

Scope insensitivity?

Comment author: PeterisP 27 November 2012 12:11:05PM *  0 points [-]

No, I'm not arguing that this is a bias to overcome - if I have to choose whether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater.

I'm arguing that this is a strong counterexample to the assumption that all entities may be treated as equals in calculating "value of entity_X's suffering to me". They are clearly not equal, they differ by order(s) of magnitude.

"general value of entity_X's suffering" is a different, not identical measurement - but when making my decisions (such as the original discussion on what charities would be the most rational [for me] to support) I don't want to use the general values, but the values as they apply to me.

Comment author: MugaSofer 27 November 2012 05:25:41PM 0 points [-]

... oh.

That seems ... kind of evil, to be honest.

Comment author: PeterisP 27 November 2012 08:51:10PM 0 points [-]

OK, then I feel confused.

Regarding "if I have to choose whether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater" - I was under the impression that this would be a common trait shared by [nearly] all homo sapiens. Is it not so, and is it generally considered sociopathic/evil?

Comment author: MugaSofer 27 November 2012 09:00:48PM 0 points [-]

Consider: if you attach higher utility to your child's life than mine, then even if my child has a higher chance of survival you will choose your child and leave mine to die.

Comment author: Kawoomba 03 December 2012 07:11:28AM 0 points [-]

if you attach higher utility to your child's life than mine, then even if my child has a higher chance of survival you will choose your child and leave mine to die.

Not true as a general statement, not if you're maximizing your expected utility gain.
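Kawoomba's objection can be made concrete with a two-line expected-utility calculation. The numbers below are illustrative assumptions only:

```python
# Sketch of the expected-utility point: attaching higher utility to your own
# child does NOT automatically mean choosing them - survival probabilities
# matter too. All numbers are made up for illustration.

def expected_utility(utility, p_survival):
    """Expected utility of attempting a rescue with the given success chance."""
    return utility * p_survival

u_mine, u_other = 10.0, 1.0   # I value my child's life 10x more
p_mine, p_other = 0.05, 0.9   # but my child is very unlikely to survive anyway

save_mine = expected_utility(u_mine, p_mine)    # 10.0 * 0.05 = 0.5
save_other = expected_utility(u_other, p_other) # 1.0 * 0.9 = 0.9
# Here an expected-utility maximizer saves the other child despite the
# 10x weighting, because the probability gap outweighs the utility gap.
```

So MugaSofer's "even if my child has a higher chance of survival you will choose your child" only follows when the utility ratio exceeds the inverse probability ratio, not as a general statement.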

Also, "if"? One often attaches utility based on ... attachment. Do you think there's more than, say, 0.01 parents per 100 that would not value their own child over some other child? Are most all parents "evil" in that regard?

Comment author: MugaSofer 04 December 2012 01:23:57PM 0 points [-]

Are most all parents "evil" in that regard?

I believe the technical term is "biased".

Comment author: Kawoomba 04 December 2012 02:22:51PM 0 points [-]

In the same way that I'm "biased" towards yogurt-flavored ice cream. You can call any preference you have a "bias", but since we're here mostly dealing with cognitive biases (a different beast altogether), such overloading of a preference-expression with a negatively connoted failure mode should really be avoided.

What's your basis for objecting against utility functions that are "biased" (you introduced the term "evil") in the sense of favoring your own children over random other children?

Comment author: MugaSofer 04 December 2012 02:34:49PM -2 points [-]

No, I'm claiming that parents don't actually have a special case in their utility function, they're just biased towards their kids. Since parents are known to be biased toward their kids generally, and human morality is generally consistent between individuals, this seems a reasonable hypothesis.

Comment author: PeterisP 27 November 2012 09:31:38PM *  0 points [-]

Another situation that has some parallels and may be relevant to the discussion.

Helping starving kids is Good - that's well understood. However, my upbringing and current gut feeling say that this is not unconditional. In particular, feeding starving kids is Good if you can afford it; but feeding other starving kids if that causes your own kids to starve is not good, and would be considered evil and socially unacceptable. That is, the goodness of resource redistribution should depend on resource scarcity, and hurting your in-group is forbidden even with good intentions.

It may be caused by the fact that I'm partially brought up by people that actually experienced starvation and have had their relatives starve to death (WW2 aftermath and all that), but I'd guess that their opinion is more fact-based than mine and that they definitely had put more thought into it than I have, so until/if I analyze it more, I probably should accept that prior.

Comment author: PeterisP 27 November 2012 09:18:56PM *  0 points [-]

That is so - though it depends on the actual chances; "much higher chance of survival" is different than "higher chance of survival".

But my point is that:

a) I might [currently thinking] rationally desire that all of my in-group would adopt such a belief mode - I would have higher chances of survival if those close to me prefer me to a random stranger. And "belief-sets that we want our neighbors to have" are correlated with what we define as "good".

b) As far as I understand, homo sapiens generally do have such an attitude - see evolutionary psychology research, and actual observations of mothers/caretakers who have had to choose between kids in fires, etc.

c) Duty may be a relevant factor/emotion. Even if the values were perfectly identical (say, the kids involved would be twins of a third party), if one was entrusted to me or I had casually accepted to watch him, I'd be strongly compelled to save that one first, even if the chances of survival would (to an extent) suggest otherwise. And for my own kids, naturally, I have a duty to take care of them unlike 99.999% other kids - even if I wouldn't love them, I'd still have that duty.

Comment author: MugaSofer 29 November 2012 10:18:23PM -1 points [-]

My point is that duty, while worth encouraging throughout society, is screened off by most utilitarian calculations; as such it is a bias if, rationally, the other choice is superior.