MugaSofer comments on Giving What We Can, 80,000 Hours, and Meta-Charity - Less Wrong
No, I mean I am unsure as to what my CEV would answer.
Because I'll kill a bug to save a chicken, a chicken to save a cat, a cat to save an ape, and an ape to save a human. The part of me responsible for morality clearly has some sort of criterion for moral worth that seems roughly equivalent to intelligence.
... both?
Fair enough. Unfortunately, the area of ethics where I'm the most uncertain is weighting creatures with different intelligence levels.
Things like discovery and creativity seem like good examples of preferences that don't reduce to happiness, IIRC, although it's been a while since I thought everything reduced to happiness, so I don't recall very well.
Not sure what this means.
But why is intelligence important? I don't see its connection to morality. I know it's commonly believed that intelligence is morally relevant, and my best guess as to why is that it conveniently places humans at the top and thus justifies mistreating non-human animals.
If intelligence is morally significant, then it's not really that bad to torture a mentally handicapped person.
I believe this is false: a mentally handicapped person suffers physical pain to the same extent that I do, so his suffering is just as morally significant. The same reasoning applies to many species of non-human animal. What matters is not intelligence but the capacity to experience happiness and suffering.
So then what is your good reason that's not directly based on intuition?
Discovery leads to the invention of new things. In general, new things lead to increased happiness. It also leads to a better understanding of the universe, which allows us to better increase happiness. If the process of discovery brought no pleasure in itself and also didn't make it easier for us to increase happiness, I think it would be useless. The same reasoning applies to creativity.
You mentioned CEV in your previous comment, so I assume you're familiar with it. I mean that I think if you took people's coherent extrapolated volitions, they would exclusively value happiness.
Well, why is pain important? I suspect empathy is mixed up here somewhere, but honestly, it doesn't feel like it reduces - bugs just are worth less. Besides, where do you draw the line if you lack a sliding scale? I assume you don't care about rocks, or sponges, or germs.
Well ... not as bad as torturing, say, Bob, the Entirely Average Person, no. But it's risky to distinguish between humans like this because it lets in all sorts of nasty biases, so I try not to, except in exceptional cases.
I know you do. Of course, unless they're really handicapped, most animals are still much lower; and, of course, there's the worry that the intelligence is there and they just can't express it in everyday life (idiot savants and so on).
Well, it's morality, it does ultimately come down to intuition no matter what. I can come up with all sorts of reasons, but remember that they aren't my true rejection - my true rejection is the mental image of killing a man to save some cockroaches.
And yet, a world without them sounds bleak and lacking in utility.
Oh, right.
Ah ... not sure what I can say to convince you if "Not for the Sake of Happiness (Alone)" didn't.
It's really abstract and difficult to explain, so I probably won't do a very good job. Peter Singer explains it pretty well in "All Animals Are Equal." Basically, we should give equal consideration to the interests of all beings. Any being capable of suffering has an interest in avoiding suffering. A more intelligent being does not have a greater interest in avoiding suffering [1]; hence, intelligence is not morally relevant.
There is a sliding scale. More capacity to feel happiness and suffering = more moral worth. Rocks, sponges, and germs have no capacity to feel happiness and suffering.
Well yeah. That's because discovery tends to increase happiness. But if it didn't, it would be pointless. For example, suppose you are tasked with sifting through a pile of sand to find which grain is the whitest. When you finish, you will have discovered something new. But the process is really boring and it doesn't benefit anyone, so what's the point? Discovery is only worthwhile if it increases happiness in some way.
I'm not saying that it's impossible to come up with an example of something that's not reducible to happiness, but I don't think discovery is such a thing.
[1] Unless it is capable of greater suffering, but that's not a trait inherent to intelligence. I think it may be true in some respects that more intelligent beings are capable of greater suffering; but what matters is the capacity to suffer, not the intelligence itself.
This sounds like a bad rule and could potentially create a sensitivity arms race. Assuming that people who practice Stoic or Buddhist techniques are successful in diminishing their capacity to suffer, does that mean they are worth less morally than before they started? This would be counter-intuitive, to say the least.
It means that inducing some typically-harmful action on a Stoic is less harmful than inducing it on a normal person. For example, suppose you have a Stoic who no longer feels negative reactions to insults. If you insult her, she doesn't mind at all. It would be morally better to insult this person than to insult a typical person.
Let me put it this way: all suffering of equal degree is equally important, and the importance of suffering is proportional to its degree.
A lot of conclusions follow from this principle.
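Put semi-formally (my own notation, nothing official): write $d(s)$ for the degree of a suffering-episode $s$ and $I(s)$ for its moral importance. The principle is

$$I(s) \propto d(s), \qquad \text{so} \qquad d(s_1) = d(s_2) \;\Rightarrow\; I(s_1) = I(s_2),$$

independent of the intelligence of whoever undergoes $s$.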
No, my point was that your valuing pain is itself a moral intuition. Picture a pebblesorter explaining that this pile is correct, while your pile is, obviously, incorrect.
So, say, an emotionless AI? A human with damaged pain receptors? An alien with entirely different neurochemistry analogs?
No. I'm saying that I value exploration/discovery/whatever even when it ultimately serves no purpose. Joe may be exploring a randomly-generated landscape, but it's better than sitting in a whitewashed room wireheading nonetheless.
Can you taboo "suffering" for me?
I've avoided using the word "suffering" or its synonyms in this comment, except in one instance where I believe it is appropriate.
Yes, it's an intuition. I can't prove that suffering is important.
If the AI does not consciously prefer any state to any other state, then it has no moral worth.
Such a human could still experience emotions, so ey would still have moral worth.
Difficult to say. If it can experience states it has an interest in promoting or avoiding, then it has moral worth.
Okay. I don't really get why, but I can't dispute that you hold that value. This is why preference utilitarianism can be nice.
... oh.
You were defining pain/suffering/whatever as generic disutility? That's much more reasonable.
... so, is a hive of bees one mind or many, or sort of both at once? Does evolution get a vote, here? If you aren't discounting optimizers that lack consciousness, you're gonna get some damn strange results with this.
Many. The unit of moral significance is the conscious mind. A group of bees is not conscious; individual bees are conscious.
(Edit: It's possible that bees are not conscious. What I meant was that if bees are conscious then they are conscious as individuals, not as a group.)
A non-conscious being cannot experience disutility, therefore it has no moral relevance.
Er... Deep Blue?
Deep Blue cannot experience disutility (i.e. negative states). Deep Blue can have a utility function to determine the state of the chess board, but that's not the same as consciously experiencing positive or negative utility.
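To make the distinction concrete, here's a minimal sketch (toy material-counting, not Deep Blue's actual evaluation function; the board encoding is hypothetical) of what a "utility function over board states" amounts to:

```python
# A toy "utility function" over chess positions: count material.
# (A hypothetical sketch, not Deep Blue's actual evaluation function.)
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(board: str) -> int:
    """Score a position from White's perspective (positive favors White).

    `board` is a toy encoding: one letter per piece on the board,
    uppercase for White, lowercase for Black. Kings are ignored.
    """
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# An engine prefers whichever move leads to the higher-scoring position,
# but nothing here *experiences* that score; it's arithmetic over data.
print(evaluate("QRRBBNNPPPPPPPP" + "qrbnpppp"))  # 39 - 24 = 15
```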
Unless you can taboo "conscious" in such a way that that makes sense, I'm gonna substitute "intelligent" for "conscious" there (which is clearly what I meant, in context).
The point with bees is that, as a "hive mind", they act as an optimizer without any individual intention.
I don't see that you can substitute "intelligent" for "conscious". Perhaps they are correlated, but they're certainly not the same. I'm definitely more intelligent than my dog, but am I more conscious? Probably not. My dog seems to experience the world just as vividly as I do. (Knowing this for certain requires solving the hard problem of consciousness, but that's where the evidence seems to point.)
It's clear to you because you wrote it, but it wasn't clear to me.