MTGandP comments on Giving What We Can, 80,000 Hours, and Meta-Charity - Less Wrong
It's really abstract and difficult to explain, so I probably won't do a very good job. Peter Singer explains it pretty well in "All Animals Are Equal." Basically, we should give equal consideration to the interests of all beings. Any being capable of suffering has an interest in avoiding suffering. A more intelligent being does not have a greater interest in avoiding suffering [1]; hence, intelligence is not morally relevant.
There is a sliding scale. More capacity to feel happiness and suffering = more moral worth. Rocks, sponges, and germs have no capacity to feel happiness and suffering.
Well yeah. That's because discovery tends to increase happiness. But if it didn't, it would be pointless. For example, suppose you are tasked with sifting through a pile of sand to find which grain is the whitest. When you finish, you will have discovered something new. But the process is really boring and it doesn't benefit anyone, so what's the point? Discovery is only worthwhile if it increases happiness in some way.
I'm not saying that it's impossible to come up with an example of something that's not reducible to happiness, but I don't think discovery is such a thing.
[1] Unless it is capable of greater suffering, but that's not a trait inherent to intelligence. I think it may be true in some respects that more intelligent beings are capable of greater suffering; but what matters is the capacity to suffer, not the intelligence itself.
This sounds like a bad rule and could potentially create a sensitivity arms race. Assuming that people who practice Stoic or Buddhist techniques succeed in diminishing their capacity to suffer, does that mean they are worth less morally than before they started? This would be counter-intuitive, to say the least.
It means that inducing some typically-harmful action on a Stoic is less harmful than inducing it on a normal person. For example, suppose you have a Stoic who no longer feels negative reactions to insults. If you insult her, she doesn't mind at all. It would be morally better to insult this person than to insult a typical person.
Let me put it this way: all suffering of equal degree is equally important, and the importance of suffering is proportional to its degree.
A lot of conclusions follow from this principle, including:
No, my point was that your valuing pain is itself a moral intuition. Picture a pebblesorter explaining that this pile is correct, while your pile is, obviously, incorrect.
So, say, an emotionless AI? A human with damaged pain receptors? An alien with entirely different neurochemistry analogs?
No. I'm saying that I value exploration/discovery/whatever even when it serves no purpose, ultimately. Joe may be exploring a randomly-generated landscape, but it's better than sitting in a whitewashed room wireheading nonetheless.
Can you taboo "suffering" for me?
I've avoided using the word "suffering" or its synonyms in this comment, except in one instance where I believe it is appropriate.
Yes, it's an intuition. I can't prove that suffering is important.
If the AI does not consciously prefer any state to any other state, then it has no moral worth.
Such a human could still experience emotions, so ey would still have moral worth.
Difficult to say. If it can experience states about which it has an interest in promoting or avoiding, then it has moral worth.
Okay. I don't really get why, but I can't dispute that you hold that value. This is why preference utilitarianism can be nice.
... oh.
You were defining pain/suffering/whatever as generic disutility? That's much more reasonable.
... so, is a hive of bees one mind, many minds, or sort of both at once? Does evolution get a vote here? If you aren't discounting optimizers that lack consciousness, you're gonna get some damn strange results with this.
Many. The unit of moral significance is the conscious mind. A group of bees is not conscious; individual bees are conscious.
(Edit: It's possible that bees are not conscious. What I meant was that if bees are conscious then they are conscious as individuals, not as a group.)
A non-conscious being cannot experience disutility, therefore it has no moral relevance.
Er... Deep Blue?
Deep Blue cannot experience disutility (i.e. negative states). Deep Blue can have a utility function for evaluating the state of the chess board, but that's not the same as consciously experiencing positive or negative utility.
Okay, I see what you mean by “experience”... but that makes “A non-conscious being cannot experience disutility” a tautology, so following it with “therefore” and a non-tautological claim raises all kinds of warning lights in my brain.
Unless you can taboo "conscious" in a way that makes that make sense, I'm gonna substitute "intelligent" for "conscious" there (which is clearly what I meant, in context).
The point with bees is that, as a "hive mind", they act as an optimizer without any individual intention.
I don't see that you can substitute "intelligent" for "conscious". Perhaps they are correlated, but they're certainly not the same. I'm definitely more intelligent than my dog, but am I more conscious? Probably not. My dog seems to experience the world just as vividly as I do. (Knowing this for certain requires solving the hard problem of consciousness, but that's where the evidence seems to point.)
It's clear to you because you wrote it, but it wasn't clear to me.
Well yes, that's the illusion of transparency for you. I assure you, I was using conscious as a synonym for intelligent. Were you interpreting it as "able to experience qualia"? Because that is both a tad tautological and noticeably different from the argument I've been making here.
Whatever. We're getting off-topic.
If you value an optimizer's goals regardless of intelligence - whether valuing a bug's desires as much as a human's, a hive mind's goals less than its individual members', or evolution's "goals" anywhere - you get results that do not appear to correlate with anything you could call human morality. If I have misinterpreted your beliefs, I would like to know how. If I have interpreted them correctly, I would like to see how you reconcile this with saving orphans by tipping over the ant farm.
If ants experience qualia at all, which is highly uncertain, they probably don't experience them to the same extent that humans do. Therefore, their desires are not as important. On the issue of the moral relevance of insects, the general consensus among utilitarians seems to be that we have no idea how vividly insects can experience the world, if at all, so we are in no position to rate their moral worth; and that we should invest more in research on insect qualia.
I think it's pretty obvious that (e.g.) dogs experience the world about as vividly as humans do, so all else being equal, kicking a dog is about as bad as kicking a human. (I won't get into the question of killing because it's massively more complicated.)
I cannot say whether this is right or wrong because we don't know enough about ant qualia, but I would guess that a single human's experience is worth the experience of at least hundreds of ants, possibly a lot more.
Like what, besides the orphans-ants thing? I don't know if you've misinterpreted my beliefs unless I have a better idea of what you think I believe. That said, I do believe that a lot of "human morality" is horrendously incorrect.
This isn't obvious to me. And it is especially not obvious given that dogs are a species where one of the primary selection effects has been human sympathy.
GOSH REALLY.
Once again, you fail to provide the slightest justification for valuing dogs as much as humans; if this was "obvious" we wouldn't be arguing, would we? Dogs are intelligent enough to be worth a non-negligible amount, but if we value all pain equally you should feel the same way about, say, mice, or ... ants.
Huh? You value individual bees, yet not ants?
How, exactly, can human morality be "incorrect"? What are you comparing it to?