Historically, allowing your ethical system to arbitrarily promote the interests of those similar to you has led to very bad results.
No, historically speaking, it is how the human species survived, and it is still a core principle around which humans organize.
Arbitrarily promoting the interests of those more similar to yourself is the basis of families, nations, religions, and even ideologies. While all of these go wrong occasionally (religion and ideology especially are unlikely to have many friends here), I think that overall they are a net gain. People help those more similar to themselves practically all the time; it is just that when this leads to hurting others, it is far more attention-grabbing than, say, the feeling of solidarity that lets the Swedes run their cosy welfare state, or the solidarity of a tribe somewhere in New Guinea sharing their food so other tribe members don't starve.
I am pretty sure that if you removed the urge to help those more similar to themselves from humans right now by pressing a reality-modification button(TM), the actual number of people helping each other and being helped would be reduced drastically.
I'm still considering the main point of your article, but one paragraph got me thinking about something.
If I'm happy to arbitrarily weight non-human animals lower, just because I don't like the implications of considering their interests equal, I would have been free to do the same when considering how much the experiences of out-group persons should matter. When deciding my values, I want to be using an algorithm that would've gotten the right answer on slavery, even given 19th century inputs.
Could it be that slavery was wrong, not because the ethical...
I think this is just a cover for what we're really tempted to believe: humans count for more than non-humans, not because of the character of our minds, but simply because of the species we belong to.
I'm pretty much fine with a measure of speciesism. I don't at all mind explicitly valuing human minds over non-human minds just based on them being human (though I don't think I care that much about substrate, so ems are human to me).
I don't think I'm alone.
Well, take the alien hypothetical. We make contact with this alien race, and somehow they have almost the same values as us. They too have a sense of fun, and aesthetics, and they care about the interests of others. Are their interests still worth less than a human's? And do we have any right to object if they feel that our interests are worth less than their own?
I can't take seriously an ethical system that says, "Humans are more morally considerable, simply because I am human". I need my ethical system to be blind to the facts of who I am. I could never expect a non-human to agree that humans were ethically special, and that failure to convince them becomes a failure to convince myself.
I feel like there's a more fundamental objection here that I'm missing.
They are bad when they on net hurt people? What kind of an answer were you expecting?
Do the experiences of merely sentient minds receive a weight of 0, 1, or somewhere in between?
Why should all sentient self-aware minds or persons have a weight of 1 in personal moral calculation? If two people have to die, me or a random human, I think I'll pick me every time. If I have to pick between me and a really awesome person, I think I would consider that it might be better for me to die. Now you might say "aha, you only picked the more awesome person because that person is giving more awesomeness to other people!", but this doesn't really a...
I don't think self-awareness and sentience are the only dimensions along which minds can differ. The kinds of goals a mind tries to attain are much more relevant. I wouldn't want to ensure the survival of a mind that would make it more difficult for me to carry out my own goals. For example, let's say a self-aware and sentient paperclip maximizer were to be built. Can killing it be said to be unethical?
I think the minds of most non-human animals (with maybe the exception of some species of hominids) and human sociopaths are so different from ours that treating them unequally is justified in many situations.
I want to be using an algorithm that would've gotten the right answer on slavery, even given 19th century inputs.
This doesn't constrain algorithm space as much as you may think. There are plenty of algorithms that would get the wrong answer for 19th century inputs, the right answer for 12th century inputs, and the right answer for 20th century inputs. Also, let's remember what 12th and 19th century inputs are: basically partially reconstructed (by interested parties, even!) incomplete input sets. When evaluating 21st century inputs and which algo...
If ethics is simply logical deduction from a set of axioms that you think you feel (or you feel you think; I find this confusing), then is ethics really any different from aesthetics? Could we have a similar post and discussion about the optimal policy of the Metropolitan Museum of Art in admitting and excluding various works of art? Or about whether it is better to paint a room green or blue?
I'm as happy as the next person to feel righteous anger at someone who "wrongs" me in certain ways, and to kill the person and feel good about it. But my opinion that my actions and desires are the result of natural selection keeps me from thinking they have some status higher than aesthetics.
The reason for valuing all humans, as opposed to what you call "persons", is that it's much easier to tell whether something is a human than whether something is a "person". And in any case, valuing all humans makes a much better Schelling point.
I don't like the idea of torturing puppies, but that alone doesn't really tell me whether it's wrong to do so.
Because a rabbit doesn't understand its continued existence, it's not wrong to kill it suddenly and painlessly, out of sight/smell/earshot of other rabbits.
That doesn't follow. I understand its continued existence.
Great post, though. Well written, accurate and so on.
In my ethical system farm animals are food. One should provide them with proper care and minimize their suffering, but that's as far as it goes. (Also, a happy organic free-range chicken tastes better than a pen-confined one.) Hopefully some day we will be able to grow tasty brainless meat in vats, just like we grow crops in the field, and the whole issue of ethics-based vegetarianism will be moot.
I can imagine a society where (some) humans are raised for food. In that case I would apply the same ethics: minimize suffering and work toward replacing them with a less controversial food source.
When deciding my values, I want to be using an algorithm that would've gotten the right answer on slavery, even given 19th century inputs.
Even assuming that "19th century inputs" contained enough misinformation that this distinction makes sense, are you saying this independence of inputs should be a general principle? This actually seems sorta bad. A set of ethics that would give the same answer if I was raised in 1500 as it would if I was raised in 2000 also seems likely to give the same answer if I'm raised in 2500.
...If you have to choose
Is there any relevant research on the subject of animal sentience, animal "persons", etc?
I've read quite a few arguments from different points of view, but haven't found any actual science on the subject.
There's been some discussion on this site about vegetarianism previously, although less than I expected. It's a complicated topic, so I want to focus on a critical sub-issue: within a consequentialist/utilitarian framework, what should be the status of non-human animals? Do only humans matter? If non-human animals matter only a little, just how much do they matter?
I argue that species-specific weighting factors have no place in our moral calculus. If two minds experience the same sort of stimulus, the species of those minds shouldn't affect how good or bad we believe that to be. I owe the line of argument I'll be sketching to Peter Singer's work. His book Practical Ethics is the best statement of the case that I'm aware of.
Front-loaded definitions and summary:
Personhood and Sentience
Cognitively healthy mature humans have minds that differ in many ways from those of the other species on Earth. The most striking difference is probably the level of abstraction at which we are able to think. A related ability is that we are able to form detailed plans far into the future. We also have a sense of self that persists through time.
Let's call a mind that is fully self-aware a person. Now, whether or not there are any non-human persons on Earth today, non-human persons are certainly possible. They might include aliens, artificial intelligences, or extinct ancestral species. There are also humans that are not persons, due to brain damage, birth defects, or perhaps simply infancy[1]. Minds that are not self-aware in this way, but are able to have subjective experiences, let's call sentient.
Consequentialism/Utilitarianism
This is an abridged summary of consequentialism/utilitarianism, included for completeness. It's designed to tell you what I'm on about if you've never heard of this before. For a full argument in support of this framework, see elsewhere.
A consequentialist ethical framework is one in which the ethical status of an action is judged by the "goodness" of the possible worlds it creates, weighted by the probability of those outcomes[2]. Nailing down a "goodness function" (usually called a utility function) that returns an answer in [0,1] for the desirability of a possible world is understandably difficult. But the parts that are most difficult also seldom matter. The basics are easy to agree upon. Many of our subjective experiences are either sharply good or sharply bad. Roughly, a world in which minds experience lots of good things and few bad things should be preferable to a world in which minds have lots of negative experiences and few positive experiences.
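In symbols, one minimal way to write down that first sentence (the notation is just a sketch; nothing below depends on the exact formalism) is

EU(a) = \sum_{w} P(w \mid a) \cdot U(w)

where U(w) is the "goodness" of possible world w, P(w | a) is the probability of ending up in w given action a, and the more ethical action is the one with the higher expected utility EU.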
In particular, it's obvious that pain is bad, all else being equal. A little pain can be a worthwhile price for good experiences later, but it's considered a price precisely because we'd prefer not to pay it. It's a negative on the ledger. So, an action which reduces the amount of pain in the world, without doing sufficient other harms to balance it out, would be judged "ethical".
The question is: should we only consider the minds of persons -- self-conscious minds that understand they are a mind with a past, present, and future? Or should we also consider merely sentient minds? And if we do consider sentient minds, should we down-weight them in our utility calculation?
Do the experiences of merely sentient minds receive a weight of 0, 1, or somewhere in between?
How much do sentient non-persons count?
Be careful before answering "0". This implies that a person can never treat a merely sentient mind unethically, except in violation of the preferences of other persons. Torturing puppies for passing amusement would be ethically A-OK, so long as you keep it quiet in front of other persons who might mind. I'm not a moral realist -- I don't believe that when I say "X is unethical", I'm describing a property of objective reality. I think it's more like deduction given axioms. So if your utility function really is such that you ascribe 0 weight to the suffering of merely sentient minds, I can't say you're objectively correct or incorrect. I doubt many people can honestly claim this, though.
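To make explicit where such a weight would enter, here is a minimal sketch (splitting minds into just two classes, and using a single weight w, is of course a simplification):

U_{\text{total}} = \sum_{i \in \text{persons}} u_i + w \cdot \sum_{j \in \text{merely sentient}} u_j, \qquad w \in [0, 1]

Setting w = 0 gives the "only persons count" position above; w = 1 says a given experience counts the same whichever kind of mind has it.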
Is a 1.0 weight not equally ridiculous, though? Let's take a simple negative stimulus, pain. Imagine you had to choose between possible worlds in which either a cognitively normal adult human or a cognitively normal pig received a small shallow cut that crossed a section of skin connected to approximately the same number of nerves. The wound will be delivered with a sterile instrument and promptly cleaned and covered, so the only relevant thing here is the pain. The pig will also feel some fear, but let's ignore that.
You might claim that a utility function that didn't prefer that the pig feel the pain was hopelessly broken. But remember that the weight we're talking about applies to kinds of minds, not members of species. If you had to decide between a cognitively normal adult human, and a human that had experienced some brain damage such that they were merely sentient, would the decision be so easy? How about if you had to decide between a cognitively normal adult human, and a human infant?
The problem with speciesism
If you want to claim that causing the pig pain is preferable to causing a sentient but not self-aware human pain, you're going to have to make your utility function species-sensitive. You're going to have to claim that humans deserve special moral consideration, and not because of any characteristics of their minds. Simply because they're human.
It's easy to go wild with hypotheticals here. What about an alien race that was (for some unimaginable reason) just like us? What about humanoid robots with minds indistinguishable from ours?
To me it's quite obvious that species-membership, by itself, shouldn't be morally relevant. But it's plain that this idea is unintuitive, and I don't think it's a huge mystery why.
We have an emotional knee-jerk reaction to consider harm done to beings similar to ourselves as much worse than harm done to beings different from us. That's why the idea that a pig's pain might matter just as much as a human's makes you twitch. But you mustn't let that twitch be the deciding factor.
Well, that's not precisely correct: again, there's no ethical realism. There's nothing in observable reality that says that one utility function is better than another. So you could just throw in a weighting for non-human animals, satisfy your emotional knee-jerk reaction, and be done with it. However, that similarity metric once made people twitch at the idea that the pain of a person with a different skin pigmentation mattered as much as theirs.
If you listen to that twitch, that instinct that those similar to you matter more, you're following an ethical algorithm that would have led you to the wrong answer on most of the major ethical questions through history. Or at least, the ones we've since changed our minds about.
If I'm happy to arbitrarily weight non-human animals lower, just because I don't like the implications of considering their interests equal, I would have been free to do the same when considering how much the experiences of out-group persons should matter. When deciding my values, I want to be using an algorithm that would've gotten the right answer on slavery, even given 19th century inputs.
Now, having said that the experiences of merely sentient minds matter, I should reiterate that there are lots of kinds of joys and sufferings not relevant to them. Because a rabbit doesn't understand its continued existence, it's not wrong to kill it suddenly and painlessly, out of sight/smell/earshot of other rabbits. There are no circumstances in which killing a person doesn't involve serious negative utility. Persons have plans and aspirations. When I consider what would be bad about being murdered, the momentary fear and pain barely rank. Similarly, I think it's possible to please a person more deeply than a merely sentient mind. But when it comes to a simple stimulus like pain, which both minds feel similarly, it's just as bad for both of them.
When I changed my mind about this, I hadn't yet decided to particularly care about how ethical I was. This kept me from having to say "well, I'm not allowed to believe this, because then I'd have to be vegetarian, and hell no!". I later did decide to be more ethical, but doing it in two stages like that seemed to make changing my mind less traumatic.
[1] I haven't really studied the evidence about infant cognition. It's possible infants are fully self-conscious (as in, have an understanding that they are a mind plus a body that persists through time), but it seems unlikely to me.
[2] Actually I seldom see it stated probabilistically like this. I think this is surely just an oversight? If you have to choose between pushing a button that will save a life with probability 0.99, and cost a life with probability 0.01, surely it's not unethical after the fact if you got unlucky.
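Worked through with the crude simplification that a life saved is +1 utility and a life lost is -1: pushing the button has expected utility 0.99 \cdot (+1) + 0.01 \cdot (-1) = +0.98, versus 0 for doing nothing, so pushing is the ethical choice at the moment of decision, whichever outcome you then happen to draw.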