Arguments Against Speciesism

Post author: Lukas_Gloor | 28 July 2013 06:24PM

There have been some posts about animals lately, for instance here and here. While normative assumptions about the treatment of nonhumans played an important role in the articles and were debated at length in the comment sections, I was missing a concise summary of these arguments. This post from over a year ago comes closest to what I have in mind, but I want to focus on some of the issues in more detail.

A while back, I read the following comment in a LessWrong discussion on uploads:

I do not at all understand this PETA-like obsession with ethical treatment of bits.

Aside from (carbon-based) humans, which other beings deserve moral consideration? Nonhuman animals? Intelligent aliens? Uploads? Nothing else?

This article is intended to shed light on these questions. It is not, however, the intent of this post to advocate a specific ethical framework. Instead, I'll try to show that some ethical principles held by a lot of people are inconsistent with some of their other attitudes -- an argument that doesn't rely on ethics being universal or objective.

More precisely, I will develop the arguments behind anti-speciesism (and the rejection of analogous forms of discrimination, such as discrimination against uploads) to point out common inconsistencies in some people's values. This will also provide an illustrative example of how coherentist ethical reasoning can be applied to shared intuitions. If there are no shared intuitions, ethical discourse will likely be unfruitful, so it is likely that not everyone will draw the same conclusions from the arguments here. 

 

What Is Speciesism?

Speciesism, a term popularized (but not coined) by the philosopher Peter Singer, is meant to be analogous to sexism or racism. It refers to a discriminatory attitude against a being that is given less ethical consideration (i.e., whose welfare or interests are cared about less) solely because of its "wrong" species membership. The "solely" here is crucial, and it is misunderstood often enough to warrant the redundant emphasis.

For instance, it is not speciesist to deny pigs the right to vote, just like it is not sexist to deny men the right to have an abortion performed on their body. Treating beings of different species differently is not speciesist if there are relevant criteria for doing so. 

Singer summarized his case against speciesism in this essay. The argument that does most of the work is often referred to as the argument from marginal cases. A perhaps less anthropocentric, more fitting name would be argument from species overlap, as some philosophers (e.g. Oscar Horta) have pointed out. 

The argument boils down to the question of choosing relevant criteria for moral concern. What properties do human beings possess that make us think it is wrong to torture them? Or to kill them? (Note that these are two different questions.) The argument from species overlap points out that all the typical or plausible suggestions for relevant criteria apply just as well to dogs, pigs or chickens as they do to human infants or late-stage Alzheimer's patients. Therefore, giving less ethical consideration to the former would be based merely on species membership, which is just as arbitrary as choosing race or sex as the relevant criterion (further justification for that claim follows below).

Here are some examples of commonly suggested criteria. Those who want to may pause at this point and think about which criteria they consult when judging whether it is wrong to inflict suffering on a being (and, separately, which are relevant to the wrongness of killing).

 

The suggestions are:

A: Capacity for moral reasoning

B: Being able to reciprocate

C: (Human-like) intelligence

D: Self-awareness

E: Future-related preferences; future plans

E': Preferences / interests (in general)

F: Sentience (capacity for suffering and happiness)

G: Life / biological complexity

H: What I care about / feel sympathy or loyalty towards

 

The argument from species overlap points out that not all humans are equal. The sentiment behind "all humans are equal" is not that they are literally equal, but that equal interests/capacities deserve equal consideration. None of the above criteria except (in some empirical cases) H implies that human infants or late-stage demented people should be given more ethical consideration than cows, pigs or chickens.

While H is an unlikely criterion for direct ethical consideration (it could justify genocide in specific circumstances!), it is an important indirect factor. Most humans have much more empathy for fellow humans than for nonhuman animals. While this is not a criterion for giving humans more ethical consideration per se, it is nevertheless a factor that strongly influences ethical decision-making in real life.

However, such factors can't apply to ethical reasoning at a theoretical/normative level, where all the relevant variables are looked at in isolation in order to come up with a consistent ethical framework that covers all possible cases.

If there were no intrinsic reasons for giving moral consideration to babies, then a society in which some babies were (factory-)farmed would be totally fine as long as the people were okay with it. If we consider this implication unacceptable, then the same must apply to the situations nonhuman animals find themselves in on farms.

Side note: The question of whether killing a given being is wrong, and if so, "why" and "how wrong exactly", is complex and outside the scope of this article. The focus here will be on suffering rather than killing, and by suffering I mean something like wanting to get out of one's current conscious state, or wanting to change some aspect of it. The empirical issue of which beings are capable of suffering is a different matter that I will (only briefly) discuss below. So in this context, giving a being moral consideration means that we don't want it to suffer, leaving open the question of whether killing it painlessly is bad/neutral/good or prohibited/permissible/obligatory.

The main conclusion so far is that if we care about all the suffering of members of the human species, and if we reject question-begging reasoning that could also be used to justify racism or other forms of discrimination, then we must also care fully about suffering happening in nonhuman animals. This would imply that x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads. (Though admittedly the latter wouldn't be anti-speciesist but rather anti-"substratist", or anti-"fleshist".)

The claim is that there is no way to block this conclusion without:

1. using reasoning that could analogically be used to justify racism or sexism
or
2. using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies in cases where utilitarian calculations prohibit it.

I've tried and have asked others to try -- without success. 

 

Caring about suffering

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people share this intuition/value of mine: that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past.

Some might be willing to bite the bullet at this point, trusting some strongly held ethical principle of theirs (e.g. A, B, C, D or E above) and following it to the conclusion of excluding humans who lack certain cognitive capacities from moral concern. One could point out that people's empathy and indirect considerations about human rights, societal stability and so on will ensure that this "loophole" in such an ethical view almost certainly remains without consequences for beings with human DNA. It is a convenient Schelling point, after all, to care about all humans (or at least all humans outside their mother's womb). However, I don't see why absurd conclusions that will likely remain hypothetical are significantly less bad than other absurd conclusions. Their mere possibility undermines the whole foundation one's decision algorithm is grounded in. (Compare hypothetical problems for specific decision theories.)

Furthermore, while D and E seem like plausible candidates for reasons against killing a being with these properties (E is in fact Peter Singer's view on the matter), none of the criteria from A to E seem relevant to suffering, i.e. to whether a being can be harmed or benefited. The case that these are bottom-up criteria for the moral relevance of suffering (or happiness) is very weak, to say the least.

Maybe that's the speciesist's central confusion: the idea that the rationality/sapience of a being is somehow relevant to whether its suffering matters morally. Clearly, in our own case, this does not seem to hold. If I were told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not as if I would be less afraid of the torture or care less about averting it!

Those who do consider biting the bullet should ask themselves whether they would defend that view in all contexts, or whether they might be driven towards such a conclusion by a self-serving bias. There seems to be a strange and sudden increase in the number of people willing to claim that there is nothing intrinsically wrong with torturing babies whenever the subject is animal rights, or more specifically, the steak they intend to have for dinner.

It is an entirely different matter if people genuinely think that animals or human infants or late-stage demented people are not sentient. To be clear about what is meant by sentience: 

A sentient being is one for whom "it feels like something to be that being". 

I find it highly implausible that only self-aware or "sapient" beings are sentient, but if that were true, it would constitute a compelling reason against caring about at least most nonhuman animals, for the same reason that it would be pointless to care about pebbles for the pebbles' sake. If all nonhumans truly weren't sentient, then obviously singling out humans for the sphere of moral concern would not be speciesist.

What irritates me, however, is that anyone advocating such a view should, it seems to me, still factor in a significant probability of being wrong, given that both philosophy of mind and the neuroscience that goes with it are hard and, as far as I'm aware, not quite settled yet. The issue matters because of the huge number of nonhuman animals at stake and because of the terrible conditions these beings live in.

I rarely see this uncertainty acknowledged. If we imagine the torture-scenario outlined above, how confident would we really be that the torture "won't matter" if our own advanced cognitive capacities are temporarily suspended? 

 

Why species membership really is an absurd criterion

At the beginning of the article, I wrote that I'd get back to this point for those not yet convinced. Some readers may still feel that there is something special about being a member of the human species. Some may be tempted to think of the concept of "species" as if it were a fundamental concept, a Platonic form.

The following likely isn't news to most of the LW audience, but it is worth spelling out anyway: there exists a continuum of "species" in thing-space as well as on the actual evolutionary timescale. Species boundaries seem obvious only because the intermediates kept evolving or went extinct. And even if that weren't the case, we could imagine it. The theoretical possibility is enough to make the philosophical case, even though, psychologically, actualities are more convincing.

We can imagine a continuous line-up of ancestors, always daughter and mother, from modern humans back to the common ancestor of humans and, say, cows, and then forward in time again to modern cows. How would we divide this line-up into distinct species? Morally significant lines would have to be drawn between mother and daughter, but that seems absurd! There are several different definitions of "species" used in biology. A common criterion -- for sexually reproducing organisms, anyway -- is whether groups of beings (of different sexes) can have fertile offspring together. If so, they belong to the same species.

That is a rather odd way of determining whether one cares about the suffering of some hominid creature in the line-up of ancestors: why should the capacity to produce fertile offspring be relevant to whether some instance of suffering matters to us?

Moreover, is that really the terminal value of people who claim they only care about humans, or could it be that they would, upon reflection, revoke such statements?

And what about transhumanism? I remember that a couple of years ago, I thought I had found a decisive argument against human enhancement: I thought it would likely lead to speciation, and somehow the thought of that directly implied that posthumans would treat the remaining humans badly, so the whole thing became immoral in my mind. Obviously this is absurd; there is nothing wrong with speciation per se, and if posthumans are anti-speciesist, the remaining humans will have nothing to fear! But given the speciesism in today's society, it is all too understandable that people would be concerned about this. If we consider the huge extent to which a posthuman, not to mention a strong AI, would be superior to current humans, isn't that a bit like comparing chickens to us?

A last possible objection I can think of: suppose one held the belief that group averages are what matters, and that all members of the human species deserve equal protection because of the group average on some criterion that is considered relevant -- a criterion that would, without the group-average rule, deny moral consideration to some sentient humans.

This defense doesn't work either. Aside from seeming suspiciously arbitrary, such a view would imply absurd conclusions. A thought experiment for illustration: a pig with a macro-mutation is born; she develops child-like intelligence and the ability to speak. Do we refuse to let her live unharmed -- or even go to school -- simply because she belongs to a group (defined presumably by snout shape, or DNA, or whatever the criteria for "pigness" are) whose average is too low?

Or imagine you are the head of an architecture bureau looking to hire a new aspiring architect. Is tossing out an application written by a brilliant woman going to increase the expected success of your firm, even assuming that women are, on average, less skilled at spatial imagination than men? Surely not!

Moreover, taking group averages as our ethical criterion requires us to first define the relevant groups. Why even take species-groups instead of groups defined by skin color, weight or height? Why single out one property and not others? 

 

Summary

Our speciesism is an anthropocentric bias without any reasonable foundation. It would be completely arbitrary to give special consideration to a being simply because of its species membership; doing so leads to a number of implications that most people clearly reject. A strong case can be made that suffering is bad in virtue of being suffering, regardless of where it happens. If the suffering or deaths of nonhuman animals deserve no ethical consideration, then, barring speciesism, human beings with the same relevant properties (all the plausible ones seem to come down to having similar levels of awareness) deserve no intrinsic ethical consideration either.

Assuming that we would feel uncomfortable giving justifications or criteria for our scope of ethical concern that can analogously be used to defend racism or sexism, those not willing to bite the bullet about torturing babies are forced by considerations of consistency to care about animal suffering just as much as they care about human suffering. 

Such a view leaves room for probabilistic discounting in cases where we are empirically uncertain whether beings are capable of suffering, but we should be on the lookout for biases in our assessments. 

Edit: As Carl Shulman has pointed out, discounting may also apply to "intensity of sentience", because it seems at least plausible that shrimp (for instance), if they are sentient, can experience less suffering than e.g. a whale.
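
To make these two kinds of discounting concrete, here is a minimal sketch in Python. All numbers are invented placeholders for illustration, not estimates defended anywhere in this post:

```python
# Illustrative only: the probability and intensity figures below are
# placeholders, not empirical estimates.
def expected_badness(p_sentient, intensity, raw_suffering):
    # Weight suffering by the probability that the being is sentient at all
    # (empirical uncertainty) and by the plausible intensity of its experience.
    return p_sentient * intensity * raw_suffering

# Hypothetical comparison: one shrimp vs. one whale, same raw stimulus.
shrimp = expected_badness(p_sentient=0.2, intensity=0.05, raw_suffering=1.0)
whale = expected_badness(p_sentient=0.95, intensity=1.0, raw_suffering=1.0)
print(shrimp, whale)  # roughly 0.01 vs. 0.95: discounted, but not zero
```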

Comments (474)

Comment author: Pablo_Stafforini 28 July 2013 06:51:59PM *  12 points [-]

A fine piece. I hope it triggers a high-quality, non-mindkilled debate about these important issues. Discussion about the ethical status of non-human animals has generally been quite heated in the past, though happily this trend seems to have reversed recently (see posts by Peter Hurford and Jeff Kaufman).

Comment author: shminux 28 July 2013 08:28:20PM *  12 points [-]

A generic problem with this type of reasoning is some form of the repugnant conclusion. If you don't put a Schelling fence somewhere, you end up giving more moral weight to a large enough number of cockroaches, bacteria or viruses than to humans.

In actuality, different groups of people implicitly have different Schelling points and then argue about whose Schelling point is morally right. A standard Schelling point, say, 100 years ago, was all humans or some subset of humans. The situation has gotten more complicated recently, with some including only humans, humans and cute baby seals, humans and dolphins, humans and pets, or just pets without humans, etc.

So a consequentialist question would be something like

Where does it make sense to put a boundary between caring and not caring, under what circumstances and for how long?

Note this is no longer a Schelling point, since no implicit agreement of any kind is assumed. Instead, one tests possible choices against some terminal goals, leaving morality aside.

Comment author: Xodarap 28 July 2013 08:42:26PM *  5 points [-]

you end up giving more moral weight to a large enough number of cockroaches, bacteria or viruses than to humans.

Why do you say that? Bacteria, viruses etc. seem to lack not just one, but all of the capacities A-H the OP mentioned.

Comment author: Ruairi 28 July 2013 08:45:16PM *  9 points [-]

I feel like you're saying this:

"There are a great many sentient organisms, so we should discriminate against some of them"

Is this what you're saying?

EDIT: Sorry, I don't mean that bacteria or viruses are sentient. Still, my original question stands.

Comment author: shminux 28 July 2013 09:31:20PM 2 points [-]

All I am saying is that one has to draw an arbitrary care/don't-care boundary somewhere, and "human/non-human" is a rather common and easily determined Schelling point in most cases. It fails in some, like the intelligent-pig example from the OP, but then every boundary fails on some example.

Comment author: Ruairi 28 July 2013 10:23:11PM 5 points [-]

Where does sentience fail as a boundary?

Comment author: RomeoStevens 29 July 2013 09:33:40AM 2 points [-]

If sentience isn't a boolean condition.

Comment author: SaidAchmiz 28 July 2013 09:07:58PM 1 point [-]

A generic problem with this type of reasoning is some form of the repugnant conclusion. If you don't put a Schelling fence somewhere, you end up giving more moral weight to a large enough number of cockroaches, bacteria or viruses than to humans.

Indeed. I've alluded to this before as "how many chickens would I kill/torture to save my grandmother?" The answer, of course, is N, where N may be any number.

This means that, if we start with basic (total) utilitarianism, we have to throw out at least one of the following:

  1. Additive aggregation of value.
  2. Valuing my grandmother a finite amount (as opposed to an infinite amount).
  3. Valuing a chicken a nonzero amount.

Throwing out #2 leads to incorrect results (it is not the case that I value my grandmother more than anything else — sorry, grandma). Throwing out #1 is possible, and I have serious skepticism about that one anyway... but it also leads to problems (don't I think that killing or torturing two people is worse than killing or torturing one person? I sure do!).

Throwing out #3 seems unproblematic.
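
To spell out the arithmetic behind the trilemma: under additive aggregation (premise 1), any nonzero chicken value (premise 3) combined with any finite grandmother value (premise 2) fixes a finite break-even N, contradicting "N may be any number". A minimal sketch in Python, with arbitrary placeholder values:

```python
grandma = 1_000_000  # premise 2: a finite value (arbitrary units)
chicken = 1          # premise 3: a nonzero value

# Premise 1 (additive aggregation): N chickens are worth N * chicken,
# so any N beyond grandma / chicken outweighs her.
n = grandma // chicken + 1
assert n * chicken > grandma
print(n)  # 1000001 -- a finite N, so one of the three premises must go
```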

Comment author: shminux 28 July 2013 09:14:30PM 4 points [-]

Throwing out #3 seems unproblematic.

It is problematic once you start fine-graining, exactly like in the dust specks/torture debate, where killing a chicken ~ dust speck and killing your grandma ~ torture. There is almost certainly an unbroken chain of comparables between the two extremes.

Comment author: SaidAchmiz 28 July 2013 09:23:25PM 0 points [-]

For what it's worth, I also choose specks in specks/torture, and find the "chain of comparables" argument unconvincing. (I'd be happy to discuss this, but this is probably not the thread for it.)

That, however, is not all that relevant in practice: the human/nonhuman divide is wide (unless we decide to start uplifting nonhuman species, which I don't think we should); the smartest nonhuman animals (probably dolphins) might qualify for moral consideration, but we don't factory-farm dolphins (and I don't think we should), and chickens and cows certainly don't qualify; the question of which humans do or don't qualify is tricky, but that's why I think we shouldn't actually kill/torture them with total impunity (cf. bright lines, Schelling fences, etc.).

In short, we do not actually have to do any fine-graining. In the case where we are deciding whether to torture, kill, and eat chickens — that is, the actual, real-world case — my reasoning does not encounter any problems.

Comment author: shminux 28 July 2013 09:42:01PM *  1 point [-]

Do you assign any negative weight to a suffering chicken? For example, is it OK to simply rip a leg off a live one and make dinner, while the injured bird writhes on the ground, slowly bleeding to death?

Comment author: SaidAchmiz 28 July 2013 10:18:25PM 0 points [-]

Sure. However, you raise what is in principle a very solid objection, and so I would like to address it.

Let's say that I would, all else being equal, prefer that a dog not be tortured. Perhaps I am even willing to take certain actions to prevent a dog from being tortured. Perhaps I also think that two dogs being tortured is worse than one dog being tortured, etc.

However, I am willing to let that dog, or a million dogs, or any number of dogs, be tortured to save my grandmother from the same fate.

What are we to make of this?

In that case, some component of our utilitarianism might have to be re-examined. Perhaps dogs have a nonzero value, and a lot of dogs have more value than only a few dogs, but no quantity of dogs adds up to one grandmother; but on the other hand, some things are worth more than one grandmother (two grandmothers? all of humanity?).

Real numbers do not behave this way. Perhaps they are not a sufficient number system for utilitarian calculations.

(Of course, it's possible to suppose that we could, if we chose, construct various hypotheticals (perhaps involving some complex series of bets) which would tease out some inconsistency in that set of valuations. That may be the case here, but nothing obvious jumps out at me.)
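
One way to make "no quantity of dogs adds up to one grandmother" precise is lexicographic ordering over value pairs, which Python's tuple comparison implements directly. A sketch, with the two tiers assumed purely for illustration:

```python
# Value as a (human tier, dog tier) pair: the first coordinate dominates,
# and the second only breaks ties -- i.e. lexicographic order.
grandma_saved = (1, 0)       # one grandmother, no dogs
dogs_saved = (0, 10**100)    # no grandmother, astronomically many dogs

assert grandma_saved > dogs_saved  # holds for ANY number of dogs
assert (0, 2) > (0, 1)             # yet two dogs still beat one dog

# No single real-valued utility can satisfy both comparisons for every N,
# which is one reading of "real numbers do not behave this way".
```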

Comment author: CarlShulman 28 July 2013 10:36:44PM *  0 points [-]

[Removed.]

Comment author: SaidAchmiz 28 July 2013 10:50:21PM 0 points [-]

Hours you spend helping dogs are hours you could have spent helping humans, e.g. having more money is associated with longer life.

This point is of course true, hence my "all else being equal" clause. I do not actually spend any time helping dogs, for pretty much exactly the reasons you list: there are matters of human benefit to attend to, and dogs are strictly less important.

Your last paragraph is mostly moot, since the behavior you allude to is not at all my actual behavior, but I would like to hear a bit more about the behavior model you refer to. (A link would suffice.)

I'm not entirely sure what the relevance of the speed limit example is.

Comment author: DanArmak 28 July 2013 10:40:08PM 1 point [-]

Real numbers do not behave this way. Perhaps they are not a sufficient number system for utilitarian calculations.

Hence this recent post on surreal utilities.

Comment author: Adriano_Mannino 28 July 2013 10:43:17PM 8 points [-]

What about a random human instead of your grandmother? What if the human's/your grandmother's cognitive capacities were lower than the dog's or the chimp's? – What would a good altruist do?

How do you block the "chain of comparables"?

Comment author: shminux 28 July 2013 10:59:09PM -1 points [-]

Real numbers do not behave this way. Perhaps they are not a sufficient number system for utilitarian calculations.

My suspicion is that what has to give is the assumption of unlimited transitivity in VNM, but I never bothered to flesh out the details.

Comment author: pragmatist 30 July 2013 02:55:41PM *  1 point [-]

Actually, I believe it's the continuity axiom that rules out lexicographic preferences.
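
For reference, a textbook statement of the axiom and the standard two-tier counterexample (sketched from memory, not taken from the thread):

```latex
% Continuity: for lotteries A > B > C there exists p in (0,1) with
%   pA + (1-p)C ~ B.
% Lexicographic preferences violate this. With (grandma, dogs) values
%   A = (1,0) > B = (0,1) > C = (0,0),
% any p > 0 gives the mixture pA + (1-p)C = (p,0) positive weight on the
% top tier, so it beats B; at p = 0 it loses to B. No p yields
% indifference, so continuity fails.
\[
  A \succ B \succ C \;\Longrightarrow\; \exists\, p \in (0,1) :\;
  pA + (1-p)C \sim B
\]
```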

Comment author: shminux 30 July 2013 05:56:33PM *  -1 points [-]

I examined this one, too, but the continuity axiom intuitively makes sense for comparables, except possibly in cases of extreme risk aversion. I am leaning more toward abandoning the transitivity chain when the options are too far apart: something like "A > B" carrying some uncertainty that increases with the chain length between A and B, or with some other quantifiable value.

Comment author: habanero 29 July 2013 05:05:09PM 2 points [-]

However, I am willing to let that dog, or a million dogs, or any number of dogs, be tortured to save my grandmother from the same fate.

This sounds a bit like the dust speck vs. torture argument, where some claim that no number of dust specks could ever outweigh torture. I think that there we have to deal with scope insensitivity. On utilitarian aggregation I recommend section V of the following paper; it shows why the alternatives are absurd. http://spot.colorado.edu/~norcross/2Dogmasdeontology.pdf

Comment author: SaidAchmiz 29 July 2013 05:20:34PM *  0 points [-]

As I've said elsewhere in this thread, I also choose SPECKS in specks/torture. As for the paper, I will read it when I have time, and try to get back to you with my thoughts.

Edit: And see this thread for a discussion of whether scope neglect applies to my views.

Comment author: SaidAchmiz 29 July 2013 05:23:39PM *  0 points [-]

By the way, the dogs vs. grandma case differs in an important way from specks vs. torture:

The specks are happening to humans.

It is not actually inconsistent to choose TORTURE in specks/torture while choosing GRANDMA in dogs/grandma. All you have to do is value humans (and humans' utility) while not valuing dogs (or placing dogs on a "lower moral tier" than your grandmother/humans in general).

In other words, "do many specks add up to torture" and "do many dogs add up to grandma" are not the same question.

Comment author: Adriano_Mannino 28 July 2013 10:37:36PM *  8 points [-]

Good question, shminux. Another way of putting it: If cows and chickens don't count, why have any animal protection laws? Their guiding principle usually is the avoidance of unnecessary animal suffering. And if we agree that eating animals (1) causes animal suffering and is (2) unnecessary because we can have animal-free foods that are equally tasty, then the guiding principle of the agreed upon animal protection laws actually already implies that we should stop farming chickens and cows.

Note also that the Three Rs – which guide animal testing in many countries – reaffirm the above principle. Many believe it should be illegal to cause any animal suffering if it's unnecessary, i.e. if there is an acceptable alternative for the purpose. (And if there is not, we are under an obligation to try and create one.) If we take seriously what almost everybody accepts when it comes to animal testing, we should stop farming animals.

It seems that the arguments for according non-human animals a very important place in our practical ethics can only be blocked by claiming that they/their suffering matters zero. If it matters just a little, the aggregate animal suffering is still likely to matter a lot. And even if we are inclined to believe that it matters zero, we should retain some non-negligible uncertainty, at least if our view (like Jeff's) is based on the claim that some not-really-understood (!) combination of suffering with self-awareness, intelligence or other preferences is what makes for moral badness. If we are wrong on this one, the consequences will be catastrophic. We should take this into account.

Comment author: Xodarap 28 July 2013 09:39:26PM *  1 point [-]

The problem with throwing out #3 is you also have to throw out:

(4) How we value a being's moral worth is a function of their abilities (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)

Which is a rather nice proposition.

Edit: As Said points out, this should be:

(4) How we value a being's pain is a function of their ability to feel pain (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)

Comment author: SaidAchmiz 28 July 2013 09:54:21PM *  0 points [-]

You don't, actually. For example, the following is a function:

Let a be a variable representing the abilities of a being. Let E(a) be the ethical value of a being with abilities a. The domain is the nonnegative reals, the range is the reals. Let H be some level of abilities that we have chosen to identify as "human-level abilities". We define E(a) thus:

a < H : E(a) = 0.
a ≥ H : E(a) = f(a), where f(x) is some other function of our choice.
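
The same threshold function in runnable form (a sketch; the numeric H and the choice of f are placeholders, exactly as in the comment above):

```python
H = 100.0  # placeholder for "human-level abilities"

def f(a):
    # Placeholder for "some other function of our choice"; linear here.
    return a

def E(a):
    # Ethical value of a being with ability level a (a nonnegative real):
    # zero below the human-level threshold, f(a) at or above it.
    return 0.0 if a < H else f(a)

print(E(99.9), E(100.0))  # 0.0 100.0 -- a step discontinuity at H
```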

Comment author: Xodarap 28 July 2013 10:10:00PM *  0 points [-]

Fair enough. I've updated my statement:

(4) How we value a being's pain is a function of their ability to feel pain (or other faculties that in some way relate to pain).

Otherwise we could let H be "maleness" and justify sexism, etc.

Comment author: SaidAchmiz 28 July 2013 10:25:41PM *  0 points [-]

Uh, would you mind editing your statement back, or adding a note about what it said before? Otherwise I am left looking like a crazy person, spouting non sequiturs ;) Edit: Thanks!

Anyway, your updated statement is no longer vulnerable to my objection, but neither is it particularly "nice" anymore (that is, I don't endorse it, and I don't think most people here who take the "speciesist" position do either).

(By the way, letting H be "maleness" doesn't make a whole lot of sense. It would be very awkward, to say the least, to represent "maleness" as some nonnegative real number; and it assumes that the abilities captured by a are somehow parallel to the gender spectrum; and it would make it so that we value male chickens but not human women; and calling "maleness" a "level of abilities" is pretty weird.)

Comment author: Xodarap 28 July 2013 11:37:29PM 0 points [-]

Haha, sure, updated.

But why don't you think it's "nice" to require abilities to be relevant? If you feel pain more strongly than others do, then I care more about when you're in pain than when others are in pain.

Comment author: SaidAchmiz 28 July 2013 11:48:36PM *  0 points [-]

I probably[1] do as well...

... provided that you meet my criteria for caring about your pain in the first place — which criteria do not, themselves, have anything directly to do with pain. (See this post).

[1] Well, at first glance. Actually, I'm not so sure; I don't seem to have any clear intuitions about this in the human case — but I definitely do in the sub-human case, and that's what matters.

Comment author: Xodarap 30 July 2013 12:04:45PM 0 points [-]

Well, if you follow that post far enough you'll see that the author thinks animals feel something that's morally equivalent to pain, s/he just doesn't like calling it "pain".

But assuming you genuinely don't think animals feel something morally equivalent to pain, why? That post gives some high level ideas, but doesn't list any supporting evidence.

Comment author: SaidAchmiz 30 July 2013 02:59:22PM 0 points [-]

But assuming you genuinely don't think animals feel something morally equivalent to pain, why?

I had a longer response typed out, about what properties make me assign moral worth to entities, before I realized that you were asking me to clarify a position that I never took.

I didn't say anything about animals not feeling pain (what does "morally equivalent to pain" even mean?). I said I don't care about animal pain.

... the more I write this response, the more I want to ask you to just reread my comment. I suspect this means that I am misunderstanding you, or in any case that we're talking past each other.

Comment author: Vaniver 28 July 2013 09:58:42PM *  6 points [-]

The answer, of course, is N, where N may be any number. ... Throwing out #3 seems unproblematic.

Relatedly, you could choose to throw out your ability to assess N. When you say N could be any number, that looks to me like scope neglect. I don't have a good sense of what a billion chickens is like, or what a billionth chance of dying looks like, and so I don't expect my intuitions to give good answers in that region. If you ask the question as "how many chickens would I kill/torture to extend my grandmother's life by one second?", then if you do actually value chickens at zero then the answer will again be N, but that seems much less intuitive.

So it looks like an answer to the 'save' question that avoids the incorrect results is something like "I don't know how many, but I'm pretty sure it's more than a million."

Comment author: SaidAchmiz 28 July 2013 10:07:28PM 0 points [-]

If you ask the question as "how many chickens would I kill/torture to extend my grandmother's life by one second?", then if you do actually value chickens at zero then the answer will again be N, but that seems much less intuitive.

The answer is, indeed, still the same N.

Relatedly, you could choose to throw out your ability to assess N. When you say N could be any number, that looks to me like scope neglect.

I don't find scope neglect to be a serious objection here. It's certainly relevant in cases of inconsistencies, like the classic "how much would you pay to save a thousand / a million birds from oil slicks" scenario, but where is the inconsistency here? Scope neglect is a scaling error — what quantity is it you think I am scaling incorrectly?

The "scope neglect" objection also misconstrues what I am saying. When I say "I would kill/torture N chickens to save my grandmother", I am here telling you what I would, in fact, do. Offer me this choice right now, and I will make it. This is the input to the discussion. I have a preference for my grandmother's life over any N chickens, and this is a preference that I support on consideration — it is reflectively consistent.

For "scope neglect" to be a meaningful objection, you have to show that there's some contradiction, like if I would torture up to a million chickens to give my grandmother an extra day of life, but also up to a million to give her an extra year... or something to that effect. But there's no contradiction, no inconsistency.

Comment author: Vaniver 29 July 2013 12:21:01AM *  1 point [-]

Scope neglect is a scaling error — what quantity is it you think I am scaling incorrectly?

When I imagine sacrificing one chicken, it looks like a voodoo ritual or a few pounds of meat, worth maybe tens of dollars. When I imagine sacrificing a thousand chickens, it looks like feeding a person for several years, and maybe tens of thousands of dollars. When I imagine sacrificing a million chickens, it looks like feeding a thousand people for several years, and maybe tens of millions of dollars. When I imagine sacrificing a billion chickens, it looks like feeding millions of people for several years, and a sizeable chunk of the US poultry industry. When I imagine sacrificing a trillion chickens, it looks like feeding the population of the US for a decade, and several times the global poultry industry. (I know this is in terms of their prey value, but since I view chickens as prey that's how I imagine them, not in terms of individual subjective experience.)

And that's only 1e12! There are lots of bigger numbers. What I meant by scope neglect was it looked to me like you took the comparison between one chicken and one human and rounded your impression of their relative values to 0, rather than trying to find a level where you're indifferent between them. When I imagine weighing one person against the global poultry industry, it's not obvious to me that one person is the right choice, and it feels to me that if it's not obvious, you can just increase the number of chickens.

One counterargument to this is "but chickens and humans are on different levels of moral value, and it's wrong to trade off a higher level for a lower level." I don't think that's a good approach to morality, and I got the impression that was not your approach since you were reluctant to throw out #2 (which many people who do endorse multi-level moralities are willing to do).

Comment author: SaidAchmiz 29 July 2013 12:47:21AM 0 points [-]

I... don't see how your examples/imagery answer my question.

When I imagine weighing one person against the global poultry industry, it's not obvious to me that one person is the right choice, and it feels to me that if it's not obvious, you can just increase the number of chickens.

It is completely obvious to me. (I assume by "global poultry industry" you mean "that number of chickens", since if we literally eradicated global chicken production, lots of bad effects (on humans) would result.)

One counterargument to this is "but chickens and humans are on different levels of moral value, and it's wrong to trade off a higher level for a lower level." I don't think that's a good approach to morality, and I got the impression that was not your approach since you were reluctant to throw out #2 (which many people who do endorse multi-level moralities are willing to do).

Don't be so sure! Multi-level morality, by the way, does not necessarily mean that my grandmother occupies the top level all by herself. However, that's a separate discussion; I started this subthread from an assumption of basic utilitarianism.

Anyway, I think — with apologies — that you are still misunderstanding me. Take this:

What I meant by scope neglect was it looked to me like you took the comparison between one chicken and one human and rounded your impression of their relative values to 0, rather than trying to find a level where you're indifferent between them.

There is no level where I'd be indifferent between them. That's my point. Why would I try to find such a level? What moral intuition do you think I might have that would motivate me to try this?

Comment author: Vaniver 29 July 2013 01:49:13AM 5 points [-]

Anyway, I think — with apologies — that you are still misunderstanding me.

Yes and no. I wasn't aware that you were using a multi-level morality, but agree with you that it doesn't obviously break and doesn't require infinite utilities in any particular level.

That said, my experience has been that every multi-level morality I've looked at hard enough has turned out to map to the real line, but because of measurement difficulties it looked like there were clusters of incomparable utilities. It is very hard to tell the difference between a chicken being worth 0 people and 1e-12 people and 1e-24 people, and so when someone says that it's 0 I don't take their confidence as informative. If they're an expert in decision science and eliciting this sort of information, then I do take it seriously, but I'm still suspicious that This Time It's Different.

Another big concern here is revealed preferences vs. stated preferences. Many people, when you ask them about it, will claim that they would not accept money in exchange for a risk to their life, but then in practice do so continually -- for example, on the level where they accept $10 in exchange for a one-in-a-million chance of dying. One interpretation is that they're behaving irrationally, but I think the more plausible interpretation is that they're acting rationally but talking irrationally. (Talking irrationally can be a rational act, like I talk about here.)
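
For concreteness, the $10-for-a-one-in-a-million-risk trade implies a finite implicit price on one's own life -- the standard "value of a statistical life" arithmetic (numbers from the comment; the break-even framing is an added assumption):

```python
payment = 10.0  # dollars accepted
risk = 1e-6     # added probability of dying

# Under expected-value reasoning, accepting the trade reveals a price
# on one's life of at most payment / risk.
implied_value_of_life = payment / risk
print(implied_value_of_life)  # 10000000.0 -- ten million dollars
```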

Comment author: SaidAchmiz 29 July 2013 02:18:47AM 1 point [-]

Well, as far as revealed vs. stated preferences go, I don't think we have any way of subjecting my chicken vs. grandmother preference to a real-world test, so I suppose You'll Just Have To Take My Word For It. As for the rest...

It is very hard to tell the difference between a chicken being worth 0 people and 1e-12 people and 1e-24 people, and so when someone says that it's 0 I don't take their confidence as informative.

What would it mean for me to be mistaken about this? Are you suggesting that, despite my belief that I'd trade any number of chickens to save my grandmother, there's some situation we might encounter, some really large number of chickens, faced with which I would say: "Well, shit. I guess I'll take the chickens after all. Sorry, grandma"?

I find it very strange that you are taking my comments to be statements about which particular real number value I would assign to a single chicken. I certainly do not intend them that way. I intend them to be statements about what I would do in various situations; which choice, out of various sets of options, I would make.

Whether or not we can then transform those preferences into real-number valuations of single chickens, or sets of many chickens, is a question we certainly could ask, but the answer to that question is a conclusion that we would be drawing from the givens. That conclusion might be something like "my preferences do not coherently translate into assigning a real-number value to a chicken"! But even more importantly, we do not have to draw any conclusion, assign any values to anything, and it would still, nonetheless, be a fact about my preferences that I would trade any number of chickens for my grandmother. So it does not make any sense whatsoever to declare that I am mistaken about my valuation of a chicken, when I am not insisting on any such valuation to begin with.

Comment author: Vaniver 29 July 2013 03:31:42AM 0 points [-]

What would it mean for me to be mistaken about this?

Basically, what you suggested, but generally it manifests in the other direction: instead of some really large number of chickens, it manifests as some really small chance of saving grandma.

I should also make clear that I'm not trying to convince you that you value chickens, but that it makes more sense to have real-valued utilities for decision-making than multi-level utility. This is mostly useful when thinking about death / lifespan extension and other sacred values, where refusing to explicitly calculate means that you're not certain the marginal value of additional expenditure will be equal across all possible means for expenditure. For this particular case, it's unlikely that you will ever come across a situation where the value system "grandma first, then chickens" will disagree with "grandma is worth a really big number of chickens," and separating the two will be unlikely to have any direct meaningful impact.

But I think the use of these considerations is to develop skills that are useful in other areas. If you are engaged in tradeoffs between secular and sacred values, then honing your skills here can give you more of what you hold sacred elsewhere. I also think it's important to cultivate a mentality where a 1e-12 chance of saving grandma feels different from a 1e-6 chance of saving grandma, rather than your mind just interpreting them both as "a chance of saving grandma."

Comment author: SaidAchmiz 29 July 2013 03:54:09AM 1 point [-]

Basically, what you suggested, but generally it manifests in the other direction: instead of some really large number of chickens, it manifests as some really small chance of saving grandma.

Any chance of saving my grandmother is worth any number of chickens.

I should also make clear that I'm not trying to convince you that you value chickens, but that it makes more sense to have real-valued utilities for decision-making than multi-level utility.

Well, ok. I am not committed to a multi-level system; I was only formulating a bit of skepticism. That being said, if we are using real-valued utilities, then we're back to either assigning chickens 0 value or abandoning additive aggregation. (Or perhaps even just giving up on utilitarianism as The One Unified Moral System. There are reasons to suspect we might have to do this anyway.)

For this particular case, it's unlikely that you will ever come across a situation where the value system "grandma first, then chickens" will disagree with "grandma is worth a really big number of chickens," and separating the two will be unlikely to have any direct meaningful impact.

Perhaps. But you yourself say:

But I think the use of these considerations is to develop skills that are useful in other areas. If you are engaged in tradeoffs between secular and sacred values, then honing your skills here can give you more of what you hold sacred elsewhere.

So I don't think I ought to just say "eh, let's call grandma's worth a googolplex of chickens and call it a day".

Comment author: MTGandP 29 July 2013 03:25:00AM 0 points [-]

Suppose you're walking down the street when you see a chicken trapped under a large rock. You can save it or not. If you save it, it costs you nothing except for your time. Would you save it?

Comment author: SaidAchmiz 29 July 2013 03:46:52AM 0 points [-]

Maybe.

Realistically, it would depend on my mood, and any number of other factors.

Why?

Comment author: MTGandP 29 July 2013 04:34:47AM *  1 point [-]

If you would save the chicken, then you think its life is worth 10 seconds of your life, which means you value its life at about 1/200,000,000th of yours as a lower bound.
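
The arithmetic behind that bound, assuming roughly 63 years of remaining life (a figure chosen here to reproduce the comment's number):

```python
seconds_per_year = 365.25 * 24 * 3600  # about 31.6 million
life_seconds = 63 * seconds_per_year   # about 2.0e9 seconds

fraction = 10 / life_seconds  # ten seconds as a fraction of a lifetime
print(1 / fraction)           # about 2.0e8, i.e. roughly 1/200,000,000
```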

Comment author: SaidAchmiz 29 July 2013 05:01:42AM *  0 points [-]

In your view, how much do I think the chicken's life is worth if I would either save it or not save it, depending on factors I can't reliably predict or control? If I would save it one day, but not save it the next? If I would save a chicken now, and eat a chicken later?

I don't take such tendencies to be "revealed preferences" in any strong sense if they are not stable under reflective equilibrium. And I don't have any belief that I should save the chicken.

Edit: Removed some stuff about tendencies, because it was actually tangential to the point.

Comment author: Armok_GoB 29 July 2013 10:23:40PM 0 points [-]

My intuition here is solid to a hilariously unjustified degree on "10^20".

Comment author: jkaufman 28 July 2013 08:30:16PM *  19 points [-]

Some might be willing to bite the bullet at this point, trusting some strongly held ethical principle of theirs (e.g. A, B, C, D or E above) and following it to the conclusion of excluding humans who lack certain cognitive capacities from moral concern. One could point out that people's empathy and indirect considerations about human rights, societal stability and so on will ensure that this "loophole" in such an ethical view almost certainly remains without consequences for beings with human DNA. It is a convenient Schelling point, after all, to care about all humans (or at least all humans outside their mother's womb).

This is pretty much my view. You dismiss it as unacceptable and absurd, but I would be interested in more detail on why you think that.

a society in which some babies were (factory-)farmed would be totally fine as long as the people were okay with it

This definitely hits the absurdity heuristic, but I think it is fine. The problem with the Babyeaters in Three Worlds Collide is not that they eat their young but that "the alien children, though their bodies were tiny, had full-sized brains. They could talk. They protested as they were eaten, in the flickering internal lights that the aliens used to communicate."

If I were told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not as if I would be less afraid of the torture or care less about averting it!

I would. Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

(Speciesism has always seemed like a straw man to me. How could someone with a reductionist worldview think that species classification matters morally? The "why species membership really is an absurd criterion" section is completely reasonable, reasonable enough that I have trouble seeing non-religious arguments against it.)

Comment author: Jabberslythe 28 July 2013 08:38:54PM 4 points [-]

I would. Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

Is this because you expect the torture wouldn't be as bad if that happened or because you would care less about yourself in that state? Or a combination?

Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

What if you were killed immediately afterwards, so long term memories wouldn't come into play?

Comment author: jkaufman 28 July 2013 08:48:19PM *  2 points [-]

Is this because you expect the torture wouldn't be as bad if that happened or because you would care less about yourself in that state? Or a combination?

If I had the mental capacity of a chicken it would not be bad to torture me, both because I wouldn't matter morally and because I wouldn't be "me" anymore in any meaningful sense.

What if you were killed immediately afterwards

If you offered me the choice between:

A) 50% chance you are tortured and then released, 50% chance you are killed immediately

B) 50% chance you are tortured and then killed, 50% chance you are released immediately

I would strongly prefer B. Is that what you're asking?

Comment author: Jabberslythe 28 July 2013 09:04:24PM 0 points [-]

If I had the mental capacity of a chicken it would not be bad to torture me, both because I wouldn't matter morally and because I wouldn't be "me" anymore in any meaningful sense.

If not morally, do the two situations not seem equivalent in terms of your non-moral preference for either? In other words, would you prefer one over the other in purely self interested terms?

I would strongly prefer B. Is that what you're asking?

I was just making the point that if your only reason for thinking it would be worse for you to be tortured now was that you would suffer more overall through long-term memories, we could just stipulate that you would be killed afterwards in both situations, so long-term memories wouldn't be a factor.

Comment author: jkaufman 28 July 2013 09:09:16PM 0 points [-]

do the two situations not seem equivalent

I'm sorry, I'm confused. Which two situations?

we could just stipulate that you would be killed after in both situations so long term memories wouldn't be a factor

I see. Makes sense. I was giving long-term memory formation as an example of a way you could remove part of my self and decrease how much I objected to being tortured, but it's not the only way.

Comment author: Jabberslythe 28 July 2013 09:21:58PM *  1 point [-]

I'm sorry, I'm confused. Which two situations?

A) Being tortured as you are now

B) Having your IQ and cognitive abilities lowered then being tortured.

EDIT:

I am asking because it is useful to consider pure self-interest: it seems like a failure of a moral theory if it suggests people act against their self-interest without some compensating goodness. If I want to eat an apple but my moral theory says that I shouldn't, even though doing so wouldn't harm anyone else, that seems like a point against that moral theory.

I see. Makes sense. I was giving long-term memory formation as an example of a way you could remove part of my self and decrease how much I objected to being tortured, but it's not the only way.

Different cognitive abilities would matter in some ways for how much suffering is actually experienced, but not as much as most people think. There are also situations where lower cognitive abilities could increase the amount an animal suffers: while a chicken is being tortured, it is not really able to hope that the situation will change.

Comment author: jkaufman 29 July 2013 02:26:32AM 0 points [-]

A) Being tortured as you are now B) Having your IQ and cognitive abilities lowered then being tortured.

Strong preference for (B), having my cognitive abilities lowered to the point that there's no longer anyone there to experience the torture.

Comment author: Xodarap 28 July 2013 08:48:03PM *  5 points [-]

I wasn't able to glean this from your other article either, so I apologize if you've said it before: do you think non-human animals don't suffer? Or do you believe they suffer, but you just don't care about their suffering?

(And in either case, why?)

Comment author: jkaufman 28 July 2013 08:52:51PM *  3 points [-]

I think suffering is qualitatively different when it's accompanied by some combination -- which I don't fully understand -- of intelligence, self-awareness, preferences, etc. So yes, humans are not the only animals that can suffer, but they're the only animals whose suffering is morally relevant.

Comment author: Lukas_Gloor 28 July 2013 09:12:15PM 14 points [-]

How certain are you that there is such a qualitative difference, and that you want to care about it? If there is some empirical (or perhaps also normative) uncertainty, shouldn't you at least attribute some amount of concern to sentient beings that lack self-awareness?

Comment author: Xodarap 28 July 2013 09:36:36PM 1 point [-]

It strikes me that the only "disagreement" you have with the OP is that your reasoning isn't completely spelled out.

If you said, for example, "I don't believe pigs' suffering matters as much because they don't show long-term behavior modifications as a result of painful stimuli" that wouldn't be a speciesist remark. (It might be factually wrong, though.)

Comment author: davidpearce 28 July 2013 11:15:59PM *  17 points [-]

jkaufman, the dimmer-switch metaphor of consciousness is intuitively appealing. But consider some of the most intense experiences that humans can undergo, e.g. orgasm, raw agony, or blind panic. Such intense experiences are characterised by a breakdown of any capacity for abstract rational thought or reflective self-awareness. Neuroscanning evidence, too, suggests that much of our higher brain function effectively shuts down during the experience of panic or orgasm. Contrast this intensity of feeling with the subtle and rarefied phenomenology involved in e.g. language production, solving mathematical equations, introspecting one's thought-episodes, etc -- all those cognitive capacities that make mature members of our species distinctively human. For sure, this evidence is suggestive, not conclusive. But the supportive evidence converges with e.g. microelectrode studies using awake human subjects. Such studies suggest the limbic brain structures that generate our most intense experiences are evolutionarily very ancient. Also, the same genes, same neurotransmitter pathways and same responses to noxious stimuli are found in our fellow vertebrates. In view of how humans treat nonhumans, I think we ought to be worried that humans could be catastrophically mistaken about nonhuman animal sentience.

Comment author: Kawoomba 28 July 2013 11:21:01PM 0 points [-]

"Accompanied" can also mean "reflected upon after the fact".

I agree with your last sentence though.

Comment author: Emile 30 July 2013 09:10:22PM 0 points [-]

So yes, humans are not the only animals that can suffer, but they're the only animals whose suffering.

There's something missing at the end, like "... is morally relevant", right?

Comment author: Lukas_Gloor 28 July 2013 08:50:09PM *  12 points [-]

Your view seems consistent. All I can say is that I don't understand why intelligence is relevant for whether you care about suffering. (I'm assuming that you think human infants can suffer, or at least don't rule it out completely, otherwise we would only have an empirical disagreement.)

I would. Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

Me too. But we can control for memories by comparing the scenario I outlined with a scenario where you are first tortured (in your normal mental state) and then have the memory erased.

Speciesism has always seemed like a straw-man to me. How could someone with a reductionist worldview think that species classification matters morally?

You're right, it's not a big deal once you point it out. The interesting thing is that even a lot of secular people will at first (and sometimes even afterwards) bring arguments against the view that animals matter that don't stand the test of the argument from species overlap. It seems like they simply aren't thinking through all the implications of what they are saying, as if it isn't their true rejection. Having said that, there is always the option of biting the bullet, but many people who argue against caring about nonhumans don't actually want to do that.

Comment author: jkaufman 28 July 2013 09:12:04PM 5 points [-]

I'm assuming that you think human infants can suffer

I definitely think human infants can suffer, but I think their suffering is different from that of adult humans in an important way. See my response to Xodarap.

Comment author: atucker 29 July 2013 03:41:50AM 3 points [-]

All I can say is that I don't understand why intelligence is relevant for whether you care about suffering.

Intelligence is relevant for the extent to which I expect alleviating suffering to have secondary positive effects. Since I expect most of the value of suffering alleviation to come through secondary effects on the far future, I care much more about human suffering than animal suffering.

As far as I can tell, animal suffering and human suffering are comparably important from a utility-function standpoint, but the difference in EV between alleviating human and animal suffering is huge -- the difference in potential impact on the future between a suffering human vs a non-suffering human is massive compared to that between a suffering animal and a non-suffering animal.

Basically, it seems like alleviating one human's suffering has more potential to help the far future than alleviating one animal's suffering. A human who might be too incapacitated to, say, deal with x-risk might become helpful, while an animal is still not going to be consequential on that front.

So my opinion winds up being something like "We should help the animals, but not now, or even soon, because other issues are more important and more pressing".

Comment author: threewestwinds 30 July 2013 01:51:04AM 1 point [-]

I agree with this point entirely - but at the same time, becoming vegetarian is such a cheap change in lifestyle (given an industrialized society) that you can have your cake and eat it too. Action - such as devoting time / money to animal rights groups - has to be balanced against other action - helping humans - but that doesn't apply very strongly to inaction - not eating meat.

You can come up with costs - social, personal, etc. - to being vegetarian, but remember to weigh those costs on the right scale. And most of those costs disappear if you merely reduce meat consumption, rather than eliminate it outright.

Comment author: Jiro 30 July 2013 03:17:36AM *  1 point [-]

You can come up with costs - social, personal, etc. - to being vegetarian, but remember to weigh those costs on the right scale.

By saying this, you're trying to gloss over the very reason why becoming vegetarian is not a cheap change. Human beings are wired so as not to be able to ignore having to make many minor decisions or face many minor changes, and the fact that such things cannot be ignored means that being vegetarian actually has a high cost: being mentally nickel-and-dimed over and over again. It's a cheap change in the sense that you can do it without paying lots of money or spending lots of time, but that isn't sufficient to make the choice cheap in all meaningful senses.

Or to put it another way, being a vegetarian "just to try it" is like running a shareware program that pops up a nag screen every five minutes and occasionally forces you to type a random phrase in order to continue to run. Sure, it's light on your pocketbook, doesn't take much time, and reading the nag screens and typing the phrases isn't difficult, but that's beside the point.

Comment author: Estarlio 29 July 2013 12:14:03AM 2 points [-]

How do you avoid it being kosher to kill you when you're asleep - and thus unable to perform at your usual level of consciousness - if you don't endorse some version of the potential principle?

If you were to sleep and never wake, then it wouldn't necessarily seem wrong, even from my perspective, to kill you. It seems like it's your potential for waking up that makes it wrong.

Comment author: jkaufman 29 July 2013 02:24:08AM *  5 points [-]

Killing me when I'm asleep is wrong for the same reason as killing me instantly and painlessly when I'm awake is wrong. Both ways I don't get to continue living this life that I enjoy.

(I'm not as anti-death as some people here.)

Comment author: Estarlio 29 July 2013 11:47:54AM -1 points [-]

So, presumably, if you were destined for a life of horrifying squicky pain some time in the next couple of weeks, you'd approve of me just killing you. I mean, ideally you'd probably like to be killed as close to the point of HSP as possible, but still, the future seems pretty important when determining whether you want to persist - it's even in the text you linked:

A death is bad because of the effect it has on those that remain and because it removes the possibility for future joy on the part of the deceased.

So, bearing in mind that you don't always seem to be performing at your normal level of thought - e.g. when you're asleep - how do you bind that principle so that it applies to you and not infants?

Comment author: jkaufman 29 July 2013 12:10:37PM 0 points [-]

I don't think you should kill infants either, again for the "effect it has on those that remain and because it removes the possibility for future joy on the part of the deceased" logic.

Comment author: Estarlio 29 July 2013 12:29:35PM *  -1 points [-]

How do you reconcile that with:

a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it

This definitely hits the absurdity heuristic, but I think it is fine. The problem with the Babyeaters in Three Worlds Collide is not that they eat their young but that "the alien children, though their bodies were tiny, had full-sized brains. They could talk. They protested as they were eaten, in the flickering internal lights that the aliens used to communicate."

Comment author: jkaufman 29 July 2013 12:42:41PM *  1 point [-]

The "as long as the people are ok with it" deals with the "effect it has on those that remain". The "removes the possibility for future joy on the part of the deceased" remains, but depending on what benefits the society was getting out of consuming their young it might still come out ahead. The future experiences of the babies are one consideration, but not the only one.

Comment author: Estarlio 29 July 2013 01:44:50PM *  -1 points [-]

Granted, but do you really think that they're going to be so incredibly tasty that the value people gain from eating babies over not eating babies outweighs the loss of all the future experiences of the babies?

To link that back to the marginal cases argument, which I believe - correct me if I'm wrong - you were responding to: Do you think that meat diets are so much tastier than vegetarian diets that the utility gained for human society outweighs the suffering and death of the animals? (Which may not be the only consideration, but I think at this point - may be wrong - you'd admit isn't nothing.) If so, have you made an honest attempt to test this assumption for yourself by, for instance, getting a bunch of highly rated veg recipes and trying to be vegetarian for a month or so?

Comment author: jkaufman 29 July 2013 02:08:52PM 2 points [-]

that the value people gain from eating babies over not eating babies outweighs the loss of all the future experiences of the babies?

The value a society might get from it isn't limited to taste. They could have some sort of complex and fulfilling system set up around it. But I think you're right that any world I can think of where people are eating (some of) their babies would be improved if they stopped doing that.

that the utility gained for human society outweighs the suffering and death of the animals?

The "loss of all the future experiences of the babies" bit doesn't apply here. Animals stay creatures without moral worth through their whole lives, and so the "suffering and death of the animals" here has no moral value.

Comment author: Estarlio 29 July 2013 03:27:43PM *  -1 points [-]

The "loss of all the future experiences of the babies" bit doesn't apply here. Animals stay creatures without moral worth through their whole lives, and so the "suffering and death of the animals" here has no moral value.

Pigs can meaningfully play computer games. Dolphins can communicate with people. Wolves have complex social structures and hunting patterns. I take all of these to be evidence of intelligence beyond the battery-farmed infant level. They're not as smart as humans, but it's not like they've got 0 potential for developing intelligence. Since birth doesn't seem to give you a clear dividing line in this regard - what's your criterion for being smart enough to be morally considerable, and why?

Comment author: MugaSofer 29 July 2013 04:08:40PM 0 points [-]

Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

Those are not the same thing. They're not even remotely similar beyond both involving brain surgery.

Speciesism has always seemed like a straw-man to me.

Me too, but I never could persuade the people arguing for it of this fact :(

Comment author: jkaufman 29 July 2013 05:05:50PM *  1 point [-]

Those are not the same thing.

Agreed.

They're not even remotely similar beyond both involving brain surgery.

I was attempting to give an example of other ways in which I might find torture more palatable if I were modified first.

I never could persuade the people arguing for it ...

Right, which is why this argument isn't actually a straw-man and why ice9's post is useful.

Comment author: MugaSofer 29 July 2013 10:57:59PM -1 points [-]

Those are not the same thing.

Agreed.

They're not even remotely similar beyond both involving brain surgery.

I was attempting to give an example of other ways in which I might find torture more palatable if I were modified first.

Ah, OK.

Right, which is why this argument isn't actually a straw-man and why ice9's post is useful.

Hah, yes. Sorry, I thought you were complaining it was actually a strawman :/ Whoops.

Comment author: Ruairi 28 July 2013 08:36:13PM *  14 points [-]

"If all nonhumans truly weren't sentient, then obviously singling out humans for the sphere of moral concern would not be speciesist."

David Pearce sums up antispeciesism excellently saying:

"The antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect."

Comment author: CarlShulman 29 July 2013 10:33:29AM 7 points [-]

sums up antispeciesism excellently saying: "The antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect."

If one takes "other things being equal" very seriously that could be quite vacuous, since there are so many differences in other areas, e.g. impact on society and flow-through effects, responsiveness of behavior to expected treatment, reciprocity, past agreements, social connectedness, preferences, objective list welfare, even species itself...

The substance of the claim has to be about exactly which things need to be held equal, and which can freely vary without affecting desert.

Comment author: SaidAchmiz 28 July 2013 08:46:13PM *  3 points [-]

I've read the first part of the post ("What is Speciesism?"), and have a question.

Does your argument have any answer to applying modus tollens to the argument from marginal cases?

In other words, if I say: "Actually, I think it's ok to kill/torture human newborns/infants; I don't consider them to be morally relevant[1]" (likewise severely mentally disabled adults, likewise (some? most?) other marginal cases) — do you still expect your argument to sway me in any way? Or no?

[1] Note that I can still be in favor of laws that prohibit infanticide, for game-theoretic reasons. (For instance, because birth makes a good bright line, as does the species divide. The details of effective policy and optimal criminal justice laws are an interesting conversation to have but not of any great relevance to the moral debate.)

Edit: Having now read the rest of your post, I see that you... sort-of address this point. But to be honest, I don't think you take the opposing position very seriously; I get the sense that you've constructed arguments that you think someone on the opposite side would make, if they held exactly your views in everything except, inexplicably, this one area, and these arguments you then knock down. In short, while I am very much in favor of having this discussion and think that this post is a good idea... I don't think your argument passes the ideological Turing test. I would have preferred for you to, at least, directly address the challenges in this post.

Comment author: Xodarap 28 July 2013 08:54:04PM 2 points [-]

I think it's ok to kill human newborns/infants

I think the relevant response would be torturing human infants, and other marginal cases.

Comment author: SaidAchmiz 28 July 2013 09:01:20PM 0 points [-]

Yep, fair enough. I've changed my post to include this.

Comment author: Lukas_Gloor 28 July 2013 08:59:19PM *  4 points [-]

No, this is indeed a common feature of coherentist reasoning: you can make it go both ways. I cannot logically show that you are making a mistake here. I may however appeal to shared intuitions or bring further arguments that could encourage you to reflect on your views.

And note that I was silent on the topic of killing; the point I made later in the article was only focused on caring about suffering. And there I think I can make a strong case that suffering is bad independently of where it happens.

Comment author: SaidAchmiz 28 July 2013 09:00:41PM 0 points [-]

And here I think I can make a strong case that suffering is bad independently of where it happens.

I would very much like to see that case made!

Comment author: Lukas_Gloor 28 July 2013 09:17:12PM 3 points [-]

It's in the article. If you're not impressed by it then I'm indeed out of arguments.

Furthermore, while D and E seem plausible candidates for reasons against killing a being with these properties (E is in fact Peter Singer's view on the matter), none of the criteria from A to E seem relevant to suffering, to whether a being can be harmed or benefitted. The case for these being bottom-up morally relevant criteria for the relevance of suffering (or happiness) is very weak, to say the least.

Maybe that's the speciesist's central confusion, that the rationality/sapience of a being is somehow relevant for whether its suffering matters morally. Clearly, for us ourselves, this does not seem to be the case. If I was told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not like I will be less afraid of the torture or care less about averting it!

There's also a hyperlink in the first paragraph referring to section 6 of the linked paper.

Comment author: SaidAchmiz 28 July 2013 09:26:55PM 0 points [-]

Ok. Yeah, I don't find any of those to be strong arguments. Again, I would like to urge you to consider and address the points brought up in this post.

Comment author: Lukas_Gloor 28 July 2013 09:42:17PM *  8 points [-]

I don't think your argument passes the ideological Turing test. I would have preferred for you to, at least, directly address the challenges in this post.

The post you link to makes five points.

1) and 2) don't concern the arguments I'm making because I left out empirical issues on purpose.

3) is likewise an empirical issue, and one that can be applied to some humans as well.

4) is the most interesting one.

Something About Sapience Is What Makes Suffering Bad

I sort of addressed this here. I must say I'm not very familiar with this position so I might be bad at steelmanning it, but so far I simply don't see why intelligence has anything to do with the badness of suffering.

As for 5), this is certainly a valid thing to point out when people are estimating whether a given being is sentient or not. Regarding the normative part of this argument: If there were cute robots that I had empathy for but was sure weren't sentient, I genuinely wouldn't argue for giving them moral consideration.

Comment author: [deleted] 29 July 2013 01:59:01PM 4 points [-]

bright line

Huh, a mainstream term for what LWers call a Schelling fence!

Comment author: MugaSofer 29 July 2013 04:04:43PM 0 points [-]

In other words, if I say: "Actually, I think it's ok to kill/torture human newborns/infants; I don't consider them to be morally relevant[1]" (likewise severely mentally disabled adults, likewise (some? most?) other marginal cases) — do you still expect your argument to sway me in any way? Or no?

No, that would be when we fetch the pitchforks.

[1] Note that I can still be in favor of laws that prohibit infanticide, for game-theoretic reasons. (For instance, because birth makes a good bright line, as does the species divide. The details of effective policy and optimal criminal justice laws are an interesting conversation to have but not of any great relevance to the moral debate.)

The only time I heard such an argument, it wasn't their true rejection, and they invented several other such false rejections during the course of our discussion. So that would be my response on hearing someone actually make the unaddressed argument you outlined.

Do such game-theoretic reasons actually hold together, by the way? It seems unlikely, unless you suddenly start caring about children somewhere during their first few years.

Comment author: SaidAchmiz 29 July 2013 04:40:29PM 1 point [-]

The only time I heard such an argument, it wasn't their true rejection, and they invented several other such false rejections during the course of our discussion. So that would be my response on hearing someone actually make the unaddressed argument you outlined.

Do such game-theoretic reasons actually hold together, by the way? It seems unlikely, unless you suddenly start caring about children somewhere during their first few years.

No, this is definitely my true rejection. To expand a bit, take the infanticide case as an example: I think infanticide should be illegal, but I don't think it should be considered murder or anything close to it, nor punished nearly as severely.

Basically, there's no "real" line between sapience and non-sapience, and humans, in the course of their development, start out as cognitively inert matter and end up as sapient beings. But since we don't think evaluating in every single case is feasible, or reliable in the "border region" cases, or likely to lead to consistently (morally) good outcomes in practice (due to assorted cognitive and institutional limitations), we want to draw the line way back in the development process, where we're sure there's no sapience and killing the developing human is morally ok. Where specifically? Well, since this is a pragmatic and not a moral consideration, there is no unique morally ordained line placement, but there is a natural "bright line": birth. Birth is more or less in the desired region of time, so that's where we draw it.

Now, since we drew the line for pragmatic reasons, we are perfectly aware that the person who commits infanticide has not really done anything morally wrong. But on the other hand, we want to discourage people from redrawing the line on an individual basis, from "taking line placement into their own hands", so to speak, because then we're back to the "evaluating in every case is not a good idea" issue. But on the third hand, such discouragement should not take the form of putting the poor person in jail for murder! The problem is not that important; the well-being and happiness of an adult human for a large chunk of their life is worth more than the (nonzero, but small) chance that line degradation will lead to bad outcomes! Make it a lesser offense, and you've more or less got the best of both worlds. (Equivalent to assault, perhaps? I don't know, this is a practical question, and best settled with the help of experts in criminal justice and public policy.)

Comment author: Morendil 28 July 2013 09:34:24PM 3 points [-]

What properties do human beings possess that makes us think that it is wrong to torture them?

Does it have to be the case that "the properties that X possesses" is the only relevant input? It seems to me that the properties possessed by the would-be torturer or killer are also relevant.

For instance, if I came across a kid torturing a mouse (even a fly) I would be horrified, but I would respond differently to a cat torturing a mouse (or a fly).

Comment author: Lukas_Gloor 28 July 2013 09:49:26PM *  3 points [-]

What if it is done by a baby, or by a kid with mental impairments such that she cannot follow moral/social norms? I see no reason to treat the situation differently in such a case. (Except that one might want to talk to the parents of the kid in order to have them consider a psychological check-up for their child.)

Comment author: DanArmak 28 July 2013 10:32:23PM 1 point [-]

I see no reason to treat the situation differently in such a case.

Differently from a normal kid, or differently from a cat? (I share Morendil's moral intuitions regarding his example.)

Comment author: Lukas_Gloor 28 July 2013 10:55:19PM 6 points [-]

From the cat. I would in fact press a magic button that turns all carnivores into vegans. The cat (or the kid) doesn't know what it is doing and cannot be meaningfully blamed, but I still consider this to be a harmful action and I would want to prevent it. Who commits the act makes no difference to me (or only for indirect reasons).

Comment author: Xodarap 28 July 2013 10:17:28PM 0 points [-]

It seems to me that the properties possessed by the would-be torturer or killer are also relevant.

Why?

It seems to me like the only (consequentialist) justification is that they will then go on to torture others who have the ability to feel pain, and so it's still only the victims' properties which are relevant.

Comment author: Morendil 29 July 2013 08:47:11PM 0 points [-]

The more I perceive the torturer to be "like me", the more seeing the act undermines my confidence in my own moral intuitions - through my sense of a shared identity.

The fly case is particularly puzzling, as I regard flies as not morally relevant.

Comment author: Nornagest 29 July 2013 09:06:16PM 1 point [-]

I'd regard a kid pulling wings off a fly as worrying not because I particularly care about flies, but more because it indicates a propensity to do similar things to morally relevant agents. Not much chance of that becoming a problem for a cat.

Comment author: Manfred 28 July 2013 09:34:35PM *  4 points [-]

Some may be tempted to think about the concept of "species" as if it were a fundamental concept, a Platonic form.

The biggest improvement I would like to see to this post is engagement with opposing arguments more realistic than "humans are a platonic form." Currently you just knock down a very weak argument or two and then rush to a conclusion.

EDIT: whoops, I missed the point, which is to only argue against speciesism. My bad. Edited out a misplaced "argument from future potential," which is what Jabberslythe replied to.

However, you really do only knock down weak arguments. What if we simply define categories more robustly than "platonic forms," as philosophers have done just fine since at least Wittgenstein, and as is covered on this very blog? Then there's no point in talking about platonic forms.

For the argument from "one will be human and the next will be not", how do you deal with the unreliability of the sorites paradox as a philosophical test? Or what if we use the more general continuous model of speciesism, thus eliminating sharp lines? You don't just have to avoid deliberately strawmanning, you have to actively steelman :)

Comment author: Jabberslythe 28 July 2013 09:48:58PM *  0 points [-]

Those two babies differ in that they have different futures, so it would not be wrong to treat them differently such that suffering is minimized (and you should). And it would not be speciesist to do so, because there is that difference.

Comment author: Xodarap 28 July 2013 09:59:17PM *  0 points [-]

I think the relevant point is the part about racism, sexism etc. If we allow moral value to depend on things other than the beings' relevant attributes, then sure, we can be speciesist. But we can also be racist, sexist, ...

Comment author: Lukas_Gloor 28 July 2013 10:22:00PM 3 points [-]

The section you quote from is quite obvious and I could probably have cut it down to a minimum given that this is LW. You make a good point: one could for instance have a utility function that includes a gradual continuum downwards in evolutionary relatedness or relevant capabilities and so on. This would be consistent and not speciesist. But there would be infinitely many ways of defining how steeply moral relevance declines, or whether the decline is linear or not. I guess I could argue "if you're going for that amount of arbitrariness anyway, why even bother?" The function would not just depend on outward criteria like the capacity for suffering, but also on personal reasons for our judgments, which is very similar to what I have summarized under H.

Comment author: Vaniver 28 July 2013 09:49:11PM *  12 points [-]

None of the above criteria except (in some empirical cases) H imply that human infants or late stage demented people should be given more ethical consideration than cows, pigs or chickens.

This strikes me as a very impatient assessment. The human infant will turn into a human, and the piglet will turn into a pig, and so down the road A through E will suggest treating them differently.

Similarly, the demented can be given the reverse treatment (though it works differently); they once deserved moral standing, and thus are extended moral standing because the extender can expect that when their time comes, they will be treated by society in about the same way as society treated its elders when they were young. (This mostly falls under B, except the reciprocation is not direct.)

(Looking at the comments, Manfred makes a similar argument more vividly over here.)

Comment author: Lukas_Gloor 28 July 2013 09:56:28PM *  10 points [-]

If we use cognitive enhancements on animals, we can turn them into highly intelligent, self-aware beings as well. And the argument from potentiality would also prohibit abortion or experimentation on embryos. I was thinking about including the argument from potentiality, but then I didn't because the post is already long and because I didn't want to make it look like I was just "knocking down a very weak argument or two". I should have used a qualifier though in the sentence you quoted, to leave room for things I hadn't considered.

Comment author: Vaniver 29 July 2013 01:59:39AM 7 points [-]

If we use cognitive enhancements on animals, we can turn them into highly intelligent, self-aware beings as well.

And then arguments A through E will not argue for treating the enhanced animals differently from humans.

And the argument from potentiality would also prohibit abortion or experimentation on embryos.

It would make the difference between abortion and infanticide small. It does seem to me that the arguments for allowing abortion but not allowing infanticide are weak and the most convincing one hinges on legal convenience.

I was thinking about including the argument from potentiality, but then I didn't because the post is already long and because I didn't want to make it look like I was just "knocking down a very weak argument or two".

I think this is a hazard for any "Arguments against X" post; the reason X is controversial is generally because there are many arguments on both sides, and an argument that seems strong to one person seems weak to another.

Comment author: threewestwinds 30 July 2013 01:29:37AM *  1 point [-]

What level of "potential" is required here? A human baby has a certain amount of potential to reach whatever threshold you're comparing it against - if it's fed, kept warm, not killed, etc. A pig also has a certain level of potential - if we tweak its genetics.

If we develop AI, then any given pile of sand has just as much potential to reach "human level" as an infant. I would be amused if improved engineering knowledge gave beaches moral weight (though not completely opposed to the idea).

Your proposed category - "can develop to contain morally relevant quantity X" - tends to fail along the same edge cases as whatever morally relevant quality it's replacing.

Comment author: Vaniver 30 July 2013 01:57:14AM 1 point [-]

What level of "potential" is required here? A human baby has a certain amount of potential to reach whatever threshold you're comparing it against - if it's fed, kept warm, not killed, etc. A pig also has a certain level of potential - if we tweak its genetics.

I have given a gradualist answer to every question related to this topic, and unsurprisingly I will not veer from that here. The value of the potential varies inversely with the difficulty involved in realizing that potential, just as the value of oil in the ground depends on what lies between you and it.

Comment author: Xodarap 28 July 2013 10:02:00PM 2 points [-]

Is it possible to create some rule like this? Yeah, sure.

The problem is that you have to explain why that rule is valid.

If two babies are being tortured and one will die tomorrow but the other grows into an adult, your rule would claim that we should only stop one torture, and it's not clear why since their phenomenal pain is identical.

Comment author: DanArmak 28 July 2013 10:30:44PM 1 point [-]

Some people value the future-potential of things and even give them moral value in cases when the present-time precursor or cause clearly has no moral status of its own. This corresponds to many people's moral intuitions, and so they don't need to explain why this is valid.

Comment author: Xodarap 28 July 2013 11:37:15PM 0 points [-]

This corresponds to many people's moral intuitions, and so they don't need to explain why this is valid.

If you believe the sole justification for a moral proposition is that you think it's intuitively correct, then no one is ever wrong, and these types of articles are rather pointless, no?

Comment author: DanArmak 29 July 2013 10:42:45AM 1 point [-]

I'm a moral anti-realist. I don't think there's a "true objective" ethics out there written into the fabric of the Universe for us to discover.

That doesn't mean there is no such thing as morals, or that debating them is pointless. Morals are part of what we are and we perceive them as moral intuitions. Because we (humans) are very similar to one another, our moral intuitions are also fairly similar, and so it makes sense to discuss morals, because we can influence one another, change our minds, better understand each other, and come to agreement or trade values.

Nobody is ever "right" or "wrong" about morals. You can only be right or wrong about questions of fact, and the only factual, empirical thing about morals is what moral intuitions some particular person has at a point in time.

Comment author: Vaniver 29 July 2013 12:06:00AM 3 points [-]

The problem is that you have to explain why that rule is valid.

It comes from valuing future world trajectories, rather than just valuing the present. I see a small difference between killing a fetus before delivery and an infant after delivery, and the difference I see is roughly proportional to the amount of time between the two (and the probability that the fetus will survive to become the infant).

These sorts of gradual rules seem to me far more defensible than sharp gradations, because the sharpness in the rule rarely corresponds to a sharpness in reality.

Comment author: MugaSofer 29 July 2013 03:57:38PM 4 points [-]

What about a similar gradual rule for varying sentience levels of animal?

Comment author: Vaniver 29 July 2013 08:40:01PM 1 point [-]

What about a similar gradual rule for varying sentience levels of animal?

A quantitative measure of sentience seems much more reasonable than a binary measure. I'm not a biologist, though, and so don't have a good sense of how sharp the gradations of sentience in animals are; I would naively expect basically every level of sentience from 'doesn't have a central nervous system' to 'beyond humans' to be possible, but don't know if there are bands that aren't occupied for various practical reasons.

Comment author: Xodarap 30 July 2013 12:08:52PM 0 points [-]

I don't think anyone is advocating a binary system. No one is supporting voting rights for pigs, for example.

Comment author: [deleted] 29 July 2013 02:18:14PM -2 points [-]

If Alice bets $10,000 against $1 on heads and Bob bets $10,000 against $1 on tails, they're both idiots, even though only one of them will lose.
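
(To spell out the expected-value arithmetic behind the analogy, assuming a fair coin and the stakes given above, each bet has

    EV = 0.5 * (+$1) + 0.5 * (-$10,000) = -$4,999.50

so both bets are negative-EV at the moment they are made, whichever way the coin lands - which is the sense in which both bettors are idiots even though only one of them will lose.)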

Comment author: MugaSofer 29 July 2013 03:58:30PM -1 points [-]

If we can only stop one, sure. If we could stop both, why not do so?

Comment author: davidpearce 29 July 2013 10:18:42AM 11 points [-]

Vaniver, do human infants and toddlers deserve moral consideration primarily on account of their potential to become rational adult humans? Or are they valuable in themselves? Young human children with genetic disorders are given love, care and respect - even if the nature of their illness means they will never live to see their third birthday. We don't hold their lack of "potential" against them. Likewise, pigs are never going to acquire generative syntax or do calculus. But their lack of cognitive sophistication doesn't make them any less sentient.

Comment author: Vaniver 29 July 2013 10:51:12AM 3 points [-]

Vaniver, do human infants and toddlers deserve moral consideration primarily on account of their potential to become rational adult humans? Or are they valuable in themselves?

My intuitions say the former. I would not be averse to a quick end for young human children who are not going to live to see their third birthday.

But their lack of cognitive sophistication doesn't make them any less sentient.

Agreed, mostly. (I think it might be meaningful to refer to syntax or math as 'senses' in the context of subjective experience and I suspect that abstract reasoning and subjective sensation of all emotions, including pain, are negatively correlated. The first weakly points towards valuing their experience less, but the second strongly points towards valuing their experience more.)

Comment author: davidpearce 29 July 2013 11:56:58AM 7 points [-]

Vaniver, you say that you wouldn't be averse to a quick end for young human children who are not going to live to see their third birthday. What about intellectually handicapped children with potentially normal lifespans whose cognitive capacities will never surpass those of a typical human toddler or mature pig?

Comment author: Vaniver 29 July 2013 08:37:28PM 1 point [-]

What about intellectually handicapped children with potentially normal lifespans whose cognitive capacities will never surpass those of a typical human toddler or mature pig?

I'm not sure what this would look like, actually. The first thing that comes to mind is Down's Syndrome, but the impression I get is that that's a much smaller reduction in cognitive capacity than the one you're describing. The last time I considered that issue, I favored abortion in the presence of a positive amniocentesis test for Down's, and I suspect that the more extreme the reduction, the easier it would be to come to that decision.

I hope you don't mind that this answers a different question than the one you asked- I think there are significant (practical, if not also moral) differences between gamete selection, embryo selection, abortion, infanticide, and execution of adults (sorted from easiest to justify to most difficult to justify). I don't think execution of cognitively impaired adults would be justifiable in the presence of modern American economic constraints on grounds other than danger posed to others.

Comment author: MugaSofer 29 July 2013 03:55:59PM 0 points [-]

Speaking as a vegetarian for ethical reasons ... yes. That's not to say they don't deserve some moral consideration based on raw brainpower/sentience and even a degree of sentimentality, of course, but still.

Comment author: DxE 29 July 2013 09:09:14PM 4 points [-]

My sperm has the potential to become human. When I realized almost all of them were dying because of my continued existence, I decided that I would have to kill myself. It was the only rational thing to do.

Comment author: Vaniver 29 July 2013 11:58:58PM 1 point [-]

My sperm has the potential to become human.

It seems to me there is a significant difference between requiring an oocyte to become a person and requiring sustenance to become a person. I think about half of zygotes survive the pregnancy process, but almost all sperm don't turn into people.

Comment author: Lukas_Gloor 30 July 2013 12:11:28AM 4 points [-]

Would this difference disappear if we developed the technology to turn millions of sperm cells into babies?

Comment author: Vaniver 30 July 2013 01:48:13AM 0 points [-]

Would this difference disappear if we developed the technology to turn millions of sperm cells into babies?

Probably, but in such a world, I don't think human life would be scarce, and I think that the value of human life would plummet accordingly. They would still represent a significant time and capital investment, and so be more valuable than in the em case, but I think that people would be seen as much more replaceable.

It is possible that human reproduction is horrible by many moral standards which seem reasonable. I think it's more convenient to jettison those moral standards than reshape reproduction, but one could imagine a world where people were castrated / had oophorectomies to prevent gamete production, with reproduction done digitally from sequenced genomes. It does not seem obviously worse than our world, except that it seems like a lot of work for minimal benefit.

Comment author: katydee 28 July 2013 10:02:46PM 12 points [-]

I would prefer to see posts like this in the Discussion section.

Comment author: Lukas_Gloor 28 July 2013 10:04:43PM 3 points [-]

May I ask why?

Comment author: katydee 28 July 2013 10:35:24PM 5 points [-]

I think Main should be for posts that directly pertain to rationality. This post doesn't seem to do that.

That said, my standards for what belongs in main seem somewhat different from those of other users. For instance I think "The Robots, AI, and Unemployment Anti-FAQ" belongs in Discussion as well, and that post is not only in Main but promoted to boot.

Comment author: SaidAchmiz 28 July 2013 10:52:07PM 2 points [-]

Upvoted for the "directly pertain to rationality" rule of thumb; I agree with that. That said, I thought that the Anti-FAQ was appropriate for Main.

Comment author: Larks 29 July 2013 12:49:16PM 2 points [-]

The anti-FAQ was of much higher quality.

Comment author: Lukas_Gloor 29 July 2013 03:24:10PM *  9 points [-]

Since grandparent received so many upvotes, I'm going to explain my reasoning for posting in Main:

Rules of thumb:

Your post discusses core Less Wrong topics.

The material in your post seems especially important or useful.

[...]

(At least one of) LW's primary goal(s) is to get people thinking about far future scenarios to improve the world. LW is about rationality, but it is also about ethics. Whether anti-speciesism is especially important or useful is something that people have different opinions on, but the question itself is clearly important because it may lead to different/adjusted prioritizing in practice.

Comment author: katydee 29 July 2013 05:51:32PM 0 points [-]

I disagree with the FAQ in that respect (among others-- see for instance my thoughts on the use of the term "tapping out"). My preference is that people only post to Main if their post discusses core Less Wrong topics, and maybe not even then.

Comment author: CarlShulman 28 July 2013 10:14:23PM *  28 points [-]

I agree that species membership as such is irrelevant, although it is in practice an extremely powerful summary piece of information about a creature's capabilities, psychology, relationship with moral agents, ability to contribute to society, responsiveness in productivity to expected future conditions, etc.

Animal happiness is good, and animal pain is bad. However, the word anti-speciesism, and some of your discussion, suggests treating experience as binary and ignoring quantitative differences, e.g. here:

Such a view leaves room for probabilistic discounting in cases where we are empirically uncertain whether beings are capable of suffering, but we should be on the lookout for biases in our assessments.

This leaves out the idea of the quantity of experience. In human split-brain patients the hemispheres can experience and act quite independently, without common knowledge or communication. Unless you think that the quantity of happiness or suffering doubles when the corpus callosum is cut, happiness and pain must be able to occur in substructures of brains, not just whole brains. And if intensive communication and coordination were enough to diminish moral value, why does this not apply to social groups like firms, herds, flocks, hives and the like?

Animals vary enormously in the number of neurons and substructures, including ones engaged in reinforcement learning responsive to pleasure and pain. For example, a fly's brain contains 100,000 neurons, whereas a human's contains about a million times as many. Here are brain masses for some animals:

  • Adult elephants at around 5000 g
  • Adult humans at 1300-1400 g
  • Chimpanzees at about 420 g, about a 3:1 ratio with humans, with the ratio for cortex neurons around 3:1 to 4:1
  • Cows at 425-458 g, about a 3:1 ratio; if their cortex neuron counts resemble horses' that would be closer to 8:1
  • Pigs at 180 g, a ratio of 7.5:1
  • Domestic cats at 25-30 g, ~50:1, with the cortex ratio somewhat bigger
  • Pekin ducks at 6.3 g, a 214:1 ratio
  • Owls at 2.2 g, around 600:1, and European quail at 0.9 g, about 1500:1
  • Goldfish at 0.097 g, just under a 14,000:1 ratio

Particularly for birds, fish, and insects one sees extremely large ratios. If, as is quite plausible in light of the decentralized operations of brains (stunningly demonstrated in split-brain patients, but also a routine feature of information processing in nervous systems), smaller subsystems can experience pleasure and pain, then animals with large nervous systems may be orders of magnitude more important than one would otherwise think. Importantly, this is not a consideration lowering the expected experience of animals with small nervous systems, but one increasing the expected experience of animals with large nervous systems, so it does not need to be held with very high confidence to much affect behavior: "what if small neural systems suffer and delight?" is analogous to "what if snails suffer and delight?".

Would you say that making such adjustments is speciesist? For example, wikipedia gives the world chicken population as 24 billion, mostly kept in horrible conditions, and the world cow population as 1.3 billion. If one ignores nervous system scale, the welfare of the chickens dominates in importance, but if one thinks that quantity of experience scales, then the aggregate welfare of the cows looms larger. Is it speciesist to prioritize cows over chickens or fish on this basis?
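
A minimal sketch of that chickens-vs-cows comparison, in Python. The populations and the cow ratio are the ones quoted above; no chicken brain mass is given, so the Pekin duck's 214:1 ratio stands in for it, and the linear-scaling rule and the probability mixture are illustrative assumptions, not anyone's considered estimate:

    # Sketch: aggregate expected welfare weight (in human-equivalents) of
    # the world chicken and cow populations, under uncertainty about
    # whether quantity of experience scales linearly with brain mass.
    # Populations and the 3:1 cow ratio are from the comment above; the
    # 214:1 chicken ratio is proxied by the Pekin duck (an assumption).

    POPULATION = {"chickens": 24e9, "cows": 1.3e9}
    BRAIN_RATIO = {"chickens": 214.0, "cows": 3.0}  # human : animal

    def expected_weight(animal, p_scaling):
        """Per-animal weight: with probability p_scaling, experience scales
        with brain mass (weight 1/ratio); otherwise sentience is treated as
        binary (weight 1)."""
        return p_scaling / BRAIN_RATIO[animal] + (1.0 - p_scaling)

    for p in (0.0, 0.5, 1.0):
        totals = {a: POPULATION[a] * expected_weight(a, p) for a in POPULATION}
        print(p, totals)

    # p = 0: the chickens dominate (24e9 vs 1.3e9 human-equivalents).
    # p = 1: the cows dominate (~0.11e9 vs ~0.43e9).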

Comment author: DanArmak 28 July 2013 10:27:37PM *  0 points [-]

Here are brain masses for some animals:

Isn't it better to consider brain-to-body mass ratios? A lion isn't 1.5 orders of magnitude smarter than a housecat. I wouldn't assume that quantity of experience is linear in the number of neurons.

Comment author: CarlShulman 28 July 2013 10:46:11PM 2 points [-]

Computer performance in chess (among many other things) scales logarithmically or worse with computer speeds/hardware. Humans with more time and larger collaborating groups also show diminishing returns.

But if we're talking about reinforcement learning and sensory experience in themselves, we're not interested in the (sublinear) usefulness of scaling for intelligence, but in the number of subsystems undergoing the morally relevant processes. Neurons are still a rough proxy for that (details of the balance of nervous system tissue between functions, energy supply, firing rates, and other issues would matter substantially), but should be far closer to linear.

Comment author: Lukas_Gloor 28 July 2013 10:29:26PM 13 points [-]

I fully agree with this point you make; I should have mentioned it. I think "probabilistic discounting" should refer both to "probability of being sentient" and to "intensity of experiences given sentience". I'm not convinced that (relative) brain size makes a difference in this regard, but I certainly wouldn't rule it out, so this indeed factors in probabilistically and I don't consider it to be speciesist.

Comment author: jkaufman 29 July 2013 12:09:06PM *  0 points [-]

How small a subsystem can experience pleasure or pain? If we developed configurations specifically for this purpose and sacrificed all the other things you normally want out of a brain we could likely get far more sentience per gram of neurons than you get with any existing brain. If someone built a "happy neuron farm" of these, would that be a good thing? Would a "sad neuron farm" be bad?

EDIT: expanded this into a top level post.

Comment author: CarlShulman 29 July 2013 12:59:47PM 2 points [-]

I don't think that we should be confident that such things are all that matter (indeed, I think that's not true), or that the value is independent of features like complexity (a thermostat program vs an autonomous social robot).

If someone built a "happy neuron farm" of these, would that be a good thing? Would a "sad neuron farm" be bad?

I would answer "yes" and "yes," especially in expected value terms.

Comment author: [deleted] 29 July 2013 07:58:10PM 0 points [-]

For example wikipedia gives the world .

I think there's a link not showing due to broken formatting.

Comment author: Armok_GoB 29 July 2013 09:55:19PM 0 points [-]

Brain size or number of neurons might work within a general group such as "mammals"; however, birds, for example, seem to be significantly smarter in some sense than mammals with equivalently-sized brains, probably owing to some difference in underlying architecture.

Comment author: CarlShulman 29 July 2013 10:46:25PM 1 point [-]

Some of the relevant differences to look at are energy consumption, synapses, relative emphasis on different brain regions, selective pressure on different functions, sensory vs cognitive processing, neuron and nerve size (which affects speed and energy use), speed/firing rates. I'm just introducing the basic point here. Also see my other point about the distinction between intelligence and experience.

Comment author: Kawoomba 28 July 2013 10:21:02PM 3 points [-]

However, such factors can't apply for ethical reasoning at a theoretical/normative level, where all the relevant variables are looked at in isolation in order to come up with a consistent ethical framework that covers all possible cases.

Why should there be a "correct" solution for ethical reasoning? Is there a normative level regarding which color is the best? People function based on heuristics, which are calibrated on general cases, not on marginal cases. While I'm all for showing inconsistencies in one's statements, there is no inconsistency in saying "as a general rule, I value X, but in these cases, I value Y, which is different from X".

Why the impetus towards some one-size-fit-all solution? And more importantly, why disallow that marginal cases get special "if-clauses"?

Imagine forcing a programmer to treat all incoming data with the exact same rule. It would be a disaster. Adding an "as a general rule" clause solves the inconsistencies; it's not cheating, and it's not something in need of fixing.

Comment author: DanArmak 28 July 2013 10:44:58PM 2 points [-]

If you want your choices to be consistent over time, you still need a meta-rule for choosing and modifying your rules. How do you know what exceptions to make?

Personally, I don't think my choices (as a human) can be consistent in this sense, and I'm pretty resigned to following my inconsistent moral intuitions. Others disagree with me on this.

Comment author: Kawoomba 28 July 2013 10:51:58PM 0 points [-]

Your choices won't be consistent over time anyway, because you won't be consistent over time. For your centenarian self, the current you is but a distant memory.

Comment author: DanArmak 28 July 2013 10:57:26PM 1 point [-]

That my desires won't be consistent over very long periods of time is no reason to make my choices inconsistent over short periods of time when my desires don't change much.

Comment author: MugaSofer 29 July 2013 03:51:14PM 0 points [-]

Why should there be a "correct" solution for ethical reasoning? Is there a normative level regarding which color is the best?

Well, obviously this wouldn't hold for, say, paperclippers ... but while I suspect you may disagree, most people seem to think human ethics are not mutually contradictory and are, in fact, part of the psychological unity of humankind (most include caveats for psychopaths, political enemies, and those possessed by demons).

Imagine forcing a programmer to treat all incoming data with the exact same rule.

Such a (highly complex) rule is known as a "program".

Comment author: wedrifid 29 July 2013 04:00:11PM 1 point [-]

but while I suspect you may disagree, most people seem to think human ethics are not mutually contradictory and are, in fact, part of the psychological unity of humankind (most include caveats for psychopaths, political enemies, and those possessed by demons.)

As a bonus, the exception class of "enemies" and "immoral monsters" tends to be contrived to include anyone who has a sufficient degree of difference in ethical preferences. All True humans are ethically united...

Comment author: Jiro 29 July 2013 06:37:36PM *  0 points [-]

In context here, a "rule" is shorthand for a general rule, not for any sort of algorithm whatsoever. A rule that describes a specific case by name is not a general rule.

most people seem to think human ethics are not mutually contradictory

Thought experiment: Go up to a random person and find out how they avoid the Repugnant Conclusion. Repeat with some other famous ethical paradoxes. Even if some of those have solutions, you can bet the average person 1) won't have thought about them, and 2) won't be able to come up with a solution that holds up to examination.

Most people have not thought about enough marginal cases involving human ethics to be able to determine whether human ethics is mutually contradictory.

Comment author: MugaSofer 29 July 2013 10:50:40PM *  -1 points [-]

In context here, a "rule" is shorthand for a general rule, not for any sort of algorithm whatsoever. A rule that describes a specific case by name is not a general rule.

That was mostly a joke :)

(My point, if you could call it such, was that morality need only be consistent, not simple - although most special cases turn out to be caused by bias, rather than actual special cases, so it was a rather weak point. And, apparently, a rather weak joke.)

Thought experiment: Go up to a random person and find out how they avoid the Repugnant Conclusion. Repeat with some other famous ethical paradoxes. Even if some of those have solutions, you can bet the average person 1) won't have thought about them, and 2) won't be able to come up with a solution that holds up to examination.

And yet, funnily enough, most people agree on most things, and the marginal cases are not unique for every person. Ethics, as far as I can tell, is a part of the psychological unity of mankind.

That said, there is the much more worrying prospect that these common values could be internally incoherent, but we seem to have intuitions for resolving conflicts between lower-level intuitions and I think - hope - it all works out in the end.

(Kawoomba has stated that he considers it ethical for a parent to destroy the earth rather than risk their family, though, so perhaps I'm being overly generous in this regard. pulls face)

Comment author: DanArmak 28 July 2013 10:22:30PM *  5 points [-]

While I was writing this comment, CarlShulman posted his, which makes essentially the same point. But since I had already written a longer comment, I'm posting mine too. (Writing quickly is hard!)

In practice we must have a quantitative model of how much "moral value" to assign an animal (or human). I think your position that:

x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads.

Is wrong, and the reasons for that fall out of your own arguments.

As you point out, there is a continuum between any two living things (common descent). Nevertheless we all think that at least some animals have zero, or nearly zero, moral weight: insects, perhaps, but you can go all the way to amoebas. You must either 1) assign gradually diminishing moral value to beings ranged from humans to amoebas; or 2) choose an arbitrary, precise point (or several points) at which the value decreases sharply, with modern species boundaries being an obvious Schelling point but not the only option. Similar arguments have of course been made about the continuum between a sperm and an egg, and an eventual human being.

Option 1 lets you assign non-human animals moral value. But then, you must specify the criteria you use to calculate that value, from your list A-G or otherwise. These same criteria will then tell you that some humans have less moral value than others: children, people with advanced dementia or other severe mental deficiencies, etc. Some biological humans may have much less value than, say, a chicken (babies), or none at all (fetuses). Also, at least some post-humans, aliens, and AIs would have far more moral value than any human - even to the point of becoming utility monsters for total utilitarians.

Option 2 is completely arbitrary in terms of what animals you value, so (among its other problems) people won't be able to agree about it. And if you don't determine moral value by measuring some underlying property, you won't be able to determine the value of radical new varieties, such as post-humans or AIs.

You seem to support option 2 (value everyone equally) but you don't say where you draw the line - and that's the crucial question.

My own position is option 1, open to modification against failure modes like utility monsters that would conflict too strongly with my other moral intuitions.
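
A toy contrast of the two options, in Python - every function, scale, and threshold here is invented purely for illustration (the clip in option 1 is one crude way to encode the "modification against utility monsters"):

    # Option 1: moral weight as a smooth function of some scalar summary
    # of the underlying criteria (sentience, self-awareness, ...), clipped
    # so no being's weight grows without bound (blocking utility monsters).
    # Option 2: a step function at an arbitrary boundary, e.g. species.

    def option1_weight(capacity, cap=10.0):
        """Gradually varying weight, increasing with measured capacity."""
        return min(max(capacity, 0.0), cap)

    def option2_weight(inside_boundary):
        """Sharp line: full weight inside the boundary, none outside."""
        return 1.0 if inside_boundary else 0.0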

The claim is that there is no way to block this conclusion without: 1. using reasoning that could analogically be used to justify racism or sexism or 2. using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies in cases where utilitarian calculations prohibit it.

My reasoning can't justify racism and sexism, because my moral criteria don't differ noticeably between sexes and races. This is an empirical fact. If it were true that e.g. some race was less sentient than other races, then that would be a valid reason to assign people of that race less moral value. But it's just not true.

I don't understand what you mean by (2); could you give an example? If a utilitarian calculation forbids you from doing something, then what could be your reason for doing it anyway? Your utility function can't be separate from your morals; on the contrary, it must incorporate your morals. (Inconsistent morals are a problem, but without a single VNM-compliant utility function, utilitarianism can't tell you anything at all.)

Some other notes:

H: What I care about / feel sympathy or loyalty towards

I would like to note that this is the actual basis of almost all human moral reasoning, and all the rest is post-facto rationalization. When those rationalizations come into conflict with moral intuitions, they are labelled "repugnant conclusions". I think you dismiss this factor far too lightly.

those not willing to bite the bullet about torturing babies are forced by considerations of consistency to care about animal suffering just as much as they care about human suffering.

I am willing to bite the bullet about babies, quite easily in fact. I assign no more value to newborn human babies than I do to chickens. I only care about babies insofar as other humans care about babies.

I do care about animal suffering - in proportion to some of the measures A-G on your list, so less than human suffering, but (for many animals) more than human baby suffering.

I wouldn't mind treating babies like we treat some farm animals; that is not because I value those animals as highly as I do humans, but because I value both babies and those animals much less than I do adult humans. (Some farming methods are acceptable to me, and some are not.)

A sentient being is one for whom "it feels like something to be that being".

Please play rationalist's taboo here. What empirical test or physical fact tells you whether "it feels like something" to be a certain animal? And moreover, quantitatively so - "how much" it feels like something to be that animal?

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past.

Baby-ism and racism have nothing in common (except that you're against both). I don't assign human-level moral status to babies, but I'm not a racist. This is precisely because humans of all races have roughly the same distribution on A-G (and other relevant parameters), whereas newborn babies would test below adults of most (all?) mammalian species.

Comment author: Lukas_Gloor 28 July 2013 10:44:56PM *  6 points [-]

x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads.

By this I meant literally the same amount (and intensity!) of suffering. So I agree with the point you and Carl Shulman make: if it is the case that some animals can only experience so much suffering, then it makes sense to value them accordingly.

You must either 1) assign gradually diminishing moral value to beings ranging from humans to amoebas; or 2) choose an arbitrary, precise point (or several points) at which the value decreases sharply, with modern species boundaries being an obvious Schelling point but not the only option.

I'm arguing for 1), but I would only do it by species in order to save time on calculations. If I had infinite computing power, I would do the calculation for each individual separately, according to indicators of what constitutes capacity for suffering and its intensity. Incidentally, I would also assign at least a 20% chance that brain size doesn't matter; some people in fact hold this view.
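
As a minimal sketch (my own illustration; the capacity figures are placeholders, not measurements) of how that 20% credence would enter the calculation:

    P_SIZE_IRRELEVANT = 0.2  # credence that brain size doesn't matter

    def expected_weight(relative_brain_capacity):
        # If brain size matters, weight scales with capacity; if it
        # doesn't, every sentient being gets full weight. Mix the two
        # hypotheses by their credences.
        return ((1 - P_SIZE_IRRELEVANT) * relative_brain_capacity
                + P_SIZE_IRRELEVANT * 1.0)

    print(expected_weight(1.0))   # adult human: 1.0
    print(expected_weight(0.01))  # small-brained animal: 0.208

Even a modest credence that brain size is irrelevant puts a floor under the expected weight of small-brained animals.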

I don't understand what you mean by (2), could you give an example? If a utilitarian calculation forbids you from doing something, then what could be your reason for doing it anyway?

By "utilitarianism" I meant hedonistic utilitarianism in general, not your personal utility function that (in this scenario) differentiates between sapience and mere sentience. I added this qualifier because "you'd have to be okay with torturing babies" is not a reductio, since utilitarians would have to bite this bullet anyway if they could thereby prevent an even greater amount of suffering in the future.

Please play rationalist's taboo here. What empirical test or physical fact tells you whether "it feels like something" to be a certain animal? And moreover, quantitatively so - "how much" it feels like something to be that animal?

I only have my first-person evidence to go with. This bothers me a lot but I'm assuming that some day we will have solved all the problems in philosophy of mind and can then map out what we mean precisely by "sentience", having it correspond to specific implemented algorithms or brain states.

Baby-ism and racism have nothing in common (except that you're against both). I don't assign human-level moral status to babies, but I'm not a racist. This is precisely because humans of all races have roughly the same distribution on A-G (and other relevant parameters), whereas newborn babies would test below adults of most (all?) mammalian species.

I agree; those are simply the two premises on which the conclusion that we should value all suffering equally is based. You end up with a coherent position by rejecting one or both of them.

Comment author: DanArmak 28 July 2013 11:03:31PM 0 points [-]

I only have my first-person evidence to go with. This bothers me a lot but I'm assuming that some day we will have solved all the problems in philosophy of mind and can then map out what we mean precisely by "sentience", having it correspond to specific implemented algorithms or brain states.

What evidence do you have for thinking that your first-person intuitions about sentience "cut reality at its joints"? Maybe if you analyze what goes through your head when you think "sentience", and then try to apply that to other animals (never mind AIs or aliens), you'll just end up measuring how different those animals are from humans in a completely arbitrary and morally-unimportant implementation feature.

If after solving all the problems of philosophy you found out something like this, would you accept it, or would you say that "sentience" was no longer the basis of your morals? In other words, why might you prefer this particular intuition to other intuitions that judge how similar something is to a human?

Comment author: Lukas_Gloor 28 July 2013 11:21:56PM *  4 points [-]

If I understand it correctly, this is the position endorsed here. I don't think realizing that this view is right would change much for me; I would still try to generalize criteria for why I care about a particular experience, and then care about all instances of the same thing. However, I realize that this would make it much more difficult to convince others to draw the same lines. If the question of whether a given being is sentient translates into whether I have reasons to care about that being, then one part of my argument would fall away. This issue doesn't seem to be endemic to the treatment of non-human animals, though; you'd have it with any kind of utility function that values well-being.

Comment author: timtyler 28 July 2013 10:31:27PM *  1 point [-]

Why even take species-groups instead of groups defined by skin color, weight or height? Why single out one property and not others?

Typically human xenophobia doesn't single out one attribute. The similar are treated preferentially, the different are exiled, shunned, excluded or slaughtered. Nature builds organisms like that: to favour kin and creatures similar, and to give out-group members a very wide berth. So: it's no surprise to find that humans are often racist and speciesist.

Comment author: Qiaochu_Yuan 29 July 2013 12:01:06AM *  13 points [-]

I strongly object to the term "speciesism" for this position. I think it promotes a mindkilled attitude to this subject ("Oh, you don't want to be speciesist, do you? Are you also a sexist? You pig?").

Comment author: Lukas_Gloor 29 July 2013 12:07:33AM 12 points [-]

You pig?

Speciesist language, not cool!

Haha! Anyway, I agree that it promotes a mindkilled attitude (I'm often reading terrible arguments by animal rights people), but on the other hand, for those who agree with the arguments, it is a good way to raise awareness. And the parallels to racism or sexism are valid, I think.

Comment author: Vaniver 29 July 2013 01:20:19AM 10 points [-]

Haha! Anyway, I agree that it promotes a mindkilled attitude (I'm often reading terrible arguments by animal rights people), but on the other hand, for those who agree with the arguments, it is a good way to raise awareness.

I don't think that's a "but on the other hand"; I think that's an "it is a good way to raise awareness because it promotes a mindkilled attitude."

Comment author: SaidAchmiz 29 July 2013 02:30:13AM 2 points [-]

Actually, I think it's precisely the parallels to racism and sexism that are invalid. Perhaps ableism? That's closer, at any rate, if still not really the same thing.

Comment author: Zvi 29 July 2013 01:07:49PM 11 points [-]

Haha only serious. My brain reacts with terror to that reply, with good reason: it has been trained to. You're implicitly threatening those who make counter-arguments with charges of every ism in the book. The number of things I've had to erase because one "can't" say them without at least ending any productive debate is large.

Comment author: Zvi 29 July 2013 12:59:42PM 6 points [-]

It's not only the term. The post explicitly uses that exact argument: Since sexism and racism are wrong, and any theoretical argument that disagrees with me can be used to argue for sexism or racism, if you disagree with me you are a sexist, which is QED both because of course you aren't sexist/racist and because regardless, even if you are, you certainly can't say such a thing on a public forum!

Comment author: Lukas_Gloor 29 July 2013 02:03:59PM 5 points [-]

No no no. I'm not saying "since sexism and racism are wrong"; I'm saying that those who don't want their arguments to be of a sort that could analogously justify racism or sexism (even if the person is neither racist nor sexist) would also need to reject speciesism.

Comment author: Zvi 29 July 2013 02:32:58PM 1 point [-]

Mindkilling-related issues aside, I am going to do my best to un-mindkill at least one aspect of this question, which is why the frame change.

Is this similar to arguing that if the bloody knife was the subject of an illegal search (which we can't allow, because allowing that would lead to other bad things) and is therefore not admissible at trial, then you must not only find the defendant not guilty but actually believe that the defendant did not commit the crime and should be welcomed back into polite society?

Comment author: Lukas_Gloor 29 July 2013 02:56:47PM *  2 points [-]

No, what makes the difference is that you'd be mixing up the normative level with the empirical one, as I explained here (parent of the linked post also relevant).

Comment author: Zvi 29 July 2013 03:22:37PM 1 point [-]

In that post, you seem to be making the opposite case: that you should not reject X (animal testing) simply because your argument could be used to support repugnant proposal Y (unwilling human testing). You say that the indirect consequences of Y would be very bad (as they obviously would), but then you don't argue that one must therefore reject X; instead, you argue that you should support X but reject Y for unrelated reasons, and that you are not required to disregard argument Q that supports both X and Y and thereby reject X (assuming X was in fact utility-increasing).

Or, that the fact that a given argument can be used to support a repugnant conclusion (sexism or racism) should not be a justification for not using an argument. In addition, the argument for brain complexity scaling moral value that you now accept as an edit is obviously usable to support sexism and racism, in exactly the same way that you are using as a counterargument:

For any given characteristic, different people will have different amounts of that characteristic, and for any two groups (male/female, black/white, young/old, whatever) there will be a statistical difference in that measurement (because this isn't physics, and exact equality has probability epsilon, however small the difference). So if you tie any continuous measurement to your moral value of things, or any measurement that could ever not fully apply to anything human, you're racist and sexist.

Comment author: Lukas_Gloor 29 July 2013 03:35:49PM 0 points [-]

In that post, you seem to be making the opposite case: that you should not reject X (animal testing) simply because your argument could be used to support repugnant proposal Y (unwilling human testing). You say that the indirect consequences of Y would be very bad (as they obviously would), but then you don't argue that one must therefore reject X; instead, you argue that you should support X but reject Y for unrelated reasons, and that you are not required to disregard argument Q that supports both X and Y and thereby reject X (assuming X was in fact utility-increasing).

Exactly. This is because the overall goal is increasing utility, and not a societal norm of non-discrimination. (This is of course assuming that we are consequentialists.) My arguments against discrimination/speciesism apply at the normative level, when we are trying to come up with a definition of utility.

For any given characteristic, different people will have different amounts of that characteristic, and for any two groups (male/female, black/white, young/old, whatever) there will be a statistical difference in that measurement (because this isn't physics, and exact equality has probability epsilon, however small the difference). So if you tie any continuous measurement to your moral value of things, or any measurement that could ever not fully apply to anything human, you're racist and sexist.

I wouldn't classify this as sexism/racism. If there are sound reasons for considering the properties in question relevant, then treating beings of different species differently because of a correlation between species, and not because of the species difference itself, is in my view not a form of discrimination.

As I wrote:

It refers to a discriminatory attitude against a being where less ethical consideration i.e. caring less about a being's welfare or interests is given solely because of the "wrong" species membership. The "solely" here is crucial, and it's misunderstood often enough to warrant the redundant emphasis.

Comment author: MugaSofer 29 July 2013 03:46:50PM 1 point [-]

Maybe I was already mindkilled (vegetarian speaking), but it seems like a precisely appropriate term to use, given the content of this post.

What term would you prefer?

[Bonus points: if racism and speciesism were well-known errors of the past, would sexist!you object to the term "sexism" on the same grounds?]

Comment author: Qiaochu_Yuan 29 July 2013 06:52:51PM *  1 point [-]

Humanism, maybe. Yes.

Comment author: MugaSofer 29 July 2013 10:15:47PM *  -2 points [-]

That's taken, though ... but then it's been taken before, and repurposed; it's such a catchy word with such lovely connotations.

Comment author: Xodarap 30 July 2013 12:23:18PM 2 points [-]

It's not sexist to say that women are more likely to get breast cancer. This is a differentiation based on sex, but it's empirically founded, so not sexist.

Similarly, we could say that ants' behavior doesn't appear to be affected by narcotics, so we should discount the possibility of their suffering. This is a judgement based on species, but is empirically founded, so not speciesist.

Things only become _ist if you say "I have no evidence to support my view, but consider X to be less worthy solely because they aren't in my race/class/sex/species."

I genuinely don't think anyone on LW thinks speciesism is OK.

Comment author: SaidAchmiz 30 July 2013 01:14:05PM 6 points [-]

You evade the issue, I think. Is it sexist (or _ist) if you say "I consider X to be less worthy because they aren't in my race/class/sex/species, and I do have evidence to support my view"?

Sure, saying women are more likely to get breast cancer isn't sexist; but this is a safe example. What if we had hard evidence that women are less intelligent? Would it be sexist to say that, then? (Any objection that contains the words "on average" must contend with the fact that any particular woman may have a breast cancer risk that falls anywhere on the distribution, which may well be below the male average.)
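
A minimal sketch of that point, with invented numbers (the trait and figures are purely illustrative):

    from statistics import NormalDist

    group_a = NormalDist(mu=100, sigma=15)  # hypothetical trait, group A
    group_b = NormalDist(mu=103, sigma=15)  # group B, slightly higher mean

    # Fraction of group A individuals who nonetheless score above
    # group B's mean:
    print(1 - group_a.cdf(group_b.mean))  # ~0.42

A real difference in group means still leaves roughly 42% of the "lower" group above the "higher" group's average, so the group statistic settles nothing about any particular individual.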

No one is saying "I think pigs are less worthy than humans, and this view is based on no empirical data whatever; heck, I've never even seen a pig. Is that something you eat?"

We have tons of empirical data about differences between the species. The argument is about exactly which of the differences matter, and that is unlikely to be settled by passing the buck to empiricism.

Comment author: AndHisHorse 30 July 2013 01:18:30PM 0 points [-]

The issue, though, is not that beliefs are founded on no evidence. Rather, it is that they are founded on insufficient evidence. It would, in my estimation, require some strange, inhuman bigot to say such a thing; rather, people will hold up their prejudices based on evidence which sounds entirely reasonable to them. There is nearly always a justification for treating the other tribe poorly; healthy human psychology doesn't do well with baseless discrimination, so it invents (more accurately, seeks out with a hefty dose of confirmation bias) reasons that its discrimination is well-founded.

In this case, the fact that ants do not appear to be affected by narcotics is evidence that they are different from humans, but it seems that it is insufficient to discount their suffering. I am very curious, however, as to why a lack of behavioral reaction to narcotics indicates that ant suffering is morally neutral. I feel that there is an implicit step I missed there.

Comment author: Lumifer 30 July 2013 06:23:47PM 5 points [-]

I genuinely don't think anyone on LW thinks speciesism is OK.

Ah, the slaying of a beautiful hypothesis by one little ugly fact... :-D

I do feel speciesism is perfectly fine.

Comment author: pianoforte611 29 July 2013 04:08:54AM *  1 point [-]

Hmm, maybe I didn't read the argument carefully enough, but it seems that the argument from marginal cases proves too much: non-US citizens should be allowed to serve in the army, some people without medical licenses should be allowed to practice as surgeons, and many more things.

Comment author: Lukas_Gloor 29 July 2013 04:58:51AM *  3 points [-]

This would be mixing up the normative level with the empirical level. The argument from marginal cases seeks to establish that we have reasons against treating beings of different species differently, all else being equal. Under consequentialism, the best path of action (including motives, laws, societal norms to promote and so on) would already be specified. It would be misleading to apply the same basic moral reasoning again on the empirical level where we have institutions like the US army or the establishment of surgeons. Institutions like the US army are (for most people anyway and outside of political philosophy) not terminal values. Whether it increases overall utility if we enforce "non-discrimination" radically in all domains is an empirical question determined by the higher order goal of achieving as much utility as possible.

And whenever this is not the case (which it may well be, since there is no reason to assume that the empirical level perfectly mirrors the normative one), then "all else" is not equal. Because it might not be overall beneficial for society, or in terms of your terminal values, it could be a bad idea to allow someone otherwise well-qualified but without a medical license to practice as a surgeon. There might be negative side-effects of such a practice.

A practical example of this would be animal testing. If enough people were consequentialists and unbiased, we could experiment on humans and thereby accelerate scientific progress. However, if you try to do this in the real world, there is the danger that it will go wrong because people lose track of altruistic goals and replace them with other things (although this argument applies almost as much to animal testing), and there is a big likelihood of starting a civil war or worse if someone actually began experimenting on humans (this argument doesn't apply to animal testing). So even though experimenting on animals is intrinsically on par with experimenting on humans with similar cognitive capacities, only the former even stands a chance at increasing overall utility rather than decreasing it. Here the indirect consequences are decisive.

(Edit: In this sense, my example about men and a right to abortion was misleading, because that would of course be a legal right, where empirical factors come into play. But I was using the example to show that being against some form of discrimination doesn't mean that all differences between beings ought to be ignored.)

Comment author: pianoforte611 29 July 2013 06:43:05PM 1 point [-]

Thank you for the response, I think I get the argument now.

I don't have a good answer for why we allow animal testing but not human testing. If one is fine with animal experimentation, then there doesn't seem to be any way to object to engineering human babies that would have human physiology but animal-level cognition, and conducting tests on them. While the idea does make me uncomfortable, I think I would bite that bullet.

Comment author: wedrifid 29 July 2013 07:22:10AM 2 points [-]

but it seems that the argument from marginal cases proves too much. It proves that non-US citizens should be allowed to serve in the US army,

The argument from marginal cases may well prove too much, but this strikes me as a failed counter-example. Using non-citizens as part of a military force is a reasonably standard practice. Depending on the circumstances it can be the smart thing to do. (Conscripting citizens as cannon fodder tends to promote civil unrest.)

Comment author: pianoforte611 29 July 2013 11:53:53AM *  2 points [-]

Sure, I shouldn't have used the US military as an example - I retract it. Trying again: the argument from marginal cases proves that some 12-year-olds should be allowed to vote.

Comment author: SaidAchmiz 29 July 2013 01:17:26PM 1 point [-]

This seems like a reasonable thing to prove!

Comment author: MugaSofer 29 July 2013 03:36:41PM *  3 points [-]

... what makes you think that's wrong? I remember being twelve; it seems to me basing that sort of thing on numerical age is fairly daft, albeit relatively simple.

Comment author: Lukas_Gloor 29 July 2013 03:43:10PM 2 points [-]

Indeed, I wouldn't object to this directly. One could however argue that it is bad for indirect reasons. It would acquire huge administrative efforts to test teens for their competence at voting, and the money and resources might be better spent on education or the US army (jk). In order to save administrative costs, using a Schelling point at the age of, say, 18, makes perfect sense, even though there certainly is no magical change taking place in people's brains the night of their 18th birthday.

Comment author: DanArmak 30 July 2013 09:04:22PM *  4 points [-]

It would acquire huge administrative efforts to test teens for their competence at voting

(You meant require, not acquire)

It would also require huge administrative efforts to test 18-year-olds for competence. So we simply don't, and let them vote anyway. It's not clear to me that letting all 12-year-olds vote is so terribly much worse. They mostly differ from adults on age-relevant issues: they would probably vote for more rights for schoolchildren.

It may or may not be somewhat worse than the status quo, but (for comparison) we don't take away the vote from all convicted criminals, or all demented people, or all people with IQ below 60... Not giving teenagers civil rights is just a historical fact, like sexism and racism. It doesn't have a moral rationale, only rationalizations.

Comment author: wedrifid 29 July 2013 03:54:08PM 4 points [-]

Sure, I shouldn't have used the US military as an example - I retract it. Trying again: the argument from marginal cases proves that some 12-year-olds should be allowed to vote.

This slippery slope really isn't sounding all that bad...

Comment author: Nick_Beckstead 29 July 2013 08:39:54AM *  2 points [-]

While H is an unlikely criterion for direct ethical consideration (it could justify genocide in specific circumstances!), it is an important indirect factor. Most humans have much more empathy for fellow humans than for nonhuman animals. While this is not a criterion for giving humans more ethical consideration per se, it is nevertheless a factor that strongly influences ethical decision-making in real-life.

This objection doesn't work if you rigidify over the beings you feel sympathy toward in the actual world, given your present mental capacities. And that is clearly the best version of this view, and the one that people probably really mean when they say this. On this version of the view, you don't say that if you didn't care about humans, humans wouldn't matter. You do have to say, "If it actually turns out that I don't care about humans, then humans don't matter." Of course, you might want to change the view if things (very unexpectedly!) don't turn out that way.

I don't think this version gives animals no weight, but I think it typically gives animals less weight than humans. (Disclaimer that should be unnecessary: I recognize that there are other objections to H. It is not necessary to respond to what I have said by raising a distinct objection to H.)

Comment author: MrMind 29 July 2013 10:44:00AM 0 points [-]

The claim is that there is no way to block this conclusion without:

  1. using reasoning that could analogically be used to justify racism or sexism or
  2. using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies in cases where utilitarian calculations prohibit it.

But, on the other side, there's no way to reinforce the argument to prevent it from going to the other extreme: what negates the interpretation of an amoeba retracting from a probe as "pain"? Is it just the anatomical quality of the nerves involved, or is it the computation itself that matters? In either case, the argument is doomed.
The main problem, it seems to me, is that caring as a basis for a moral argument is really not apt to be captured by a real number.

Comment author: Lukas_Gloor 29 July 2013 01:45:58PM 2 points [-]

I edited the very end of my post to account for this. I think the question of whether a given organism is sentient is an empirical question, i.e. one that we can unambiguously figure out with enough knowledge and computing power. Some people do disagree with that, and in that case things would become more complicated.

Comment author: Larks 29 July 2013 12:59:35PM 1 point [-]

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past.

In the past, the arguments against sexism and racism were things like "they're human too", "they can write poetry too", "God made all men equal" and "look how good they are at being governesses". None of these apply to animals; they're not human, they don't write poetry, God made them to serve us, and they're not very good governesses. Indeed, you seem to think all these are irrelevant criteria.

Speaking as a 21st century person in a liberal, western country, I believe sexism and racism are wrong basically because other people told me they were, who believed that because ... who believed that because they were convinced by argumentum ad governess. But now I've just discovered that argumentum ad governess is invalid. Should I not withdraw my belief that sexism and racism are wrong, which apparently I have in some sense been fooled into, and adopt the traditional, time-honoured view that they are not?

Comment author: Zvi 29 July 2013 01:20:15PM 0 points [-]

Many arguments here seem to take the mindkilling form of "If we had to derive our entire system of moral value based on explicitly stated arguments, and follow those arguments ad absurdum, bad thing results."

Since bad thing is bad, and you say it is in some situation justified, clearly you are wrong, with the (reasonably explicit) accusation that if you use this line of reasoning you are (sexist! racist! in favor of killing babies! in favor of genocide! or worse, not being properly rational!)

Comment author: Lukas_Gloor 29 July 2013 01:38:11PM *  3 points [-]

That's common practice in ethics.

You need something to work with; otherwise ethical reasoning couldn't get off the ground. But it doesn't necessarily imply that people are not being properly rational ("irrational" would have to be defined according to a goal, and ethics is about goals).

Comment author: Zvi 29 July 2013 02:17:16PM 1 point [-]

One, do you believe that those five links also take a similarly mindkilling form and that mindkilling is justified because it is standard practice in ethics? If this is true, does the fact that it is standard practice justify it, and if so what determines what is and isn't justified by an appeal to standard practice?

Refuting counter-argument X by saying that if X was your full set of ethical principles you would reach repugnant conclusion Y is at its strongest an argument that X is not a complete and fully satisfactory set of ethical principles. I fail to see how it can be a strong argument that X is invalid as a subset of ethical principles, which is how it appears to have been used above.

In addition, when we use an argument of the form "X leads to some conclusion Y, where Y can be considered a subset of Z, and all Z are bad", we imply that one can (even in theory) create an internally consistent ethical system avoiding all such Z, and that any given principle set P which under some circumstance leads to an action in some such set Z is wrong. I would claim that if you include all your examples of such Z, it is fairly easy to construct situations in which the sets Z contain all possible actions, and thus rule out all ethical systems P, which would imply that no such ethical system can exist; if you well-define all your terms, I would be happy to attempt to construct such a scenario.

Comment author: Lukas_Gloor 29 July 2013 02:53:14PM 2 points [-]

Many arguments here seem to take the mindkilling form of "If we had to derive our entire system of moral value based on explicitly stated arguments, and follow those arguments ad absurdum, bad thing results."

I don't think this form of argument is mindkilling. "Bad thing" needs to refer to something the person whose position you're criticizing considers unacceptable too. You'd be working with their own intuitions and assumptions. So I'm not advocating begging the question by postulating that some things are bad tout court (that would be mindkilling indeed).

One, do you believe that those five links also take a similarly mindkilling form and that mindkilling is justified because it is standard practice in ethics?

The first one is just a description of the most common ethical methodology. The other papers I'm linking to are excellent, with the exception of the third one, which I consider rather weak. But these are all papers that use the procedure I quoted from you.

I fail to see how it can be a strong argument that X is invalid as a subset of ethical principles, which is how it appears to have been used above.

This doesn't necessarily follow, but if I discover that the set of principles I endorse leads to conclusions I definitely do not endorse, then I have reason to fundamentally question some of the original principles. I could also go for modifications that leave the overall construct intact, but that usually comes with problems as well.

I'm not sure whether I understand your last paragraph. It seems like you're talking about impossibility theorems. This has indeed been done, for instance for population ethics (the second paper I linked to above). There are two ways to react to this: 1) Giving up, or 2) reconsidering which conclusions go under Z. Personally I think the second option makes more sense.

Comment author: [deleted] 29 July 2013 02:03:43PM 1 point [-]

If there were no intrinsic reasons for giving moral consideration to babies, then a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it.

If there were no intrinsic reasons for a feather to fall slower than a rock, then in a vacuum a feather would fall just as fast as a rock as long as there's no air. But you don't neglect the viscosity of air when designing a parachute.

Comment author: [deleted] 29 July 2013 02:07:40PM 4 points [-]

If I was told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not like I will be less afraid of the torture or care less about averting it!

People get anaesthesia before undergoing surgery and get drunk before risking social embarrassment all the time.

Comment author: Lukas_Gloor 29 July 2013 02:12:36PM *  5 points [-]

Animals are not walking around anaesthetized, and I don't think the primary reason why alcohol helps with pain is that it makes you dumber (I might be wrong about this).

Comment author: Qiaochu_Yuan 29 July 2013 07:08:26PM *  7 points [-]

Also, standard argument against a short, reasonable-looking list of ethical criteria: no such list will capture complexity of value. They constitute fake utility functions.

Comment author: Lukas_Gloor 30 July 2013 03:02:23AM 0 points [-]

My utility function feels quite real to me, and I prefer simplicity and elegance over complexity. Besides, I think you can still have lots of terminal values and not discriminate against animals (in terms of suffering); I don't think those are mutually exclusive.

Comment author: Lumifer 29 July 2013 08:14:27PM 10 points [-]

We can imagine a continuous line-up of ancestors, always daughter and mother, from modern humans back to the common ancestor of humans and, say, cows, and then forward in time again to modern cows. How would we then divide this line up into distinct species? Morally significant lines would have to be drawn between mother and daughter, but that seems absurd!

That's a common fallacy. Let me illustrate:

The notions of hot and cold water are nonsensical. The water temperature is continuous from 0C to 100C. How would you divide this into distinct areas? You would have to draw a line between neighboring values different by tiny fractions of a degree, but that seems absurd!

Comment author: Lukas_Gloor 30 July 2013 02:59:39AM *  3 points [-]

I'm not the one arguing for dividing this up into distinct areas; my whole point was to just look at the relevant criteria and nothing else. If the relevant criterion is temperature, you get a gradual scale for your example. If it is sentience, you have to look at each individual animal separately and ignore species boundaries.

Comment author: Lumifer 30 July 2013 04:20:35AM 0 points [-]

I'm not the one arguing for dividing this up into distinct areas

Right, you're the one arguing for complete continuity in the species space and lack of boundaries between species. Similar to the lack of boundary between cold and hot water.

you have to look at each individual animal separately and ignore species boundaries.

I'm confused. You seem to think it's useful to sit by an anthill and test each individual ant for sentience..?

Comment author: drnickbone 30 July 2013 06:04:23PM 3 points [-]

For a morally relevant example, it is quite absurd to suppose that humans aged 18 years and 0 days are mature enough to vote, whereas humans aged 17 years and 364 days are not mature enough. So voting ages are morally unacceptable?

Ditto: ages for drinking alcohol, sexual consent, marriage, joining the armed services etc.

Comment author: Armok_GoB 29 July 2013 09:47:51PM 5 points [-]

DISCLAIMER: the following is not necessarily my own opinion or belief, but is offered more in the spirit of steelmanning:

There seem to be a number of signs that the deciding factor might be the ability to form long-term memories, especially if we go into very near mode.

  • It seems that if we extrapolate volition for an individual that is made to suffer with or without memory blocking in various sequences, allowing it to choose tradeoffs, it'll repeatedly observe clicking a button labelled "suffer horrific torture with suppressed memory" followed by blacking out, and clicking a button labelled "suffer average torture with functioning memory" followed by being tortured. It'd thus learn to value experiences without memory much less.

  • If I remember correctly, some anaesthetics used for surgery basically paralyse you and disable memory formation, and this is not seen as an outrage or horrifying, even by those who have experienced or will experience it.

  • If we consider increasing the intelligence of various animals while directing them to become humanlike, then by empathic modelling it seems that those capable of forming long-term memories beforehand would identify with their former selves, get angry at people who had harmed them, empathize strongly with and prevent suffering in beings similar to what they were before, etc., while for those that couldn't, the opposite would be true.

  • If I were given the choice to have one type of cognitive functionality disabled before being tortured, in almost all circumstances it seems that disabling the ability to form long-term memories would be the best choice.