Arguments Against Speciesism

Post author: Lukas_Gloor 28 July 2013 06:24PM

There have been some posts about animals lately, for instance here and here. While normative assumptions about the treatment of nonhumans played an important role in the articles and were debated at length in the comment sections, I was missing a concise summary of these arguments. This post from over a year ago comes closest to what I have in mind, but I want to focus on some of the issues in more detail.

A while back, I read the following comment in a LessWrong discussion on uploads:

I do not at all understand this PETA-like obsession with ethical treatment of bits.

Aside from (carbon-based) humans, which other beings deserve moral consideration? Nonhuman animals? Intelligent aliens? Uploads? Nothing else?

This article is intended to shed light on these questions; it is however not the intent of this post to advocate a specific ethical framework. Instead, I'll try to show that some ethical principles held by a lot of people are inconsistent with some of their other attitudes -- an argument that doesn't rely on ethics being universal or objective. 

More precisely, I will develop the arguments behind anti-speciesism (and the rejection of analogous forms of discrimination, such as discrimination against uploads) to point out common inconsistencies in some people's values. This will also provide an illustrative example of how coherentist ethical reasoning can be applied to shared intuitions. If there are no shared intuitions, ethical discourse will likely be unfruitful, so it is likely that not everyone will draw the same conclusions from the arguments here. 

 

What Is Speciesism?

Speciesism, a term popularized (but not coined) by the philosopher Peter Singer, is meant to be analogous to sexism or racism. It refers to a discriminatory attitude against a being, where less ethical consideration (i.e. caring less about a being's welfare or interests) is given solely because of the "wrong" species membership. The "solely" here is crucial, and it is misunderstood often enough to warrant the redundant emphasis.

For instance, it is not speciesist to deny pigs the right to vote, just like it is not sexist to deny men the right to have an abortion performed on their body. Treating beings of different species differently is not speciesist if there are relevant criteria for doing so. 

Singer summarized his case against speciesism in this essay. The argument that does most of the work is often referred to as the argument from marginal cases. A perhaps less anthropocentric, more fitting name would be argument from species overlap, as some philosophers (e.g. Oscar Horta) have pointed out. 

The argument boils down to the question of choosing relevant criteria for moral concern. What properties do human beings possess that make us think that it is wrong to torture them? Or to kill them? (Note that these are two different questions.) The argument from species overlap points out that all the typical or plausible suggestions for relevant criteria apply just as much to dogs, pigs or chickens as they do to human infants or late-stage Alzheimer's patients. Therefore, giving less ethical consideration to the former would be based merely on species membership, which is just as arbitrary as choosing race or sex as the relevant criterion (further justification for that claim follows below).

Here are some examples of commonly suggested criteria. Readers may want to pause at this point and think about which criteria they would use to decide whether it is wrong to inflict suffering on a being (and, separately, which are relevant to the wrongness of killing).

 

The suggestions are:

A: Capacity for moral reasoning

B: Being able to reciprocate

C: (Human-like) intelligence

D: Self-awareness

E: Future-related preferences; future plans

E': Preferences / interests (in general)

F: Sentience (capacity for suffering and happiness)

G: Life / biological complexity

H: What I care about / feel sympathy or loyalty towards

 

The argument from species overlap points out that not all humans are equal. The sentiment behind "all humans are equal" is not that they are literally equal, but that equal interests/capacities deserve equal consideration. None of the above criteria, except (in some empirical cases) H, implies that human infants or late-stage dementia patients should be given more ethical consideration than cows, pigs or chickens.

While H is an unlikely criterion for direct ethical consideration (it could justify genocide in specific circumstances!), it is an important indirect factor. Most humans have much more empathy for fellow humans than for nonhuman animals. While this is not a criterion for giving humans more ethical consideration per se, it is nevertheless a factor that strongly influences ethical decision-making in real life.

However, such factors can't apply to ethical reasoning at a theoretical/normative level, where all the relevant variables are looked at in isolation in order to come up with a consistent ethical framework that covers all possible cases.

If there were no intrinsic reasons for giving moral consideration to babies, then a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it. If we consider this implication to be unacceptable, then the same must apply for the situations nonhuman animals find themselves in on farms.

Side note: The question of whether killing a given being is wrong, and if so, "why" and "how wrong exactly", is complex and outside the scope of this article. The focus will therefore be on suffering rather than killing, and by suffering I mean something like wanting to get out of one's current conscious state, or wanting to change some aspect of it. The empirical issue of which beings are capable of suffering is a different matter that I will (only briefly) discuss below. So in this context, giving a being moral consideration means that we don't want it to suffer, leaving open the question whether killing it painlessly is bad/neutral/good or prohibited/permissible/obligatory.

The main conclusion so far is that if we care about all the suffering of members of the human species, and if we reject question-begging reasoning that could also be used to justify racism or other forms of discrimination, then we must also care fully about suffering happening in nonhuman animals. This would imply that x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads. (Though admittedly the latter wouldn't be anti-speciesist but rather anti-"substratist", or anti-"fleshist".)

The claim is that there is no way to block this conclusion without:

1. using reasoning that could analogically be used to justify racism or sexism
or
2. using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies in cases where utilitarian calculations prohibit it.

I've tried and have asked others to try -- without success. 

 

Caring about suffering

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past. 

Some might be willing to bite the bullet at this point, trusting some strongly held ethical principle of theirs (e.g. A, B, C, D, or E above) all the way to the conclusion of excluding humans who lack certain cognitive capacities from moral concern. One could point out that people's empathy and indirect considerations about human rights, societal stability and so on will ensure that this "loophole" in such an ethical view almost certainly remains without consequences for beings with human DNA. It is a convenient Schelling point, after all, to care about all humans (or at least all humans outside their mother's womb). However, I don't see why absurd conclusions that will likely remain hypothetical would be significantly less bad than other absurd conclusions. Their mere possibility undermines the whole foundation one's decisional algorithm is grounded in. (Compare hypothetical problems for specific decision theories.)

Furthermore, while D and E seem plausible candidates for reasons against killing a being with these properties (E is in fact Peter Singer's view on the matter), none of the criteria from A to E seem relevant to suffering, to whether a being can be harmed or benefitted. The case for these being bottom-up morally relevant criteria for the relevance of suffering (or happiness) is very weak, to say the least. 

Maybe that's the speciesist's central confusion: that the rationality/sapience of a being is somehow relevant to whether its suffering matters morally. Clearly, for us ourselves, this does not seem to be the case. If I were told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not as though I would be less afraid of the torture or care less about averting it!

Those who do consider biting the bullet should ask themselves whether they would have defended that view in all contexts, or whether they might be driven towards such a conclusion by a self-serving bias. There seems to be a strange and sudden increase in the frequency of people who are willing to claim that there is nothing intrinsically wrong with torturing babies when the subject is animal rights, or more specifically, the steak they intend to have for dinner.

It is an entirely different matter if people genuinely think that animals or human infants or late-stage demented people are not sentient. To be clear about what is meant by sentience: 

A sentient being is one for whom "it feels like something to be that being". 

I find it highly implausible that only self-aware or "sapient" beings are sentient, but if true, this would constitute a compelling reason against caring for at least most nonhuman animals, for the same reason that it would be pointless to care about pebbles for the pebbles' sake. If all nonhumans truly weren't sentient, then obviously singling out humans for the sphere of moral concern would not be speciesist.

What irritates me, however, is that anyone advocating such a view should, it seems to me, still factor in a significant probability of being wrong, given that both philosophy of mind and the neuroscience that goes with it are hard and, as far as I'm aware, not quite settled yet. The issue matters because of the huge numbers of nonhuman animals at stake and because of the terrible conditions these beings live in.

I rarely see this uncertainty acknowledged. If we imagine the torture-scenario outlined above, how confident would we really be that the torture "won't matter" if our own advanced cognitive capacities are temporarily suspended? 

 

Why species membership really is an absurd criterion

At the beginning of the article, I wrote that I'd get back to this for those not convinced. Some readers may still feel that there is something special about being a member of the human species. Some may be tempted to think about the concept of "species" as if it were a fundamental concept, a Platonic form.

The following likely isn't news to most of the LW audience, but it is worth spelling out anyway: There exists a continuum of "species" in thing-space as well as in the actual evolutionary timescale. The species boundaries seem obvious just because the intermediates kept evolving or went extinct. And even if that were not the case, we could imagine it. The theoretical possibility is enough to make the philosophical case, even though psychologically, actualities are more convincing.

We can imagine a continuous line-up of ancestors, always daughter and mother, from modern humans back to the common ancestor of humans and, say, cows, and then forward in time again to modern cows. How would we then divide this line up into distinct species? Morally significant lines would have to be drawn between mother and daughter, but that seems absurd! There are several different definitions of "species" used in biology. A common criterion -- for sexually reproducing organisms anyway -- is whether groups of beings (of different sex) can have fertile offspring together. If so, they belong to the same species. 

That is a rather odd way of determining whether one cares about the suffering of some hominid creature in the line-up of ancestors. Why should the ability to produce fertile offspring be relevant to whether some instance of suffering matters to us?

Moreover, is that really the terminal value of people who claim they only care about humans, or could it be that they would, upon reflection, revoke such statements?

And what about transhumanism? I remember that a couple of years ago, I thought I had found a decisive argument against human enhancement. I thought it would likely lead to speciation, and somehow the thought of that directly implied that posthumans would treat the remaining humans badly, and so the whole thing became immoral in my mind. Obviously this is absurd; there is nothing wrong with speciation per se, and if posthumans are anti-speciesist, then the remaining humans would have nothing to fear! But given the speciesism in today's society, it is all too understandable that people would be concerned about this. If we imagine the huge extent to which a posthuman, not to mention a strong AI, would be superior to current humans, isn't that a bit like comparing chickens to us?

A last possible objection I can think of: Suppose one held that group averages are what matter, and that all members of the human species deserve equal protection because of a high group average on a criterion that is considered relevant, even though that criterion, applied individually, would deny moral consideration to some sentient humans.

This defense doesn't work either. Aside from seeming suspiciously arbitrary, such a view would imply absurd conclusions. A thought experiment for illustration: A pig with a macro-mutation is born; she develops child-like intelligence and the ability to speak. Do we refuse to let her live unharmed -- or even go to school -- simply because she belongs to a group (defined presumably by snout shape, or DNA, or whatever the criteria for "pigness" are) with an average that is too low?

Or imagine you are the head of an architecture firm looking to hire a promising new architect. Is tossing out an application written by a brilliant woman going to increase the expected success of your firm, assuming that women are, on average, less skilled at spatial imagination than men? Surely not!

Moreover, taking group averages as our ethical criterion requires us to first define the relevant groups. Why even take species-groups instead of groups defined by skin color, weight or height? Why single out one property and not others? 

 

Summary

Our speciesism is an anthropocentric bias without any reasonable foundation. It would be completely arbitrary to give special consideration to a being simply because of its species membership. Doing so would lead to a number of implications that most people clearly reject. A strong case can be made that suffering is bad in virtue of being suffering, regardless of where it happens. If the suffering or deaths of nonhuman animals deserve no ethical consideration, then human beings with the same relevant properties (of which all plausible ones seem to come down to having similar levels of awareness) deserve no intrinsic ethical consideration either, barring speciesism. 

Assuming that we would feel uncomfortable giving justifications or criteria for our scope of ethical concern that can analogously be used to defend racism or sexism, those not willing to bite the bullet about torturing babies are forced by considerations of consistency to care about animal suffering just as much as they care about human suffering. 

Such a view leaves room for probabilistic discounting in cases where we are empirically uncertain whether beings are capable of suffering, but we should be on the lookout for biases in our assessments. 

Edit: As Carl Shulman has pointed out, discounting may also apply for "intensity of sentience", because it seems at least plausible that shrimps (for instance), if they are sentient, can experience less suffering than e.g. a whale. 

Comments (474)

Comment author: CarlShulman 28 July 2013 10:14:23PM *  28 points [-]

I agree that species membership as such is irrelevant, although it is in practice an extremely powerful summary piece of information about a creature's capabilities, psychology, relationship with moral agents, ability to contribute to society, responsiveness in productivity to expected future conditions, etc.

Animal happiness is good, and animal pain is bad. However, the word anti-speciesism, and some of your discussion, suggests treating experience as binary and ignoring quantitative differences, e.g. here:

Such a view leaves room for probabilistic discounting in cases where we are empirically uncertain whether beings are capable of suffering, but we should be on the lookout for biases in our assessments.

This leaves out the idea of the quantity of experience. In human split-brain patients the hemispheres can experience and act quite independently without common knowledge or communication. Unless you think that the quantity of happiness or suffering doubles when the corpus callosum is cut, then happiness and pain can occur in substructures of brains, not just whole brains. And if intensive communication and coordination were enough to diminish moral value why does this not apply to social groups like firms, herds, flocks, hives and the like?

Animals vary enormously in the number of neurons and substructures, including ones engaged in reinforcement learning responsive to pleasure and pain. For example, a fly's brain contains 100,000 neurons, whereas a human's contains about a million times as many. Here are brain masses for some animals:

  • Adult elephants at around 5000 g
  • Adult humans 1300-1400g
  • Chimpanzees are about 420 g, about a 3:1 ratio with humans, with the ratio for cortex neurons around 3:1 to 4:1
  • Cows are 425-458g, about a 3:1 ratio; if their cortex neuron counts resemble horses that would be closer to 8:1
  • Pigs are at 180 g, a ratio of 7.5:1
  • Domestic cats stand at 25-30 g, ~50:1 with the cortex ratio somewhat bigger
  • Pekin Duck at 6.3 g, 214:1 ratio
  • Owl brains are 2.2 g, around 600:1, and European quail at 0.9 g, about 1500:1
  • Goldfish have 0.097 g, just under a 14,000:1 ratio

Particularly for birds, fish, and insects one sees extremely large ratios. If, as is quite plausible in light of the decentralized operations of brains (stunningly demonstrated in split-brain patients, but also a routine feature of information processing in nervous systems), smaller subsystems can experience pleasure and pain, then animals with large nervous systems may be orders of magnitude more important than one would otherwise think. Importantly, this is not a consideration lowering the expected experience of animals with small nervous systems, but increasing the expected experience of animals with large nervous systems, so it does not need to be held with very high confidence to much affect behavior: "what if small neural systems suffer and delight?" is analogous to "what if snails suffer and delight?".

Would you say that making such adjustments is speciesist? For example, Wikipedia gives the world chicken population as 24 billion, mostly kept in horrible conditions, and 1.3 billion cows. If one ignores nervous system scale the welfare of the chickens dominates in importance, but if one thinks that quantity of experience scales then the aggregate welfare of the cows looms larger. Is it speciesist to prioritize cows over chickens or fish on this basis?
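The aggregate comparison in the last paragraph can be sketched as a back-of-envelope calculation. The populations are the ones given above; the cow brain mass uses roughly the midpoint of the range in the list, while the chicken brain mass (~4 g) is a hypothetical stand-in, since the list only gives a figure for ducks:

```python
# Rough sketch of the cow-vs-chicken aggregate comparison.
chickens, cows = 24e9, 1.3e9               # world populations (from the comment)
chicken_brain_g, cow_brain_g = 4.0, 440.0  # chicken figure is an assumption

# Weighting by headcount: chickens dominate by roughly 18:1.
headcount_ratio = chickens / cows

# Weighting by total brain mass: the cow aggregate is about 6x larger.
chicken_total = chickens * chicken_brain_g
cow_total = cows * cow_brain_g
mass_ratio = cow_total / chicken_total

print(f"headcount {headcount_ratio:.1f}:1, brain-mass {mass_ratio:.1f}:1")
# prints: headcount 18.5:1, brain-mass 6.0:1
```

Under headcount weighting the chickens dominate; under brain-mass weighting the comparison flips, which is exactly the point of the question.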

Comment author: Lukas_Gloor 28 July 2013 10:29:26PM 13 points [-]

I fully agree with this point you make, I should have mentioned this. I think "probabilistic discounting" should refer to both "probability of being sentient" and "intensity of experiences given sentient". I'm not convinced that (relative) brain size makes a difference in this regard, but I certainly wouldn't rule it out, so this indeed factors in probabilistically and I don't consider this to be speciesist.

Comment author: Xodarap 31 July 2013 12:13:34AM 1 point [-]

Note that by this measure, ants are six times more important than humans.

But to address your question: "speciesism" is not a label that's slapped on people who disagree with you. It's merely a shorthand way of saying "many people have a cognitive bias that humans are more 'special' than they actually are, and this bias prevents them from updating their beliefs in light of new evidence."

Brain-to-body quotient is one type of evidence we should consider, but it's not a great one. The encephalization quotient improves on it slightly by considering the non-linearity of body size, but there are many other metrics which are probably more relevant.

Comment author: CarlShulman 31 July 2013 01:34:54AM *  5 points [-]

Note that by this measure, ants are six times more important than humans.

You linked to a page comparing brain-to-body-weight ratios, rather than any absolute features of the brain, and referring not to ants in general but to unusually miniaturized ants in which the rest of the body is shrunken. That seems pretty irrelevant.

Brain-to-body quotient is one type of evidence we should consider, but it's not a great one.

I was using total brain mass and neuron count, not brain-to-body-mass.

but there are many other metrics which are probably more relevant.

I agree these are relevant evidence about quality of experience, and whether to attribute experience at all. But I would say that quality and quantity of experience are distinguishable (although the absence of experience implies quantity 0).

Comment author: Lumifer 31 July 2013 12:34:17AM 1 point [-]

It's merely a shorthand way of saying "many people have a cognitive bias that humans are more 'special' than they actually are

This statement implies that humans can be more or less special "actually", as if it were a matter of fact, of objective reality.

That is not true, however. Humans are special in the same way a roast is tasty or a host charming. It is entirely in the eye of the beholder, it's a subjective opinion and as such there is no "actually" about it.

Your point is equivalent to saying "many people have a cognitive bias that roses are more 'pretty' than they actually are".

Comment author: Xodarap 31 July 2013 11:46:36AM 2 points [-]

It is entirely in the eye of the beholder, it's a subjective opinion and as such there is no "actually" about it.

As mentioned in the original post, the same can be said of race: I may subjectively prefer white people.

You might bite the bullet here and say that yes, in fact, racism, sexism etc. is morally acceptable, but I think most people would agree that these __isms are wrong, and so speciesism must also be wrong.

Comment author: Lumifer 31 July 2013 03:21:51PM 7 points [-]

the same can be said of race: I may subjectively prefer white people.

Yes. That's perfectly fine. In fact, if you examine the revealed preferences (e.g. who people prefer to have as their neighbours or who do they prefer to marry) you will see that most people in reality do prefer others of their own race.

And, of course, the same can be said of sex, too. Unless you are an evenhanded bi, you're most certainly guilty of preferring some specific sex (or maybe gender, it varies).

You might bite the bullet here and say that yes, in fact, racism, sexism etc. is morally acceptable

"Morally acceptable" is a judgement, it is conditional on which morality you're using as your standard. Different moralities will produce different moral acceptability for the same actions.

Perhaps you wanted to say "socially acceptable"? In particular, "socially acceptable in contemporary US"? That, of course, is a very different thing.

I think most people would agree that these __isms are wrong, and so speciesism must also be wrong.

Sigh. This is a rationality forum, no? And you're using emotionally charged guilt-by-association arguments? (it's actually designed guilt-by-association since the word "speciesism" was explicitly coined to resemble "racism", etc.).

Warning: HERE BE MIND-KILLERS!

Comment author: davidpearce 31 July 2013 08:22:28PM 2 points [-]

Lumifer, should the charge of "mind-killers" be levelled at anti-speciesists or meat-eaters? (If you were being ironic, apologies for being so literal-minded.)

Comment author: NotInventedHere 31 July 2013 08:31:22PM 4 points [-]

I'm fairly sure it's for the examples referencing the politically charged issues of racism and sexism.

Comment author: wedrifid 02 August 2013 03:02:00AM 0 points [-]

Lumifer, should the charge of "mind-killers" be levelled at anti-speciesists or meat-eaters?

It can be levelled at most people who employ either of those terms.

Comment author: Xodarap 01 August 2013 01:46:10AM 0 points [-]

I apologize for presenting the argument in a way that's difficult to understand. Here are the facts:

  1. If you believe that subjective opinions which are not based on evidence are morally acceptable, then you must believe that sexism, racism, etc. are acceptable
  2. We* don't believe that sexism, racism, etc. are acceptable
  3. Therefore, we cannot accept arguments based on subjective opinions

Is there a better way to phrase this?

(* "We" here means the broader LW community. I realize that you disagree, but I didn't know that at the time of writing.)

Comment author: SaidAchmiz 01 August 2013 02:50:51AM *  3 points [-]

Y'got some... logical problems going on, there.

Firstly, your (1), while true, is misleading; it should read "If you believe that subjective opinions which are not based on evidence are morally acceptable, then you must believe that [long, LONG, probably literally infinite list of possible views, of which sexism and racism may be members but which contains innumerably more other stuff] are morally acceptable". Sure, accepting beliefs without evidence may lead us to sexism and/or racism, but that's hardly our biggest problem at that point.

Secondly, you presuppose that sexism and racism are necessarily not based on evidence. Of course, you may say that sexism and racism are by definition not based on evidence, because if there's evidence, then it's not sexist/racist, but that would be one of those "37 Ways That Bad Stuff Can Happen" or what have you; most people, after all, do not use your definition of "sexist" or "racist"; the common definition takes no notice of whether there's evidence or not.

Thirdly, for every modus ponens there is a modus tollens — and, as in this case, vice versa: we could decide that "subjective" opinions not based on evidence are morally acceptable (after all, we're not talking about empirical matters, right? These are moral positions). This, by your (1) and modus ponens, would lead us to accept sexism and racism. Intended? Or no?

Finally — and this is the big one — it strikes me as fundamentally backwards to start from broad moral positions, and reason from them to a decision about whether we need evidence for our moral positions.

Comment author: Jiro 01 August 2013 03:08:58AM *  3 points [-]

There's a bigger logical flaw: "belief that subjective opinions not based on evidence are acceptable" is an ambiguous English phrase. It can mean belief that:

1) if X is a subjective opinion, then X is acceptable.

2) there exists at least one X such that X is a subjective opinion and is acceptable

Needless to say, the argument depends on it being #1, while most people who would say such a thing would mean #2.

I believe that hairdryers are for sale at Wal-Mart. That doesn't mean that every hairdryer in existence is for sale at Wal-Mart.
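The scope ambiguity here can be made explicit with quantifiers; a minimal sketch (the names are illustrative, not from the thread):

```lean
variable (Opinion : Type) (Subjective Acceptable : Opinion → Prop)

-- Reading 1 (universal): every subjective opinion is acceptable.
def reading1 : Prop := ∀ x, Subjective x → Acceptable x

-- Reading 2 (existential): at least one subjective opinion is acceptable.
def reading2 : Prop := ∃ x, Subjective x ∧ Acceptable x
```

The argument needs reading 1, but the charitable interpretation of the English sentence is reading 2, and reading 2 does not imply reading 1.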

Comment author: SaidAchmiz 01 August 2013 03:19:58AM 2 points [-]

Yes, good point — the "some" vs. "all" distinction is being ignored.

Comment author: Vaniver 02 August 2013 03:54:09AM 2 points [-]

(* "We" here means the broader LW community. I realize that you disagree, but I didn't know that at the time of writing.)

This is not a core belief of the broader LW community. An actual core belief of the LW community:

That which can be destroyed by the truth should be.

Comment author: wedrifid 02 August 2013 05:20:09AM 0 points [-]

An actual core belief of the LW community:

That which can be destroyed by the truth should be.

I'm not sure that is quite true. It is controversial and many are not comfortable with it without caveats.

Comment author: solipsist 02 August 2013 12:42:44PM 1 point [-]

By the way, thank you for spelling out your position with a clear, valid argument that keeps the conversation moving forward. In the heat of argument we often forget to express our appreciation of well-posed comments.

Comment author: wedrifid 02 August 2013 04:47:49AM 1 point [-]

Here are the facts:

If you believe that subjective opinions which are not based on evidence are morally acceptable, then you must believe that sexism, racism, etc. are acceptable

This does not follow. (It can be repaired by adding an "all" to the antecedent, but then the conclusion in '3' would not follow from 1 and 2.)

Is there a better way to phrase this?

Basically, no. Your argument is irredeemably flawed.

Comment author: Lumifer 01 August 2013 03:48:54PM *  1 point [-]

Here are the facts

You keep using that word. I do not think it means what you think it means.

If you believe that subjective opinions which are not based on evidence are morally acceptable, then you must believe that sexism, racism, etc. are acceptable

That's curious. My and your ideas of morality are radically different. There's even not that much of a common base.

Let me start by re-expressing in my words how I read your position (so that you can fix my misinterpretations). First, you're using "morally acceptable" without any qualifiers or conditionals. This means that you believe there is One True Morality, the Correct One, on the basis of which we can and should judge actions and opinions. Given your emphasis on "evidence", you also seem to believe that this One True Morality is objective, that is, can be derived from actual reality and proven by facts.

Second, you divide subjective opinions into two classes: "not based on evidence" and, presumably, "based on evidence". Note that this is not at all the same thing as "falsifiable" vs. "non-falsifiable". For example, let's say I try two kinds of wine and declare that I like the second wine better. Is such a subjective opinion "based on evidence"?

You also have major logic problems here (starting with the all/some issue), but it's a mess and I think other comments have addressed it.

To contrast, I'll give a brief outline of how I view morality. I think of morality as a more or less coherent set of values at the core of which is a subset of moral axioms. These moral axioms are certainly not arbitrary -- many factors influence them, the three biggest ones are probably biology, societal/cultural influence, and individual upbringing and history -- but they are not falsifiable. You cannot prove them right or wrong.

Evidence certainly matters, but it matters mostly at the interface of moral values and actions: evidence tells you whether the actual outcomes of your actions match your intent and your values. It is, of course, often the case that they do not. However evidence cannot tell you what you should want or what you should value.

We* don't believe that sexism, racism, etc. are acceptable

Heh. I neither believe you have the power to speak for the entire LW community, nor do I care what you find morally acceptable or unacceptable.

Therefore, we cannot accept arguments based on subjective opinions

As has been noted, your logic is flawed. However, the bigger issue is your confusion between arguments and declarative statements (that e.g. reflect personal values). Arguments serve to persuade, to change someone's mind -- subjective opinions do not. If I say I hate tomatoes, that's not a reason for you to modify your attitude towards tomatoes, it's just an observation about myself. I am not sure what you mean by "accepting" it.

Comment author: wedrifid 02 August 2013 04:40:22AM 0 points [-]

You might bite the bullet here and say that yes, in fact, racism, sexism etc. is morally acceptable, but I think most people would agree that these __isms are wrong, and so speciesism must also be wrong.

This does not follow.

Comment author: Vaniver 02 August 2013 03:56:57AM 0 points [-]

Humans are special in the same way a roast is tasty or a host charming. It is entirely in the eye of the beholder, it's a subjective opinion and as such there is no "actually" about it.

The local explanation of this concept is the 2-place word, which I rather like.

Comment author: Armok_GoB 29 July 2013 09:47:51PM 5 points [-]

DISCLAIMER: the following is not necessarily my own opinions or beliefs, but rather done more in the spirit of steelmanning:

There seems to be a number of signs that the deciding factor might be the ability to form long term memories, especially if we go into very near mode.

  • It seems that if we extrapolate volition for an individual that is made to suffer, with or without memory blocking in various sequences, and allow it to choose tradeoffs, it'll repeatedly observe clicking a button labelled "suffer horrific torture with suppressed memory" followed by blacking out, and clicking a button labelled "suffer average torture with functioning memory" followed by being tortured. It'd thus learn to value experiences without memory much less.

  • If I remember correctly, some anaesthetics used for surgery basically paralyse you and disable memory formation, and this is not seen as an outrage or horrifying, even by those that have or will be experiencing it.

  • If we consider increasing the intelligence of various animals while directing them to become humanlike, then by empathic modelling it seems that those capable of forming long-term memories beforehand would identify with their former selves, get angry at people who had harmed them, empathize strongly with and prevent the suffering of beings similar to what they were before, etc., while for those that couldn't, the opposite would be true.

  • If I am given the choice to have one type of cognitive functionality disabled before being tortured, in almost all circumstances it seems the ability to form long term memories would be the best choice.

Comment author: Allison_Smith 31 July 2013 04:41:53AM 1 point [-]

some anaesthetics used for surgery basically paralyse you and disable memory formation

Without also functioning as pain control, or in addition to that role? In either case, I'd be interested to know which anaesthetics these are; it seems like there might be interesting literature on them. (For instance, I'm curious to know whether they are first-line choices, or just used when there is no viable alternative.)

Comment author: gwern 31 July 2013 09:28:44PM 4 points [-]
Comment author: Armok_GoB 31 July 2013 09:07:59PM 0 points [-]

I don't know, if you find out please tell me.

Comment author: DanArmak 28 July 2013 10:22:30PM *  5 points [-]

While I was writing this comment, CarlShulman posted his, which makes essentially the same point. But since I had already written a longer comment, I'm posting mine too. (Writing quickly is hard!)

In practice we must have a quantitative model of how much "moral value" to assign an animal (or human). I think your position that:

x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads.

is wrong, and the reasons for that fall out of your own arguments.

As you point out, there is a continuum between any two living things (common descent). Nevertheless we all think that at least some animals have zero, or nearly zero, moral weight: insects, perhaps, but you can go all the way to amoebas. You must either 1) assign gradually diminishing moral value to beings ranged from humans to amoebas; or 2) choose an arbitrary, precise point (or several points) at which the value decreases sharply, with modern species boundaries being an obvious Schelling point but not the only option. Similar arguments have of course been made about the continuum between a sperm and an egg, and an eventual human being.

Option 1 lets you assign non-human animals moral value. But then, you must specify the criteria you use to calculate that value, from your list A-G or otherwise. These same criteria will then tell you that some humans have less moral value than others: children, people with advanced dementia or other severe mental deficiencies, etc. Some biological humans may have much less value than, say, a chicken (babies), or none at all (fetuses). Also, at least some post-humans, aliens, and AIs would have far more moral value than any human - even to the point of becoming utility monsters for total utilitarians.

Option 2 is completely arbitrary in terms of what animals you value, so (among its other problems) people won't be able to agree about it. And if you don't determine moral value by measuring some underlying property, you won't be able to determine the value of radical new varieties, such as post-humans or AIs.

You seem to support option 2 (value everyone equally) but you don't say where you draw the line - and that's the crucial question.

My own position is option 1, open to modification against failure modes like utility monsters that would conflict too strongly with my other moral intuitions.
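A toy way to make the contrast between the two options concrete is to write each one as a weighting function over some measured capacity (an aggregate of the criteria A-G, say). This is only an illustrative sketch; the function names, the 0-to-1 capacity scale, and the cutoff value are invented for the example, not anything proposed in the thread:

```python
def weight_option1(capacity: float) -> float:
    """Option 1: moral weight varies smoothly with some measured capacity,
    normalized so that 1.0 corresponds to a typical adult human."""
    # Clamp to [0, 1]: amoebas sit near 0, adult humans at 1.
    return max(0.0, min(1.0, capacity))

def weight_option2(capacity: float, cutoff: float = 0.5) -> float:
    """Option 2: a sharp, arbitrarily placed line (a Schelling point such as
    a species boundary): full weight above the cutoff, none below."""
    return 1.0 if capacity >= cutoff else 0.0

# A being at 0.4 of adult-human capacity gets weight 0.4 under option 1,
# but weight 0 under option 2 with this particular (arbitrary) cutoff.
```

The disagreement in the thread maps onto which function shape to use and, for option 2, where the cutoff goes - which is exactly the part option 2 leaves arbitrary.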

The claim is that there is no way to block this conclusion without: 1. using reasoning that could analogically be used to justify racism or sexism or 2. using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies in cases where utilitarian calculations prohibit it.

My reasoning can't justify racism and sexism, because my moral criteria don't differ noticeably between sexes and races. This is an empirical fact. If it were true that e.g. some race was less sentient than other races, then that would be a valid reason to assign people of that race less moral value. But it's just not true.

I don't understand what you mean by (2), could you give an example? If a utilitarian calculation forbids you from doing something, then what could be your reason for doing it anyway? Your utility function can't be separate from your morals; on the contrary it must incorporate your morals. (Inconsistent morals are a problem, but without a single VNM-compliant utility function, utilitarianism can't tell you anything at all.)

Some other notes:

H: What I care about / feel sympathy or loyalty towards

I would like to note that this is the actual basis of almost all human moral reasoning, and all the rest is post-facto rationalization. When those rationalizations come in conflict with moral intuitions, they are labelled "repugnant conclusions". I think you dismiss this factor far too lightly.

those not willing to bite the bullet about torturing babies are forced by considerations of consistency to care about animal suffering just as much as they care about human suffering.

I am willing to bite the bullet about babies, quite easily in fact. I assign no more value to newborn human babies than I do to chickens. I only care about babies insofar as other humans care about babies.

I do care about animal suffering - in proportion to some of the measures A-G on your list, so less than human suffering, but (for many animals) more than human baby suffering.

I wouldn't mind treating babies like we treat some farm animals; that is not because I value those animals as highly as I do humans, but because I value both babies and humans much less than I do adult humans. (Some farming methods are acceptable to me, and some are not.)

A sentient being is one for whom "it feels like something to be that being".

Please play rationalist's taboo here. What empirical test or physical fact tells you whether "it feels like something" to be a certain animal? And moreover, quantitatively so - "how much" it feels like something to be that animal?

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past.

Baby-ism and racism have nothing in common (except that you're against both). I don't assign human-level moral status to babies, but I'm not a racist. This is precisely because humans of all races have roughly the same distribution on A-G (and other relevant parameters), whereas newborn babies would test below adults of most (all?) mammalian species.

Comment author: Lukas_Gloor 28 July 2013 10:44:56PM *  6 points [-]

x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads.

By this I meant literally the same amount (and intensity!) of suffering. So I agree with the point you and Carl Shulman make: if it is the case that some animals can only experience so much suffering, then it makes sense to value them accordingly.

You must either 1) assign gradually diminishing moral value to beings ranged from humans to amoebas; or 2) choose an arbitrary, precise point (or several points) at which the value decreases sharply, with modern species boundaries being an obvious Schelling point but not the only option.

I'm arguing for 1), but I would only do it by species in order to save time on calculations. If I had infinite computing power, I would do the calculation for each individual separately, according to indicators of what constitutes capacity for suffering and its intensity. Incidentally, I would also assign at least a 20% chance that brain size doesn't matter; some people in fact hold this view.

I don't understand what you mean by (2), could you give an example? If a utilitarian calculation forbids you from doing something, then what could be your reason for doing it anyway?

By "utilitarianism" I meant hedonistic utilitarianism in general, not your personal utility function that (in this scenario) differentiates between sapience and mere sentience. I added this qualifier because "you'd have to be okay with torturing babies" is not a reductio, since utilitarians would have to bite this bullet anyway if they could thereby prevent an even greater amount of suffering in the future.

Please play rationalist's taboo here. What empirical test or physical fact tells you whether "it feels like something" to be a certain animal? And moreover, quantitatively so - "how much" it feels like something to be that animal?

I only have my first-person evidence to go with. This bothers me a lot but I'm assuming that some day we will have solved all the problems in philosophy of mind and can then map out what we mean precisely by "sentience", having it correspond to specific implemented algorithms or brain states.

Baby-ism and racism have nothing in common (except that you're against both). I don't assign human-level moral status to babies, but I'm not a racist. This is precisely because humans of all races have roughly the same distribution on A-G (and other relevant parameters), whereas newborn babies would test below adults of most (all?) mammalian species.

I agree; those are simply the two premises on which the conclusion that we should value all suffering equally is based. You end up with a coherent position by rejecting one or both of them.

Comment author: katydee 28 July 2013 10:02:46PM 12 points [-]

I would prefer to see posts like this in the Discussion section.

Comment author: Lukas_Gloor 28 July 2013 10:04:43PM 3 points [-]

May I ask why?

Comment author: katydee 28 July 2013 10:35:24PM 5 points [-]

I think Main should be for posts that directly pertain to rationality. This post doesn't seem to do that.

That said, my standards for what belongs in main seem somewhat different from those of other users. For instance I think "The Robots, AI, and Unemployment Anti-FAQ" belongs in Discussion as well, and that post is not only in Main but promoted to boot.

Comment author: Lukas_Gloor 29 July 2013 03:24:10PM *  9 points [-]

Since grandparent received so many upvotes, I'm going to explain my reasoning for posting in Main:

Rules of thumb:

Your post discusses core Less Wrong topics.

The material in your post seems especially important or useful.

[...]

(At least one of) LW's primary goal(s) is to get people thinking about far future scenarios to improve the world. LW is about rationality, but it is also about ethics. Whether anti-speciesism is especially important or useful is something that people have different opinions on, but the question itself is clearly important because it may lead to different/adjusted prioritizing in practice.

Comment author: SaidAchmiz 28 July 2013 10:52:07PM 2 points [-]

Upvoted for the "directly pertain to rationality" rule of thumb; I agree with that. That said, I thought that the Anti-FAQ was appropriate for Main.

Comment author: Larks 29 July 2013 12:49:16PM 2 points [-]

The anti-FAQ was of much higher quality.

Comment author: Vaniver 28 July 2013 09:49:11PM *  12 points [-]

None of the above criteria except (in some empirical cases) H imply that human infants or late stage demented people should be given more ethical consideration than cows, pigs or chickens.

This strikes me as a very impatient assessment. The human infant will turn into a human, and the piglet will turn into a pig, and so down the road A through E will suggest treating them differently.

Similarly, the demented can be given the reverse treatment (though it works differently); they once deserved moral standing, and thus are extended moral standing because the extender can expect that when their time comes, they will be treated by society in about the same way as society treated its elders when they were young. (This mostly falls under B, except the reciprocation is not direct.)

(Looking at the comments, Manfred makes a similar argument more vividly over here.)

Comment author: davidpearce 29 July 2013 10:18:42AM 11 points [-]

Vaniver, do human infants and toddlers deserve moral consideration primarily on account of their potential to become rational adult humans? Or are they valuable in themselves? Young human children with genetic disorders are given love, care and respect - even if the nature of their illness means they will never live to see their third birthday. We don't hold their lack of "potential" against them. Likewise, pigs are never going to acquire generative syntax or do calculus. But their lack of cognitive sophistication doesn't make them any less sentient.

Comment author: Vaniver 29 July 2013 10:51:12AM 3 points [-]

Vaniver, do human infants and toddlers deserve moral consideration primarily on account of their potential to become rational adult humans? Or are they valuable in themselves?

My intuitions say the former. I would not be averse to a quick end for young human children who are not going to live to see their third birthday.

But their lack of cognitive sophistication doesn't make them any less sentient.

Agreed, mostly. (I think it might be meaningful to refer to syntax or math as 'senses' in the context of subjective experience and I suspect that abstract reasoning and subjective sensation of all emotions, including pain, are negatively correlated. The first weakly points towards valuing their experience less, but the second strongly points towards valuing their experience more.)

Comment author: davidpearce 29 July 2013 11:56:58AM 7 points [-]

Vaniver, you say that you wouldn't be averse to a quick end for young human children who are not going to live to see their third birthday. What about intellectually handicapped children with potentially normal lifespans whose cognitive capacities will never surpass a typical human toddler or mature pig?

Comment author: Vaniver 29 July 2013 08:37:28PM 1 point [-]

What about intellectually handicapped children with potentially normal lifespans whose cognitive capacities will never surpass a typical human toddler or mature pig?

I'm not sure what this would look like, actually. The first thing that comes to mind is Down's Syndrome, but the impression I get is that that's a much smaller reduction in cognitive capacity than the one you're describing. The last time I considered that issue, I favored abortion in the presence of a positive amniocentesis test for Down's, and I suspect that the more extreme the reduction, the easier it would be to lean in that direction.

I hope you don't mind that this answers a different question than the one you asked- I think there are significant (practical, if not also moral) differences between gamete selection, embryo selection, abortion, infanticide, and execution of adults (sorted from easiest to justify to most difficult to justify). I don't think execution of cognitively impaired adults would be justifiable in the presence of modern American economic constraints on grounds other than danger posed to others.

Comment author: Lukas_Gloor 28 July 2013 09:56:28PM *  10 points [-]

If we use cognitive enhancements on animals, we can turn them into highly intelligent, self-aware beings as well. And the argument from potentiality would also prohibit abortion or experimentation on embryos. I was thinking about including the argument from potentiality, but then I didn't because the post is already long and because I didn't want to make it look like I was just "knocking down a very weak argument or two". I should have used a qualifier though in the sentence you quoted, to leave room for things I hadn't considered.

Comment author: Vaniver 29 July 2013 01:59:39AM 7 points [-]

If we use cognitive enhancements on animals, we can turn them into highly intelligent, self-aware beings as well.

And then arguments A through E will not argue for treating the enhanced animals differently from humans.

And the argument from potentiality would also prohibit abortion or experimentation on embryos.

It would make the difference between abortion and infanticide small. It does seem to me that the arguments for allowing abortion but not allowing infanticide are weak and the most convincing one hinges on legal convenience.

I was thinking about including the argument from potentiality, but then I didn't because the post is already long and because I didn't want to make it look like I was just "knocking down a very weak argument or two".

I think this is a hazard for any "Arguments against X" post; the reason X is controversial is generally because there are many arguments on both sides, and an argument that seems strong to one person seems weak to another.

Comment author: threewestwinds 30 July 2013 01:29:37AM *  1 point [-]

What level of "potential" is required here? A human baby has a certain amount of potential to reach whatever threshold you're comparing it against - if it's fed, kept warm, not killed, etc. A pig also has a certain level of potential - if we tweak its genetics.

If we develop AI, then any given pile of sand has just as much potential to reach "human level" as an infant. I would be amused if improved engineering knowledge gave beaches moral weight (though not completely opposed to the idea).

Your proposed category - "can develop to contain morally relevant quantity X" - tends to fail on similar edge cases to whatever morally relevant quality it's replacing.

Comment author: Vaniver 30 July 2013 01:57:14AM 1 point [-]

What level of "potential" is required here? A human baby has a certain amount of potential to reach whatever threshold you're comparing it against - if it's fed, kept warm, not killed, etc. A pig also has a certain level of potential - if we tweak its genetics.

I have given a gradualist answer to every question related to this topic, and unsurprisingly I will not veer from that here. The value of the potential is proportional to the difficulty involved in realizing that potential, as the value of oil in the ground depends on what lies between you and it.

Comment author: DxE 29 July 2013 09:09:14PM 4 points [-]

My sperm has the potential to become human. When I realized almost all of them were dying because of my continued existence, I decided that I will have to kill myself. It was the only rational thing to do.

Comment author: Vaniver 29 July 2013 11:58:58PM 1 point [-]

My sperm has the potential to become human.

It seems to me there is a significant difference between requiring an oocyte to become a person and requiring sustenance to become a person. I think about half of zygotes survive the pregnancy process, but almost all sperm don't turn into people.

Comment author: Lukas_Gloor 30 July 2013 12:11:28AM 4 points [-]

Would this difference disappear if we developed the technology to turn millions of sperm cells into babies?

Comment author: dspeyer 05 August 2013 02:29:31AM 0 points [-]

Doesn't our current cloning technology allow us to turn any ordinary cell into a baby, albeit one with aging-related diseases?

Comment author: Xodarap 28 July 2013 10:02:00PM 2 points [-]

Is it possible to create some rule like this? Yeah, sure.

The problem is that you have to explain why that rule is valid.

If two babies are being tortured and one will die tomorrow but the other grows into an adult, your rule would claim that we should only stop one torture, and it's not clear why since their phenomenal pain is identical.

Comment author: Vaniver 29 July 2013 12:06:00AM 3 points [-]

The problem is that you have to explain why that rule is valid.

It comes from valuing future world trajectories, rather than just valuing the present. I see a small difference between killing a fetus before delivery and an infant after delivery, and the difference I see is roughly proportional to the amount of time between the two (and the probability that the fetus will survive to become the infant).

These sorts of gradual rules seem to me far more defensible than sharp gradations, because the sharpness in the rule rarely corresponds to a sharpness in reality.

Comment author: MugaSofer 29 July 2013 03:57:38PM 4 points [-]

What about a similar gradual rule for varying sentience levels of animal?

Comment author: Vaniver 29 July 2013 08:40:01PM 1 point [-]

What about a similar gradual rule for varying sentience levels of animal?

A quantitative measure of sentience seems much more reasonable than a binary measure. I'm not a biologist, though, and so don't have a good sense of how sharp the gradations of sentience in animals are; I would naively expect basically every level of sentience from 'doesn't have a central nervous system' to 'beyond humans' to be possible, but don't know if there are bands that aren't occupied for various practical reasons.

Comment author: OnTheOtherHandle 31 July 2013 08:27:44PM 0 points [-]

While sliding scales may more accurately represent reality, sharp gradations are the only way we can come up with a consistent policy. Abortion especially is a case where we need a bright line. The fact that we have two different words (abortion and infanticide) for what amounts to a difference of a couple of hours is very significant. We don't want to let absolutely everyone use their own discretion in difficult situations.

Most policy arguments are about where to draw the bright line, not about whether we should adopt a sliding scale instead, and I think that's actually a good idea. Admitting that most moral questions fall under a gray area is more likely to give your opponent ammunition to twist your moral views than it is to make your own judgment more accurate.

Comment author: DanArmak 28 July 2013 10:30:44PM 1 point [-]

Some people value the future-potential of things and even give them moral value in cases when the present-time precursor or cause clearly has no moral status of its own. This corresponds to many people's moral intuitions, and so they don't need to explain why this is valid.

Comment author: jkaufman 28 July 2013 08:30:16PM *  19 points [-]

Some might be willing to bite the bullet at this point, trusting some strongly held ethical principle of theirs (e.g. A, B, C, D, or E above), to the conclusion of excluding humans who lack certain cognitive capacities from moral concern. One could point out that people's empathy and indirect considerations about human rights, societal stability and so on, will ensure that this "loophole" in such an ethical view almost certainly remains without consequences for beings with human DNA. It is a convenient Schelling point after all to care about all humans (or at least all humans outside their mother's womb).

This is pretty much my view. You dismiss it as unacceptable and absurd, but I would be interested in more detail on why you think that.

a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it

This definitely hits the absurdity heuristic, but I think it is fine. The problem with the Babyeaters in Three Worlds Collide is not that they eat their young but that "the alien children, though their bodies were tiny, had full-sized brains. They could talk. They protested as they were eaten, in the flickering internal lights that the aliens used to communicate."

If I was told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not like I will be less afraid of the torture or care less about averting it!

I would. Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

(Speciesism has always seemed like a straw-man to me. How could someone with a reductionist worldview think that species classification matters morally? The "why species membership really is an absurd criterion" section is completely reasonable, reasonable enough that I have trouble seeing non-religious arguments against it.)

Comment author: Xodarap 28 July 2013 08:48:03PM *  5 points [-]

I wasn't able to glean this from your other article either, so I apologize if you've said it before: do you think non-human animals don't suffer? Or do you believe they suffer, but you just don't care about their suffering?

(And in either case, why?)

Comment author: jkaufman 28 July 2013 08:52:51PM *  3 points [-]

I think suffering is qualitatively different when it's accompanied by some combination I don't fully understand of intelligence, self awareness, preferences, etc. So yes, humans are not the only animals that can suffer, but they're the only animals whose suffering is morally relevant.

Comment author: davidpearce 28 July 2013 11:15:59PM *  17 points [-]

jkaufman, the dimmer-switch metaphor of consciousness is intuitively appealing. But consider some of the most intense experiences that humans can undergo, e.g. orgasm, raw agony, or blind panic. Such intense experiences are characterised by a breakdown of any capacity for abstract rational thought or reflective self-awareness. Neuroscanning evidence, too, suggests that much of our higher brain function effectively shuts down during the experience of panic or orgasm. Contrast this intensity of feeling with the subtle and rarefied phenomenology involved in e.g. language production, solving mathematical equations, introspecting one's thoughts-episodes, etc - all those cognitive capacities that make mature members of our species distinctively human. For sure, this evidence is suggestive, not conclusive. But the supportive evidence converges with e.g. microelectrode studies using awake human subjects. Such studies suggest the limbic brain structures that generate our most intense experiences are evolutionarily very ancient. Also, the same genes, same neurotransmitter pathways and same responses to noxious stimuli are found in our fellow vertebrates. In view of how humans treat nonhumans, I think we ought to be worried that humans could be catastrophically mistaken about nonhuman animal sentience.

Comment author: Lukas_Gloor 28 July 2013 09:12:15PM 14 points [-]

How certain are you that there is such a qualitative difference, and that you want to care about it? If there is some empirical (or perhaps also normative) uncertainty, shouldn't you at least attribute some amount of concern for sentient beings that lack self-awareness?

Comment author: thebestwecan 11 June 2014 12:54:38AM 0 points [-]

I second this. Really not sure what justifies such confidence.

Comment author: Xodarap 28 July 2013 09:36:36PM 1 point [-]

It strikes me that the only "disagreement" you have with the OP is that your reasoning isn't completely spelled out.

If you said, for example, "I don't believe pigs' suffering matters as much because they don't show long-term behavior modifications as a result of painful stimuli" that wouldn't be a speciesist remark. (It might be factually wrong, though.)

Comment author: Emile 30 July 2013 09:10:22PM 0 points [-]

So yes, humans are not the only animals that can suffer, but they're the only animals whose suffering.

There's missing something at the end, like "... is morally relevant", right?

Comment author: jkaufman 31 July 2013 02:14:51AM 0 points [-]

Fixed; thanks!

Comment author: Lukas_Gloor 28 July 2013 08:50:09PM *  12 points [-]

Your view seems consistent. All I can say is that I don't understand why intelligence is relevant for whether you care about suffering. (I'm assuming that you think human infants can suffer, or at least don't rule it out completely, otherwise we would only have an empirical disagreement.)

I would. Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

Me too. But we can control for memories by comparing the scenario I outlined with a scenario where you are first tortured (in your normal mental state) and then have the memory erased.

Speciesism has always seemed like a straw-man to me. How could someone with a reductionist worldview think that species classification matters morally?

You're right, it's not a big deal once you point it out. The interesting thing is that even a lot of secular people will at first (and sometimes even afterwards) bring arguments against the view that animals matter that don't stand the test of the argument of species overlap. It seems like they simply aren't thinking through all the implications of what they are saying, as if it isn't their true rejection. Having said that, there is always the option of biting the bullet, but many people who argue against caring about nonhumans don't actually want to do that.

Comment author: jkaufman 28 July 2013 09:12:04PM 5 points [-]

I'm assuming that you think human infants can suffer

I definitely think human infants can suffer, but I think their suffering is different from that of adult humans in an important way. See my response to Xodarap.

Comment author: atucker 29 July 2013 03:41:50AM 3 points [-]

All I can say is that I don't understand why intelligence is relevant for whether you care about suffering.

Intelligence is relevant for the extent to which I expect alleviating suffering to have secondary positive effects. Since I expect most of the value of suffering alleviation to come through secondary effects on the far future, I care much more about human suffering than animal suffering.

As far as I can tell, animal suffering and human suffering are comparably important from a utility-function standpoint, but the difference in EV between alleviating human and animal suffering is huge -- the difference in potential impact on the future between a suffering human vs a non-suffering human is massive compared to that between a suffering animal and a non-suffering animal.

Basically, it seems like alleviating one human's suffering has more potential to help the far future than alleviating one animal's suffering. A human who might be incapacitated to, say, deal with x-risk might become helpful, while an animal is still not going to be consequential on that front.

So my opinion winds up being something like "We should help the animals, but not now, or even soon, because other issues are more important and more pressing".

Comment author: threewestwinds 30 July 2013 01:51:04AM 1 point [-]

I agree with this point entirely - but at the same time, becoming vegetarian is such a cheap change in lifestyle (given an industrialized society) that you can have your cake and eat it too. Action - such as devoting time / money to animal rights groups - has to be balanced against other action - helping humans - but that doesn't apply very strongly to inaction - not eating meat.

You can come up with costs - social, personal, etc. to being vegetarian - but remember to weigh those costs on the right scale. And most of those costs disappear if you merely reduce meat consumption, rather than eliminate it outright.

Comment author: Jiro 30 July 2013 03:17:36AM *  1 point [-]

You can come up with costs - social, personal, etc. to being vegetarian - but remember to weigh those costs on the right scale.

By saying this, you're trying to gloss over the very reason why becoming vegetarian is not a cheap change. Human beings are wired so as not to be able to ignore having to make many minor decisions or face many minor changes, and the fact that such things cannot be ignored means that being vegetarian actually has a high cost which involves being mentally nickel-and-dimed over and over again. It's a cheap change in the sense that you can do it without paying lots of money or spending lots of time, but that isn't sufficient to make the choice cheap in all meaningful senses.

Or to put it another way, being a vegetarian "just to try it" is like running a shareware program that pops up a nag screen every five minutes and occasionally forces you to type a random phrase in order to continue to run. Sure, it's light on your pocketbook, doesn't take much time, and reading the nag screens and typing the phrases isn't difficult, but that's beside the point.

Comment author: threewestwinds 31 July 2013 07:05:03PM 0 points [-]

As has been mentioned elsewhere in this conversation, that's a fully general argument - it can be applied to every change one might possibly make in one's behavior.

Let's enumerate the costs, rather than just saying "there are costs."

  • Money wise, you save or break even.
  • It has no time cost in much of the US (most restaurants have vegetarian options).
  • The social cost depends on your situation - if you have people who cook for you, then you have to explain the change to them (in Washington state, this cost is tiny - people are understanding. In Texas, it is expensive).
  • The mental cost is difficult to discuss in a universal way. I found it to be rather small in my own case; other people claim it to be quite large. But "I don't want to change my behavior because changing behavior is hard" is not terribly convincing.

Your discounting of non-human life has to be rather extreme for "I will have to remind myself to change my behavior" to outweigh an immediate, direct, and calculable reduction in world suffering.

Comment author: SaidAchmiz 31 July 2013 11:44:01PM 2 points [-]

Money wise, you save or break even.

This is false. Unless you eat steak or other expensive meats on a regular basis, meat is quite cheap. For example, my meat consumption is mostly chicken, assorted processed meats (salamis, frankfurters, and other sorts of sausages, mainly, but also things like pelmeni), fish (not the expensive kind), and the occasional pork (canned) and beef (cheap cuts). None of these things are pricy; I am getting a lot of protein (and fat and other good/necessary stuff) for my money.

It has no time cost in much of the US (most restaurants have vegetarian options).

Do you eat at restaurants all the time? Learning how to cook the new things you're now eating instead of meat is a time cost.

Also, there are costs you don't mention: for instance, a sudden, radical change in diet may have unforeseen health consequences. If the transition causes me to feel hungry all the time, that would be disastrous; hunger has an extreme negative effect on my mental performance, and as a software engineer, that is not the slightest bit acceptable. Furthermore, for someone with food allergies, like me, trying new foods is not without risk.

Comment author: Jiro 31 July 2013 10:00:06PM *  2 points [-]

it can be applied to every change one might possibly make in one's behavior.

And it would be correct to deny that a change that would possibly be made to one's behavior is "such a cheap change" that we don't need to weigh the cost of the change very much.

Your discounting of non-human life has to be rather extreme for "I will have to remind myself to change my behavior" to outweigh an immediate, direct, and calculable reduction in world suffering.

That only applies to someone who already agrees with you about animal suffering to a sufficient degree that he should just become a vegetarian immediately anyway. Otherwise it's not all that calculable.

Comment author: Jabberslythe 28 July 2013 08:38:54PM 4 points [-]

I would. Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

Is this because you expect the torture wouldn't be as bad if that happened or because you would care less about yourself in that state? Or a combination?

Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

What if you were killed immediately afterwards, so long term memories wouldn't come into play?

Comment author: jkaufman 28 July 2013 08:48:19PM *  2 points [-]

Is this because you expect the torture wouldn't be as bad if that happened or because you would care less about yourself in that state? Or a combination?

If I had the mental capacity of a chicken it would not be bad to torture me, both because I wouldn't matter morally and because I wouldn't be "me" anymore in any meaningful sense.

What if you were killed immediately afterwards

If you offered me the choice between:

A) 50% chance you are tortured and then released, 50% chance you are killed immediately

B) 50% chance you are tortured and then killed, 50% chance you are released immediately

I would strongly prefer B. Is that what you're asking?

Comment author: Estarlio 29 July 2013 12:14:03AM 2 points [-]

How do you avoid it being kosher to kill you when you're asleep - and thus unable to perform at your usual level of consciousness - if you don't endorse some version of the potential principle?

If you were to sleep and never wake, then it wouldn't necessarily seem wrong, even from my perspective, to kill you. It seems like it's your potential for waking up that makes it wrong.

Comment author: jkaufman 29 July 2013 02:24:08AM *  5 points [-]

Killing me when I'm asleep is wrong for the same reason as killing me instantly and painlessly when I'm awake is wrong. Both ways I don't get to continue living this life that I enjoy.

(I'm not as anti-death as some people here.)

Comment author: shminux 28 July 2013 08:28:20PM *  12 points [-]

A generic problem with this type of reasoning is some form of the repugnant conclusion. If you don't put a Schelling fence somewhere, you end up giving more moral weight to a large enough number of cockroaches, bacteria, or viruses than to humans.

In actuality, different groups of people implicitly have different Schelling points and then argue whose Schelling point is morally right. A standard Schelling point, say, 100 years ago, was all humans or some subset of humans. The situation has gotten more complicated recently, with some including only humans, humans and cute baby seals, humans and dolphins, humans and pets, or just pets without humans, etc.

So a consequentialist question would be something like

Where does it make sense to put a boundary between caring and not caring, under what circumstances and for how long?

Note this is no longer a Schelling point, since no implicit agreement of any kind is assumed. Instead, one tests possible choices against some terminal goals, leaving morality aside.

Comment author: Ruairi 28 July 2013 08:45:16PM *  9 points [-]

I feel like you're saying this:

"There are a great many sentient organisms, so we should discriminate against some of them"

Is this what you're saying?

EDIT: Sorry, I don't mean that bacteria or viruses are sentient. Still, my original question stands.

Comment author: shminux 28 July 2013 09:31:20PM 2 points [-]

All I am saying is that one has to draw an arbitrary care/don't care boundary somewhere, and "human/non-human" is a rather common and easily determined Schelling point in most cases. It fails in some, like the intelligent pig example from the OP, but then every boundary fails on some example.

Comment author: Ruairi 28 July 2013 10:23:11PM 5 points [-]

Where does sentience fail as a boundary?

Comment author: RomeoStevens 29 July 2013 09:33:40AM 2 points [-]

if sentience isn't a boolean condition.

Comment author: Xodarap 28 July 2013 08:42:26PM *  5 points [-]

you end up giving more moral weight to a large enough number of cockroaches, bacteria, or viruses than to humans.

Why do you say that? Bacteria, viruses etc. seem to lack not just one, but all of the capacities A-H the OP mentioned.

Comment author: SaidAchmiz 28 July 2013 09:07:58PM 1 point [-]

A generic problem with this type of reasoning is some form of the repugnant conclusion. If you don't put a Schelling fence somewhere, you end up giving more moral weight to a large enough number of cockroaches, bacteria, or viruses than to humans.

Indeed. I've alluded to this before as "how many chickens would I kill/torture to save my grandmother?" The answer, of course, is N, where N may be any number.

This means that, if we start with basic (total) utilitarianism, we have to throw out at least one of the following:

  1. Additive aggregation of value.
  2. Valuing my grandmother a finite amount (as opposed to an infinite amount).
  3. Valuing a chicken a nonzero amount.

Throwing out #2 leads to incorrect results (it is not the case that I value my grandmother more than anything else — sorry, grandma). Throwing out #1 is possible, and I have serious skepticism about that one anyway... but it also leads to problems (don't I think that killing or torturing two people is worse than killing or torturing one person? I sure do!).

Throwing out #3 seems unproblematic.

Comment author: Vaniver 28 July 2013 09:58:42PM *  6 points [-]

The answer, of course, is N, where N may be any number. ... Throwing out #3 seems unproblematic.

Relatedly, you could choose to throw out your ability to assess N. When you say N could be any number, that looks to me like scope neglect. I don't have a good sense of what a billion chickens is like, or what a one-in-a-billion chance of dying looks like, and so I don't expect my intuitions to give good answers in that region. If you ask the question as "how many chickens would I kill/torture to extend my grandmother's life by one second?", then if you do actually value chickens at zero then the answer will again be N, but that seems much less intuitive.

So it looks like an answer to the 'save' question that avoids the incorrect results is something like "I don't know how many, but I'm pretty sure it's more than a million."

Comment author: shminux 28 July 2013 09:14:30PM 4 points [-]

Throwing out #3 seems unproblematic.

It is problematic once you start fine-graining, exactly like in the dust specks/torture debate, where killing a chicken ~ dust speck and killing your grandma ~ torture. There is almost certainly an unbroken chain of comparables between the two extremes.

Comment author: Xodarap 28 July 2013 09:39:26PM *  1 point [-]

The problem with throwing out #3 is you also have to throw out:

(4) How we value a being's moral worth is a function of their abilities (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)

Which is a rather nice proposition.

Edit: As Said points out, this should be:

(4) How we value a being's pain is a function of their ability to feel pain (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)

Comment author: Pablo_Stafforini 28 July 2013 06:51:59PM *  12 points [-]

A fine piece. I hope it triggers a high-quality, non-mindkilled debate about these important issues. Discussion about the ethical status of non-human animals has generally been quite heated in the past, though happily this trend seems to have reversed recently (see posts by Peter Hurford and Jeff Kaufman).

Comment author: Ruairi 28 July 2013 08:36:13PM *  14 points [-]

"If all nonhumans truly weren't sentient, then obviously singling out humans for the sphere of moral concern would not be speciesist."

David Pearce sums up antispeciesism excellently saying:

"The antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect."

Comment author: CarlShulman 29 July 2013 10:33:29AM 7 points [-]

sums up antispeciesism excellently saying: "The antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect."

If one takes "other things being equal" very seriously that could be quite vacuous, since there are so many differences in other areas, e.g. impact on society and flow-through effects, responsiveness of behavior to expected treatment, reciprocity, past agreements, social connectedness, preferences, objective list welfare, even species itself...

The substance of the claim has to be about exactly which things need to be held equal, and which can freely vary without affecting desert.

Comment author: Larks 31 July 2013 12:25:22PM 4 points [-]

"The antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect."

Any speciesist is happy to agree with that. She simply thinks that species is one of the things that has to be equal.

Comment author: davidpearce 31 July 2013 01:21:46PM *  2 points [-]

Larks, all humans, even anencephalic babies, are more sentient than all Anopheles mosquitoes. So when human interests conflict irreconcilably with the interests of Anopheles mosquitoes, there is no need to conduct a careful case-by-case study of their comparative sentience. Simply identifying species membership alone is enough. By contrast, most pigs are more sentient than some humans. Unlike the antispeciesist, the speciesist claims that the interests of the human take precedence over the interests of the pig simply in virtue of species membership. (cf. http://www.dailymail.co.uk/news/article-2226647/Nickolas-Coke-Boy-born-brain-dies-3-year-miracle-life.html : heart-warming, yes, but irrational altruism - by antispeciesist criteria, at any rate.) I try to say a bit more (without citing the Daily Mail) here: http://ieet.org/index.php/IEET/more/pearce20130726

Comment author: Larks 31 July 2013 03:34:55PM 1 point [-]

I don't see how this is relevant to my argument. I'm just pointing out that your definition doesn't track the concept you (probably) have in mind; I wasn't saying anything empirical* at all.

*other than about the topology of concept-space.

Comment author: davidpearce 31 July 2013 04:21:42PM 2 points [-]

Larks, by analogy, could a racist acknowledge that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect, but race is one of the things that has to be equal? If you think the "other things being equal" caveat dilutes the definition of speciesism so it's worthless, perhaps drop it - I was just trying to spike some guns.

Comment author: Larks 01 August 2013 11:52:19AM 0 points [-]

If we drop the caveat, anti-speciesism is obviously false. For example, moral, successful people deserve more respect than immoral unsuccessful people, even if both are of equal sentience.

Comment author: RichardKennaway 01 August 2013 12:40:59PM 2 points [-]

If we drop the caveat, anti-speciesism is obviously false. For example, moral, successful people deserve more respect than immoral unsuccessful people, even if both are of equal sentience.

There are plenty of people who would disagree with that. But what do you mean by "respect", and on what grounds do you give it or withhold it?

Comment author: RichardKennaway 31 July 2013 01:08:30PM 1 point [-]

"The antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect."

Surely the antispeciesist claims that nothing else needs to be equal?

Comment author: SaidAchmiz 31 July 2013 12:58:19PM 1 point [-]

By the way... what the heck is "equivalent sentience", exactly?

Comment author: Kawoomba 28 July 2013 10:21:02PM 3 points [-]

However, such factors can't apply for ethical reasoning at a theoretical/normative level, where all the relevant variables are looked at in isolation in order to come up with a consistent ethical framework that covers all possible cases.

Why should there be a "correct" solution for ethical reasoning? Is there a normative level regarding which color is the best? People function based on heuristics, which are calibrated on general cases, not on marginal cases. While I'm all for showing inconsistencies in one's statements, there is no inconsistency in saying "as a general rule, I value X, but in these cases, I value Y, which is different from X".

Why the impetus towards some one-size-fit-all solution? And more importantly, why disallow that marginal cases get special "if-clauses"?

Imagine forcing a programmer to treat all incoming data with the exact same rule. It would be a disaster. Adding a "as a general rule" solves the inconsistencies, and it's not cheating, and it's not something in need of fixing.

Comment author: DanArmak 28 July 2013 10:44:58PM 2 points [-]

If you want your choices to be consistent over time, you still need a meta-rule for choosing and modifying your rules. How do you know what exceptions to make?

Personally, I don't think my choices (as a human) can be consistent in this sense, and I'm pretty resigned to following my inconsistent moral intuitions. Others disagree with me on this.

Comment author: MugaSofer 29 July 2013 03:51:14PM 0 points [-]

Why should there be a "correct" solution for ethical reasoning? Is there a normative level regarding which color is the best?

Well, obviously this wouldn't hold for, say, paperclippers ... but while I suspect you may disagree, most people seem to think human ethics are not mutually contradictory and are, in fact, part of the psychological unity of humankind (most include caveats for psychopaths, political enemies, and those possessed by demons.)

Imagine forcing a programmer to treat all incoming data with the exact same rule.

Such a (highly complex) rule is known as a "program".

Comment author: wedrifid 29 July 2013 04:00:11PM 1 point [-]

but while I suspect you may disagree, most people seem to think human ethics are not mutually contradictory and are, in fact, part of the psychological unity of humankind (most include caveats for psychopaths, political enemies, and those possessed by demons.)

As a bonus, the exception class of "enemies" and "immoral monsters" tends to be contrived to include anyone who has a sufficient degree of difference in ethical preferences. All True humans are ethically united...

Comment author: MugaSofer 31 July 2013 02:51:22PM 0 points [-]

I'm torn between grinning at how marvelously well-contrived it is on evolution's part and frustrated that, y'know, I have to live here, and I keep stepping in the mindkill.

Of course, I'll note they're usually wrong. Except about some of the psychopaths, I suppose, though even they seem to contain bits of it if I understand correctly.

Comment author: Lumifer 29 July 2013 08:14:27PM 10 points [-]

We can imagine a continuous line-up of ancestors, always daughter and mother, from modern humans back to the common ancestor of humans and, say, cows, and then forward in time again to modern cows. How would we then divide this line up into distinct species? Morally significant lines would have to be drawn between mother and daughter, but that seems absurd!

That's a common fallacy. Let me illustrate:

The notions of hot and cold water are nonsensical. The water temperature is continuous from 0C to 100C. How would you divide this into distinct areas? You would have to draw a line between neighboring values different by tiny fractions of a degree, but that seems absurd!

Comment author: drnickbone 30 July 2013 06:04:23PM 3 points [-]

For a morally relevant example, it is quite absurd to suppose that humans aged 18 years and 0 days are mature enough to vote, whereas humans aged 17 years and 364 days are not mature enough. So voting ages are morally unacceptable?

Ditto: ages for drinking alcohol, sexual consent, marriage, joining the armed services etc.

Comment author: Carinthium 04 August 2013 06:44:06AM 1 point [-]

Actually, there is a case to say that they are. Discrimination by category membership, instead of on a spectrum, means that candidates with more merit are passed over in favor of ones with less merit - which is particularly problematic in the case of species. The right of a person to be judged on their merits would, if asked about in the abstract, be widely accepted.

The only counter-case I can think of is to say that society simply does not have the resources to discriminate (since discrimination is what it is) more precisely. However, even this does not entirely work out, as within limits society could easily improve its classification methods to better allow for unusual cases.

Comment author: Lukas_Gloor 30 July 2013 02:59:39AM *  3 points [-]

I'm not the one arguing for dividing this up into distinct areas; my whole point was to look at just the relevant criteria and nothing else. If the relevant criterion is temperature, you get a gradual scale, as in your example. If it is sentience, you have to look at each individual animal separately and ignore species boundaries.

Comment author: dspeyer 05 August 2013 02:14:39AM 0 points [-]

The usual solution involving water temperature is to have levels of suitability.

I want to shower in hot water, not cold water. Absurd? Not really. Just simplified. In fact, the joy I will gain from a shower is a continuous function of water temperature with a peak somewhere near 45C. The first formulation just approximated this with a piecewise line function for convenience.

Carrying the analogy back, we can propose that the moral weight of suffering is proportional to the sentience of the sufferer. Estimating degrees of sentience now becomes important. ISTR that research review boards have stricter standards for primates than rodents, and rodents than insects, so apparently this isn't a completely strange idea.

Comment author: Qiaochu_Yuan 29 July 2013 12:01:06AM *  13 points [-]

I strongly object to the term "speciesism" for this position. I think it promotes a mindkilled attitude to this subject ("Oh, you don't want to be speciesist, do you? Are you also a sexist? You pig?").

Comment author: Zvi 29 July 2013 12:59:42PM 6 points [-]

It's not only the term. The post explicitly uses that exact argument: Since sexism and racism are wrong, and any theoretical argument that disagrees with me can be used to argue for sexism or racism, if you disagree with me you are a sexist, which is QED both because of course you aren't sexist/racist and because regardless, even if you are, you certainly can't say such a thing on a public forum!

Comment author: Lukas_Gloor 29 July 2013 02:03:59PM 5 points [-]

No no no. I'm not saying "since sexism and racism are wrong" - I'm saying that those who don't want their arguments to be of a sort that could analogously justify racism or sexism (even if they are neither racist nor sexist) would also need to reject speciesism.

Comment author: Zvi 29 July 2013 02:32:58PM 1 point [-]

Mindkilling-related issues aside, I am going to do my best to un-mindkill at least one aspect of this question, which is why the frame change.

Is this similar to arguing that if the bloody knife was the subject of an illegal search, which we can't allow because allowing that would lead to other bad things, and therefore is not admissible at trial, then you must not only find the defendant not guilty but actually believe that the defendant did not commit the crime and should be welcomed back to polite society?

Comment author: Lukas_Gloor 29 July 2013 02:56:47PM *  2 points [-]

No, what makes the difference is that you'd be mixing up the normative level with the empirical one, as I explained here (parent of the linked post also relevant).

Comment author: Zvi 29 July 2013 03:22:37PM 1 point [-]

In that post, you seem to be making the opposite case: That you should not reject X (animal testing) simply because your argument could be used to support repugnant proposal Y (unwilling human testing); you say that the indirect consequences of Y would be very bad (as they obviously would) but then you don't make the argument that one must then reject X, instead that you should support X but reject Y for unrelated reasons, and you are not required to disregard argument Q that supports both X and Y and thereby reject X (assuming X was in fact utility increasing).

Or, that the fact that a given argument can be used to support a repugnant conclusion (sexism or racism) should not be a justification for not using an argument. In addition, the argument for brain complexity scaling moral value that you now accept as an edit is obviously usable to support sexism and racism, in exactly the same way that you are using as a counterargument:

For any given characteristic, different people will have different amounts of that characteristic, and for any two groups (male / female, black / white, young / old, whatever) there will be a statistical difference in that measurement (because this isn't physics and exact equality has probability epsilon, however small the difference). So if you tie any continuous measurement to your moral value of things, or any measurement that could ever not fully apply to anything human, you're racist and sexist.

Comment author: Lukas_Gloor 29 July 2013 12:07:33AM 12 points [-]

You pig?

Speciesist language, not cool!

Haha! Anyway, I agree that it promotes a mindkilled attitude (I often read terrible arguments by animal rights people), but on the other hand, for those who agree with the arguments, it is a good way to raise awareness. And the parallels to racism or sexism are valid, I think.

Comment author: Zvi 29 July 2013 01:07:49PM 11 points [-]

Haha only serious. My brain reacts with terror to that reply, with good reason: it has been trained to. You're implicitly threatening those who make counter-arguments with charges of every ism in the book. The number of things I've had to erase because one "can't" say them without ending any productive debate is large.

Comment author: Vaniver 29 July 2013 01:20:19AM 10 points [-]

Haha! Anyway, I agree that it promotes mindkilled attitude (I'm often reading terrible arguments by animal rights people), but on the other hand, for those who agree with the arguments, it is a good way to raise awareness.

I don't think that's a "but on the other hand;" I think that's a "it is a good way to raise awareness because it promotes mindkilled attitude."

Comment author: SaidAchmiz 29 July 2013 02:30:13AM 2 points [-]

Actually, I think it's precisely the parallels to racism and sexism that are invalid. Perhaps ableism? That's closer, at any rate, if still not really the same thing.

Comment author: Xodarap 30 July 2013 12:23:18PM 2 points [-]

It's not sexist to say that women are more likely to get breast cancer. This is a differentiation based on sex, but it's empirically founded, so not sexist.

Similarly, we could say that ants' behavior doesn't appear to be affected by narcotics, so we should discount the possibility of their suffering. This is a judgement based on species, but is empirically founded, so not speciesist.

Things only become _ist if you say "I have no evidence to support my view, but consider X to be less worthy solely because they aren't in my race/class/sex/species."

I genuinely don't think anyone on LW thinks speciesism is OK.

Comment author: SaidAchmiz 30 July 2013 01:14:05PM 6 points [-]

You evade the issue, I think. Is it sexist (or _ist) if you say "I consider X to be less worthy because they aren't in my race/class/sex/species, and I do have evidence to support my view"?

Sure, saying women are more likely to get breast cancer isn't sexist; but this is a safe example. What if we had hard evidence that women are less intelligent? Would it be sexist to say that, then? (Any objection that contains the words "on average" must contend with the fact that any particular woman may have a breast cancer risk that falls anywhere on the distribution, which may well be below the male average.)

No one is saying "I think pigs are less worthy than humans, and this view is based on no empirical data whatever; heck, I've never even seen a pig. Is that something you eat?"

We have tons of empirical data about differences between the species. The argument is about exactly which of the differences matter, and that is unlikely to be settled by passing the buck to empiricism.

Comment author: [deleted] 31 July 2013 10:25:36AM 3 points [-]

Sure, saying women are more likely to get breast cancer isn't sexist; but this is a safe example. What if we had hard evidence that women are less intelligent? Would it be sexist to say that, then? (Any objection that contains the words "on average" must contend with the fact that any particular women may have a breast cancer risk that falls anywhere on the distribution, which may well be below the male average.)

I wouldn't say it is, but other people would use the word "sexist" with a broader sense than mine (assuming that each person defines "sexism" and "racism" in analogous ways).

Comment author: MugaSofer 31 July 2013 03:01:57PM 0 points [-]

"I think pigs are less worthy than humans, and this view is based on no empirical data whatever; heck, I've never even seen a pig. Is that something you eat?"

Upvoted just for this.

Comment author: Xodarap 30 July 2013 11:22:57PM *  0 points [-]

Is it sexist (or _ist) if you say "I consider X to be less worthy because they aren't in my race/class/sex/species, and I do have evidence to support my view"?

No. Because your statement "X is less worthy because they aren't of my gender" in that case is synonymous with "X is less worthy because they lack attribute Y", and so gender has left the picture. Hence it can't be sexist.

Comment author: SaidAchmiz 30 July 2013 11:38:56PM 3 points [-]

Ok, but if you construe it that way, then "X is less worthy just because of their gender" is a complete strawman. No one says that. What people instead say is "people of type T are inferior in way W, and since X is a T, s/he is inferior in way W".

Examples: "women are less rational than men, which is why they are inferior, not 'just' because they're women"; "black people are less intelligent than white people, which is why they are inferior, not 'just' ..."; etc.

By your construal, are these things not sexist/racist? But then neither is this speciesist: "nonhumans are not self-aware, unlike humans, which is why they are inferior, not 'just' because they're nonhumans".

Comment author: Xodarap 30 July 2013 11:48:34PM 1 point [-]

I think we are getting into a discussion about definitions, which I'm sure you would agree is not very productive.

But I would absolutely agree that your statement "nonhumans are not self-aware, unlike humans, which is why they are inferior, not 'just' because they're nonhumans" is not speciesist. (It is empirically unlikely though.)

Comment author: SaidAchmiz 30 July 2013 11:56:53PM 0 points [-]

Agreed entirely, let's not argue about definitions.

Do we disagree on questions of fact? On rereading this thread, I suspect not. Your thoughts?

Comment author: Lumifer 30 July 2013 06:23:47PM 5 points [-]

I genuinely don't think anyone on LW thinks speciesism is OK.

Ah, the slaying of a beautiful hypothesis by one little ugly fact... :-D

I do feel speciesism is perfectly fine.

Comment author: Emile 30 July 2013 09:22:20PM 3 points [-]

Same here, I think speciesism is a fine heuristic here and now (it may not be so in the future).

Comment author: Xodarap 30 July 2013 11:36:20PM 1 point [-]

If it's a heuristic, then it's not speciesism.

If it's a "heuristic" that overrides lots of evidence, then it's speciesism. Which is just another way of saying that you aren't performing a Bayesian update correctly.

Comment author: MugaSofer 29 July 2013 03:46:50PM 1 point [-]

Maybe I was already mindkilled (vegetarian speaking), but it seems like a precisely appropriate term to use, given the content of this post.

What term would you prefer?

[Bonus points: if racism and speciesism were well-known errors of the past, would sexist!you object to the term "sexism" on the same grounds?]

Comment author: Qiaochu_Yuan 29 July 2013 06:52:51PM *  1 point [-]

Humanism, maybe. Yes.

Comment author: Qiaochu_Yuan 29 July 2013 07:08:26PM *  7 points [-]

Also, standard argument against a short, reasonable-looking list of ethical criteria: no such list will capture complexity of value. They constitute fake utility functions.

Comment author: Lukas_Gloor 30 July 2013 03:02:23AM 0 points [-]

My utility function feels quite real to me, and I prefer simplicity and elegance over complexity. Besides, I think you can have lots of terminal values and still not discriminate against animals (in terms of suffering); those aren't mutually exclusive.

Comment author: Nick_Beckstead 29 July 2013 08:39:54AM *  2 points [-]

While H is an unlikely criterion for direct ethical consideration (it could justify genocide in specific circumstances!), it is an important indirect factor. Most humans have much more empathy for fellow humans than for nonhuman animals. While this is not a criterion for giving humans more ethical consideration per se, it is nevertheless a factor that strongly influences ethical decision-making in real-life.

This objection doesn't work if you rigidify over the beings you feel sympathy toward in the actual world, given your present mental capacities. And that is clearly the best version of this view, and the one that people probably really mean when they say this. On this version of the view, you don't say that if you didn't care about humans, humans wouldn't matter. You do have to say, "If it actually turns out that I don't care about humans, then humans don't matter." Of course, you might want to change the view if things (very unexpectedly!) don't turn out that way.

I don't think this version gives animals no weight, but I think it typically gives animals less weight than humans. (Disclaimer that should be unnecessary: I recognize that there are other objections to H. It is not necessary to respond to what I have said by raising a distinct objection to H.)

Comment author: [deleted] 29 July 2013 02:07:40PM 4 points [-]

If I was told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not like I will be less afraid of the torture or care less about averting it!

People get anaesthesia before undergoing surgery and get drunk before risking social embarrassment all the time.

Comment author: Lukas_Gloor 29 July 2013 02:12:36PM *  5 points [-]

Animals are not walking around anaesthetized, and I don't think the primary reason why alcohol helps with pain is that it makes you dumber (I might be wrong about this).

Comment author: Carinthium 04 August 2013 06:46:05AM *  1 point [-]

Anaesthesia reduces pain, which is the primary reason people take it. Getting drunk reduces inhibitions (which is good if you're trying to do something despite embarrassment), plus you tend not to remember the events afterwards.

EDIT: Just trying to clarify ice9's point here, to be clear.

Comment author: Manfred 28 July 2013 09:34:35PM *  4 points [-]

Some may be tempted to think about the concept of "species" as if it were a fundamental concept, a Platonic form.

The biggest improvement I would like to see in this post is engagement with opposing arguments more realistic than "humans are a platonic form." Currently you just knock down a very weak argument or two and then rush to a conclusion.

EDIT: whoops, I missed the point, which is to only argue against speciesism. My bad. Edited out a misplaced "argument from future potential," which is what Jabberslythe replied to.

However, you really do only knock down weak arguments. What if we simply define categories more robustly than "platonic forms," as philosophers have done just fine since at least Wittgenstein and as is covered on this very blog? Then there's no point in talking about platonic forms.

As for the argument that "one will be human and the next will be not": how do you deal with the unreliability of the sorites paradox as a philosophical test? Or what if we use the more general continuous model of speciesism, thus eliminating sharp lines? You don't just have to avoid deliberately strawmanning, you have to actively steelman :)

Comment author: Lukas_Gloor 28 July 2013 10:22:00PM 3 points [-]

The section you quote from is quite obvious and I could probably have cut it down to a minimum given that this is LW. You make a good point, one could for instance have a utility function that includes a gradual continuum downwards in evolutionary relatedness or relevant capabilities and so on. This would be consistent and not speciesist. But there would be infinite ways of defining how steeply moral relevance declines, or whether this is linear or not. I guess I could argue "if you're going for that amount of arbitrariness anyway, why even bother?" The function would not just depend on outward criteria like the capacity for suffering, but also on personal reasons for our judgments, which is very similar to what I have summarized under H.

Comment author: [deleted] 29 July 2013 02:14:20PM 2 points [-]
Comment author: Morendil 28 July 2013 09:34:24PM 3 points [-]

What properties do human beings possess that makes us think that it is wrong to torture them?

Does it have to be the case that "the properties that X possesses" is the only relevant input? It seems to me that the properties possessed by the would-be torturer or killer are also relevant.

For instance, if I came across a kid torturing a mouse (even a fly) I would be horrified, but I would respond differently to a cat torturing a mouse (or a fly).

Comment author: Lukas_Gloor 28 July 2013 09:49:26PM *  3 points [-]

What if it is done by a baby, or by a kid with mental impairments severe enough that she cannot follow moral/social norms? I see no reason to treat the situation differently in such a case. (Except that one might want to talk to the kid's parents in order to have them consider a psychological check-up for their child.)

Comment author: DanArmak 28 July 2013 10:32:23PM 1 point [-]

I see no reason to treat the situation differently in such a case.

Differently from a normal kid, or differently from a cat? (I share Morendil's moral intuitions regarding his example.)

Comment author: Lukas_Gloor 28 July 2013 10:55:19PM 6 points [-]

From the cat. I would in fact press a magic button that turns all carnivores into vegans. The cat (or the kid) doesn't know what it is doing and cannot be meaningfully blamed, but I still consider this to be a harmful action and I would want to prevent it. Who commits the act makes no difference to me (or only for indirect reasons).

Comment author: SaidAchmiz 28 July 2013 08:46:13PM *  3 points [-]

I've read the first part of the post ("What is Speciesism?"), and have a question.

Does your argument have any answer to applying modus tollens to the argument from marginal cases?

In other words, if I say: "Actually, I think it's ok to kill/torture human newborns/infants; I don't consider them to be morally relevant[1]" (likewise severely mentally disabled adults, likewise (some? most?) other marginal cases) — do you still expect your argument to sway me in any way? Or no?

[1] Note that I can still be in favor of laws that prohibit infanticide, for game-theoretic reasons. (For instance, because birth makes a good bright line, as does the species divide. The details of effective policy and optimal criminal justice laws are an interesting conversation to have but not of any great relevance to the moral debate.)

Edit: Having now read the rest of your post, I see that you... sort-of address this point. But to be honest, I don't think you take the opposing position very seriously; I get the sense that you've constructed arguments that you think someone on the opposite side would make, if they held exactly your views in everything except, inexplicably, this one area, and these arguments you then knock down. In short, while I am very much in favor of having this discussion and think that this post is a good idea... I don't think your argument passes the ideological Turing test. I would have preferred for you to, at least, directly address the challenges in this post.

Comment author: [deleted] 29 July 2013 01:59:01PM 4 points [-]

bright line

Huh, a mainstream term for what LWers call a Schelling fence!

Comment author: Lukas_Gloor 28 July 2013 09:42:17PM *  8 points [-]

I don't think your argument passes the ideological Turing test. I would have preferred for you to, at least, directly address the challenges in this post.

The post you link to makes five points.

1) and 2) don't concern the arguments I'm making because I left out empirical issues on purpose.

3) is also an empirical issue that can be applied to some humans as well.

4) is the most interesting one.

Something About Sapience Is What Makes Suffering Bad

I sort of addressed this here. I must say I'm not very familiar with this position so I might be bad at steelmanning it, but so far I simply don't see why intelligence has anything to do with the badness of suffering.

As for 5), this is certainly a valid thing to point out when people are estimating whether a given being is sentient or not. Regarding the normative part of this argument: If there were cute robots that I have empathy for but was sure they aren't sentient, I genuinely wouldn't argue about giving them moral consideration.

Comment author: Lukas_Gloor 28 July 2013 08:59:19PM *  4 points [-]

No, this is indeed a common feature of coherentist reasoning, you can make it go both ways. I cannot logically show that you are making a mistake here. I may however appeal to shared intuitions or bring further arguments that could encourage you to reflect on your views.

And note that I was silent on the topic of killing, the point I made later in the article was only focused on caring about suffering. And there I think I can make a strong case that suffering is bad independently of where it happens.

Comment author: Xodarap 28 July 2013 08:54:04PM 2 points [-]

I think it's ok to kill human newborns/infants

I think the relevant response would be torturing human infants, and other marginal cases.

Comment author: Larks 29 July 2013 12:59:35PM 1 point [-]

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past.

In the past, the arguments against sexism and racism were things like "they're human too", "they can write poetry too", "God made all men equal" and "look how good they are at being governesses". None of these apply to animals; they're not human, they don't write poetry, God made them to serve us, and they're not very good governesses. Indeed, you seem to think all these are irrelevant criteria.

Speaking as a 21st century person in a liberal, western country, I believe sexism and racism are wrong basically because other people told me they were, who believed that because ... who believed that because they were convinced by argumentum ad governess. But now I've just discovered that argumentum ad governess is invalid. Should I not withdraw my belief that sexism and racism are wrong, which apparently I have in some sense been fooled into, and adopt the traditional, time-honoured view that they are not?

Comment author: bokov 12 August 2013 11:10:45PM *  1 point [-]

I think this is runaway philosophizing where our desire to believe something coherent trumps what types of beliefs we have been selected for, and the types of beliefs that will continue to keep us alive.

Why should there be a normative ethics at all? What part of rationality requires normative ethics?

I, like you and everyone else, have a monkey-sphere. I only care about the monkeys in my tribe that are closest to me, and I might as well admit it because it's there. So, nevermind cows and pigs, if push came to shove I'll protect my friends and family in preference to strangers. However, it protects me and my monkey-sphere if we can all agree to keep expropriation and force to a bare minimum and within strictly prescribed guidelines.

So I recognize the rights of entities capable of retaliating and at the same time capable of being bound by an agreement not to. Them and their monkey spheres.

In short, the reason I'd rather have dinner with you than of you is some combination of me liking you and my pre-commitment to peaceful and civilized coexistence. It's not exactly something I feel like a nice person for admitting, but I don't see why that should be enough to make it a tough issue.

Comment author: [deleted] 13 August 2013 12:38:38AM 2 points [-]

Nonhuman animals are integrated with human "monkey spheres" - e.g. people live with their pets, bond with them and give them names.

A second mistake is that you decry normative ethics, only to implicitly establish a norm in the next paragraph as if it were a fact:

I, like you and everyone else, have a monkey-sphere. I only care about the monkeys in my tribe that are closest to me, and I might as well admit it because it's there. So, nevermind cows and pigs...

Obviously, there are people whose preferences include the welfare of cows and pigs, hence this discussion and the well-funded existence of PETA etc. By prescribing a monkey-sphere that "everyone" has and that doesn't include nonhuman animals, you are effectively telling us what we *should* care about, not what we actually care about.

Even if you don't care about animal welfare, the fact that others do has an influence on your "monkey-sphere", even if it's weak.

Btw, aren't humans apes rather than monkeys?

Comment author: AndHisHorse 13 August 2013 12:45:01AM 1 point [-]

The term "monkeysphere", which is a nickname for Dunbar's Number, originates from this Cracked.com article. The term relates not only to the studies done on monkeys (and apes), but also the idea of there existing a limit on the number of named, cutely dressed monkeys about which a hypothetical person could really care.

Comment author: bokov 13 August 2013 08:56:53PM 1 point [-]

Yes, precisely. Thanks for finding the link.

Although I think of mine as a density function rather than a fixed number. Everyone has a little bit of my monkey-sphere associated with them. hug

Comment author: bokov 13 August 2013 08:54:11PM 0 points [-]

Nonhuman animals are integrated with human "monkey spheres" - e.g. people live with their pets, bond with them and give them names.

Oh yeah, absolutely. I trust my friend's judgment about how much members of her monkeysphere are worth to her, and utility to my friend is weighed against utility to others in my monkeysphere in proportion to how close they are to me.

My monkeysphere has long tails extending by default to all members of my species whose interests are not at odds with my own or those closer to me in the monkeysphere. Since I would be willing to use force against a human to defend myself or others at the core of my monkeysphere, it seems that I should be even more willing to use force against such a human and save the lives of several cattle in the process.

Obviously, there are people whose preferences include the welfare of cows and pigs, hence this discussion and the well-funded existence of PETA etc.

Cults are well-funded too. I don't dispute that people care about both them and animal rights. What I dispute is whether supporting either of them offers enough benefits to the supporter that I would consider it a rational choice to make.

Comment author: Lukas_Gloor 13 August 2013 01:17:00AM 3 points [-]

I think this is runaway philosophizing where our desire to believe something coherent trumps what types of beliefs we have been selected for, and the types of beliefs that will continue to keep us alive.

Why should I believe what humans have been selected for? Why would I want to keep "us" alive?

I think those two questions are at least as question-begging as the reasons for my view, if not more so.

What I know for sure is that I dislike my own suffering, not because I'm sapient and have it happening to me, but because it is suffering. And I want to do something in life that is about more than just me. Ultimately, this might not be a "more true" reason than "what I have been selected for", but it does appeal to me more than anything else.

Why should there be a normative ethics at all? What part of rationality requires normative ethics?

All rationality requires is a goal. You may not share the same goals I have. I have noticed, however, that some people haven't thought through all the implications of their stated goals. Especially on LW, people are very quick to declare something to be of terminal value to them, which serves as a self-fulfilling prophecy unfortunately.

I, like you and everyone else, have a monkey-sphere. I only care about the monkeys in my tribe that are closest to me, and I might as well admit it because it's there.

I discovered that intuitions are easy to change. People definitely have stronger emotional reactions to things happening to those that are close, but do they really, on an abstract level, care less about those that are distant? Do they want to care less about those that are distant, or would they take a pill that turned them into universal altruists?

However, it protects me and my monkey-sphere if we can all agree to keep expropriation and force to a bare minimum and within strictly prescribed guidelines.

And how do you do that?

So I recognize the rights of entities capable of retaliating and at the same time capable of being bound by an agreement not to. Them and their monkey spheres.

If a situation arises where you can benefit your self-interest by defecting, the rational thing to do is to defect. Don't tell yourself that you're being a decent person only out of pure self-interest; you'd be deceiving yourself. Yes, if everyone followed some moral code written for societal interaction among moral agents, then everyone would be doing well (but not perfectly well). However, given that you cannot expect others to follow through, your decision to not "break the rules" is an altruistic decision for (at least) all the cases where you are unlikely enough to get caught.

You may also ask yourself whether you would press a button that inflicts suffering on a child (or a cow) far away, give you ten dollars, and makes you forget about all that happened. Would you want to self-modify to be the person who easily pushes the button? If not, just how much altruism is it going to be, and why not go for the (non-arbitrary) whole cake?

Comment author: bokov 13 August 2013 08:39:39PM 0 points [-]

You may also ask yourself whether you would press a button that inflicts suffering on a child (or a cow) far away, give you ten dollars, and makes you forget about all that happened. Would you want to self-modify to be the person who easily pushes the button? If not, just how much altruism is it going to be, and why not go for the (non-arbitrary) whole cake?

I don't know, and I feel it's important that I admit that. My code of conduct is incomplete. It's better that it be clearly incomplete than have the illusion of completeness created by me deciding what a hypothetical me in a hypothetical situation ought to want.

It does seem to me the payoff for pushing the button should be equal to how much it would take to bribe you not to make all your purchasing decisions contingent on a thorough investigation of the human/animal rights practices of every company you buy from and all their upstream suppliers. Those who don't currently do this (me included) are apparently already being compensated sufficiently, however much that is.

Comment author: bokov 13 August 2013 08:24:23PM 0 points [-]

Ultimately, this might not be a "more true" reason than "what I have been selected for", but it does appeal to me more than anything else.

Experience and observation of others has taught me that when one tries to derive a normative code of behavior from the top-down, they often end up with something that is in subtle ways incompatible with selfish drives. They will therefore be tempted to cheat on their high-minded morals, and react to this cognitive dissonance either by coming up with reasons why it's not really cheating or working ever harder to suppress their temptations.

I've been down the egalitarian altruist route, it came crashing down (several times) until I finally learned to admit that I'm a bastard. Now instead of agonizing whether my right to FOO outweighs Bob's right to BAR, I have the simpler problem of optimizing my long-term FOO and trusting Bob to optimize his own BAR.

I still cheat, but I don't waste time on moral posturing. I try to treat it as a sign that perhaps I still don't fully understand my own utility function. Imagine how far off the mark I'd be if I was simultaneously trying to optimize Bob's!

Comment author: [deleted] 29 July 2013 02:03:43PM 1 point [-]

If there were no intrinsic reasons for giving moral consideration to babies, then a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it.

If there were no intrinsic reasons for a feather to fall slower than a rock, then in a vacuum a feather would fall just as fast as a rock as long as there's no air. But you don't neglect the viscosity of air when designing a parachute.

Comment author: timtyler 28 July 2013 10:31:27PM *  1 point [-]

Why even take species-groups instead of groups defined by skin color, weight or height? Why single out one property and not others?

Typically human xenophobia doesn't single out one attribute. The similar are treated preferentially, the different are exiled, shunned, excluded or slaughtered. Nature builds organisms like that: to favour kin and creatures similar, and to give out-group members a very wide berth. So: it's no surprise to find that humans are often racist and speciesist.

Comment author: Carinthium 04 August 2013 06:53:16AM 1 point [-]

For selfish reasons, if I had a say in policy I would want to influence the world greatly against this. Whether true or not, I could easily get a disease in the future or go senile (actually quite likely) to such an extent that my moral worth in this system is reduced greatly. Since I still want to be looked after when that happens, I would never support this.

This doesn't refute any of the arguments, but for those who have some percentage chance of losing a lot of brain capacity in the future without outright dying (i.e probably most of us) it may be a reason to argue against this idea anyway.

Comment author: pianoforte611 29 July 2013 04:08:54AM *  1 point [-]

Hmm, maybe I didn't read the argument carefully enough, but it seems that the argument from marginal cases proves too much. It proves that non-US citizens should be allowed to serve in the US army, that some people without medical licenses should be allowed to practice as surgeons, and many more things.

Comment author: wedrifid 29 July 2013 07:22:10AM 2 points [-]

but it seems that the argument from marginal cases proves too much . It proves that non-US citizens should be allowed to serve in the US army,

The argument from marginal cases may well prove too much, but this strikes me as a failed counter-example. Using non-citizens as part of a military force is a reasonably standard practice. Depending on the circumstances it can be the smart thing to do. (Conscripting citizens as cannon fodder tends to promote civil unrest.)

Comment author: pianoforte611 29 July 2013 11:53:53AM *  2 points [-]

Sure, I shouldn't have used the US military as an example - I retract it. Trying again: the argument from marginal cases proves that some 12 year olds should be allowed to vote.

Comment author: wedrifid 29 July 2013 03:54:08PM 4 points [-]

Sure, I shouldn't have used the US military as an example - I retract it. Trying again: the argument from marginal cases proves that some 12 year olds should be allowed to vote.

This slippery slope really isn't sounding all that bad...

Comment author: itaibn0 31 July 2013 07:30:32PM 3 points [-]
Comment author: Eugine_Nier 04 August 2013 06:31:33AM *  -2 points [-]

Not one of Scott's better ideas.

Comment author: MugaSofer 04 August 2013 03:55:12PM *  1 point [-]

You mean his other ideas are even better!? My God... (But seriously, folks ... what exactly are your counterarguments?)

Comment author: Eugine_Nier 06 August 2013 01:28:06AM 1 point [-]

I brought them up elsewhere in the thread.

Comment author: MugaSofer 29 July 2013 03:36:41PM *  3 points [-]

... what makes you think that's wrong? I remember being twelve, seems to me basing that sort of thing on numerical age is fairly daft, albeit relatively simple.

Comment author: Lukas_Gloor 29 July 2013 03:43:10PM 2 points [-]

Indeed, I wouldn't object to this directly. One could however argue that it is bad for indirect reasons. It would acquire huge administrative efforts to test teens for their competence at voting, and the money and resources might be better spent on education or the US army (jk). In order to save administrative costs, using a Schelling point at the age of, say, 18, makes perfect sense, even though there certainly is no magical change taking place in people's brains the night of their 18th birthday.

Comment author: DanArmak 30 July 2013 09:04:22PM *  4 points [-]

It would acquire huge administrative efforts to test teens for their competence at voting

(You meant require, not acquire)

It would also require huge administrative efforts to test 18-year-olds for competence. So we simply don't, and let them vote anyway. It's not clear to me that letting all 12-year-olds vote is so much terribly worse. They mostly differ from adults on age-relevant issues: they would probably vote school children more rights.

It may or may not be somewhat worse than the status quo, but (for comparison) we don't take away the vote from all convicted criminals, or all demented people, or all people with IQ below 60... Not giving teenagers civil rights is just a historical fact, like sexism and racism. It doesn't have a moral rationale, only rationalizations.

Comment author: Jiro 31 July 2013 12:09:37AM 0 points [-]

12 year olds are also highly influenced by their parents. It's easy for a parent to threaten a kid to make him vote one way, or bribe him, or just force him to stay in the house on election day if he ever lets his political views slip out. (In theory, a kid could lie in the first two scenarios, since voting is done in secret, but I would bet that a statistically significant portion of kids will be unable to lie well enough to pull it off.)

Also, 12 year olds are less mature than 18 year olds. It may be that the level of immaturity in voters you'll get from adding people ages 12-17 is just too large to be acceptable. (Exercise for the reader: why is 'well, some 18 year olds are immature anyway' not a good response?)

And taking away the vote from demented people and people with low IQ has the problem that the tests may not be perfect. Imagine a test that is slightly biased and unfairly tests black people at 5 points lower IQ. So white people get to vote down to IQ 60 but black people get to vote down to IQ 65. Even though each individual black person of IQ 65 is still pretty stupid, allowing a greater proportion of stupid people from one race than another to vote is bad.

Comment author: [deleted] 31 July 2013 09:58:33AM 4 points [-]

12 year olds are also highly influenced by their parents.

And 75-year-olds are highly influenced by their children. (And 22-year-olds are highly influenced by their friends, for that matter.)

(I'm not saying we should allow 12-year-olds to vote, but just that I don't find that particular argument convincing.)

Comment author: OnTheOtherHandle 31 July 2013 07:40:46PM 2 points [-]

I don't find arguments against letting children vote very convincing either, except the argument that 18 is a defensible Schelling point and it would become way too vulnerable to abuse if we changed it to a more complicated criterion like "anyone who can give informed consent, as measured by X." After all, if we accept the argument that 12-17 year olds should vote (and I'm not saying it's a bad argument), then the simplest and most effective way to enforce that is to draw another arbitrary line based on age, at some lower age. Anything more complex would again be politicized and gamed.

But I think you're misrepresenting the "influenced by parents" argument. 22-year-olds are influenced by their friends, yes, but they influence their friends to roughly the same degree. Their friends do not have total power over their life, from basic survival to sources of information. A physical/emotional threat from a friend is a lot less credible than a threat from your parents, especially considering most people have more than one circle of friends. The same goes for the 75-year-old - they may be frail and physically dependent on their children, but society doesn't condone a live-in grandparent being bossed around and controlled the way a live-in child is, so that is not as big a concern.

Comment author: [deleted] 01 August 2013 12:43:19PM 2 points [-]

The same goes for the 75-year-old - they may be frail and physically dependent on their children, but society doesn't condone a live-in grandparent being bossed around and controlled the way a live-in child is

Indeed, we outsource the job to nursing homes instead.

Comment author: wedrifid 31 July 2013 03:15:15AM 6 points [-]

Also, 12 year olds are less mature than 18 year olds. It may be that the level of immaturity in voters you'll get from adding people ages 12-17 is just too large to be acceptable.

"Maturity" isn't obviously a desirable thing. What people tend to describe as 'maturity' seems to be a developed ability to signal conformity and if anything is negative causal influence on the application of reasoned judgement. People learn that it is 'mature' to not ask (or even think to ask) questions about why the cherished beliefs are obviously self-contradicting nonsense, for example.

I do not expect a country that allows 12-17 year olds to vote to have worse outcomes than a country that does not. Particularly given that it would almost certainly result in more voting-relevant education being given to children and so slightly less ignorance even among adults.

Comment author: OnTheOtherHandle 31 July 2013 07:47:58PM 3 points [-]

"Maturity" is pretty much a stand-in for "desirable characteristics that adults usually have and children usually don't," so it's almost by definition an argument in favor of adults. But to be fair, characteristics like the willingness to sit through/read boring informational pieces in order to be a more educated voter, the ability to accurately detect deception and false promises, and the ability to use past evidence to determine what is likely to actually happen (as opposed to what people say will happen) are useful traits and are much more common in 18-year-olds than 12-year-olds.

Comment author: Nornagest 31 July 2013 03:54:42AM *  3 points [-]

I might be a little more generous than that. The term casts a pretty broad net, but it also includes some factors I'd consider instrumentally advantageous, like self-control and emotional resilience.

I'm not sure how relevant those are in this context, though.

Comment author: wedrifid 31 July 2013 06:15:11AM 3 points [-]

The term casts a pretty broad net, but it also includes some factors I'd consider instrumentally advantageous, like self-control and emotional resilience.

I certainly recommend maturity. I also note that the aforementioned signalling skill is also significantly instrumentally advantageous. I just don't expect the immaturity of younger voters to result in significantly worse voting outcomes.

Comment author: [deleted] 31 July 2013 09:55:22AM 1 point [-]

Particularly given that it would almost certainly result in more voting-relevant education being given to children

Interesting argument, I had never thought of that. I'm still sceptical about what the quality of such voting-relevant education would be.

and so slightly less ignorance even among adults.

On timescales much longer than politicians usually think about.

Comment author: Eugine_Nier 10 August 2013 04:30:10AM 0 points [-]

I do not expect a country that allows 12-17 year olds to vote to have worse outcomes than a country that does not. Particularly given that it would almost certainly result in more voting-relevant education being given to children and so slightly less ignorance even among adults.

In my experience "voting-relevant education" tends to mean indoctrination, so no.

Comment author: MugaSofer 31 July 2013 03:08:44PM 2 points [-]

And taking away the vote from demented people and people with low IQ has the problem that the tests may not be perfect. Imagine a test that is slightly biased and unfairly tests black people at 5 points lower IQ. So white people get to vote down to IQ 60 but black people get to vote down to IQ 65. Even though each individual black person of IQ 65 is still pretty stupid, allowing a greater proportion of stupid people from one race than another to vote is bad.

You know, I can think of a worse test than that ... eh, I'm not even going to bother working out a complex "age test" metaphor, I'm just gonna say it: age is a worse criterion than that test.

Comment author: Jiro 01 August 2013 12:41:47AM *  0 points [-]

You might be able to argue that since people of different races don't live to the exact same age, an age test is still biased, but I'd like to see some calculations to show just how bad it is. Also, even though an age test may be racially biased, there aren't really better and worse age tests--it's easy to get (either by negligence or by malice) an IQ test which is biased by multiple times the amount of a similar but better IQ test, but pretty much impossible to get that for age.

There's also the historical record to consider. It's particularly bad for IQ tests.

Comment author: DanArmak 31 July 2013 10:22:27AM 2 points [-]

I'd like to add this to the other posters' responses:

Also, 12 year olds are less mature than 18 year olds.

Please taboo "immaturity" for me. After all, if taken literally it just means "not the same as mature, adult people". But the whole point of letting a minority vote is that they will not vote the same way as the majority.

And taking away the vote from demented people and people with low IQ has the problem that the tests may not be perfect.

How is this different from saying that no test of 12-year-olds for "maturity" is perfect and therefore we do not give the vote to any 12 year olds at all?

Comment author: Jiro 31 July 2013 10:48:20PM 2 points [-]

How is this different from saying that no test of 12-year-olds for "maturity" is perfect and therefore we do not give the vote to any 12 year olds at all?

It isn't all that different, but all that that proves is that we shouldn't decide who votes based on maturity tests any more than we should on IQ tests.

Comment author: Eugine_Nier 04 August 2013 06:46:01AM 0 points [-]

The problem with letting 12 year olds vote is not that they'd be overly influenced by their parents, it's that they're worse at seeing through the various dark arts techniques people routinely employ, and this would have the result of making politics even more of a dark arts contest than it already is.

Comment author: MugaSofer 04 August 2013 02:43:17PM 1 point [-]

So we should test for resistance to Dark Arts Techniques, rather than base it on age? Excellent idea!

Comment author: Eugine_Nier 06 August 2013 01:25:36AM 0 points [-]

And how exactly do you propose doing testing in a way that doesn't run into the problems with Goodhart's law I mentioned here?

Comment author: MugaSofer 04 August 2013 04:30:45PM 1 point [-]

12 year olds are also highly influenced by their parents. It's easy for a parent to threaten a kid to make him vote one way, or bribe him, or just force him to stay in the house on election day if he ever lets his political views slip out. (In theory, a kid could lie in the first two scenarios, since voting is done in secret, but I would bet that a statistically significant portion of kids will be unable to lie well enough to pull it off.)

Also, 12 year olds are less mature than 18 year olds. It may be that the level of immaturity in voters you'll get from adding people ages 12-17 is just too large to be acceptable. (Exercise for the reader: why is 'well, some 18 year olds are immature anyway' not a good response?)

Don't these two arguments cancel each other out? How can you simultaneously be concerned that children will vote immaturely and vote the same way as their parents?

And taking away the vote from demented people and people with low IQ has the problem that the tests may not be perfect. Imagine a test that is slightly biased and unfairly tests black people at 5 points lower IQ. So white people get to vote down to IQ 60 but black people get to vote down to IQ 65. Even though each individual black person of IQ 65 is still pretty stupid, allowing a greater proportion of stupid people from one race than another to vote is bad.

My favourite response to this is to retain the "everyone gets to vote at 18" aspect regardless of child enfranchisement. At least until you have tests people find acceptable or whatever.

Comment author: Jiro 05 August 2013 03:50:02AM 2 points [-]

How can you simultaneously be concerned that children will vote immaturely and vote the same way as their parents?

I have described two separate failure modes. I see no reason to believe that the two failure modes would cancel each other out.

My favourite response to this is to retain the "everyone gets to vote at 18" aspect regardless of child enfranchisement.

That doesn't work. If everyone above age 18 can vote, black children can vote down to IQ 65, and white children can vote down to IQ 60, the result will still be skewed, although not by as much as if the IQ test was applied to everyone.

Comment author: MugaSofer 06 August 2013 02:13:32PM 3 points [-]

I see no reason to believe that the two failure modes would cancel each other out.

... you don't? Could you explain your reasoning on this?

That doesn't work. If everyone above age 18 can vote, black children can vote down to IQ 65, and white children can vote down to IQ 60, the result will still be skewed, although not by as much as if the IQ test was applied to everyone.

It doesn't work perfectly. That's far from the same thing as not working at all.

Comment author: linkhyrule5 31 July 2013 01:06:39AM 1 point [-]

"Well, some 18 year olds are immature anyway" is not a good response, but "show me your data that places 12-17 yo people significantly more immature then the rest of humanity, and taboo "immaturity" while you're at it" is.

The first two, sadly, do make more sense, but then emancipation should become a qualification to vote.

Comment author: OnTheOtherHandle 31 July 2013 07:53:01PM *  4 points [-]

One thing that hasn't been mentioned yet is that pure experience - just raw data in your long-term memory - is a plausible criterion for a good voter. It's not that intelligence and rationality are unimportant, since rational, intelligent people may well draw more accurate conclusions from a smaller amount of data.

What does matter is that everyone, no matter how intelligent or unintelligent, would be a better voter with a few elections, a few media scandals, a few internet flame wars, and a few nationally significant policy debates stored in their long-term memory. Even HJPEV needs something to go on. The argument is not just that 18-year-olds as a group are better voters than 12-year-olds as a group, but that any given 12-year-old would be a better voter in 6 years, even if they're already pretty good.

Comment author: Eugine_Nier 07 August 2013 12:44:22AM *  0 points [-]

Even though each individual black person of IQ 65 is still pretty stupid, allowing a greater proportion of stupid people from one race than another to vote is bad.

This is not a priori obvious. In any case, why are imperfections in the test that happen to be correlated with race worse than imperfections correlated with occupation, social class, or any other trait that could act as a proxy for political beliefs?

Comment author: Eugine_Nier 04 August 2013 06:33:03AM 0 points [-]

Not to mention the temptation to sneak political biases into the competency tests.

Comment author: [deleted] 31 July 2013 10:14:34AM 1 point [-]

Your memories of being twelve must be very different from mine.

Comment author: MugaSofer 31 July 2013 01:43:26PM 2 points [-]

Quite possibly. But then, that's rather the point, isn't it?

Comment author: SaidAchmiz 29 July 2013 01:17:26PM 1 point [-]

This seems like a reasonable thing to prove!

Comment author: Lukas_Gloor 29 July 2013 04:58:51AM *  3 points [-]

This would be mixing up the normative level with the empirical level. The argument from marginal cases seeks to establish that we have reasons against treating beings of different species differently, all else being equal. Under consequentialism, the best course of action (including motives, laws, societal norms to promote, and so on) would already be specified. It would be misleading to apply the same basic moral reasoning again on the empirical level, where we have institutions like the US army or the licensing of surgeons. Institutions like the US army are (for most people anyway, and outside of political philosophy) not terminal values. Whether it increases overall utility to enforce "non-discrimination" radically in all domains is an empirical question, determined by the higher-order goal of achieving as much utility as possible.

And whenever this is not the case (which it may well be, since there is no reason to assume that the empirical level perfectly mirrors the normative one), then "all else" is not equal. Because it might not be overall beneficial for society, or in terms of your terminal values, it could be a bad idea to allow someone otherwise well-qualified but without a medical license to practice as a surgeon. There might be negative side effects of such a practice.

A practical example of this is animal testing. If enough people were consequentialists and unbiased, we could experiment on humans and thereby accelerate scientific progress. However, if you try to do this in the real world, there is the danger that it will go wrong because people lose track of altruistic goals and replace them with other things (although this argument applies almost as much to animal testing), and there is a high likelihood of starting a civil war or worse if someone actually started experimenting on humans (this argument doesn't apply to animals). So even though experimenting on animals is intrinsically on par with experimenting on humans of similar cognitive capacities, only the former even stands a chance of increasing overall utility rather than decreasing it. Here the indirect consequences are decisive.

(Edit: In this sense, my example about men and a right to abortion was misleading, because that would of course be a legal right, where empirical factors come into play. But I was using the example to show that being against some form of discrimination doesn't mean that all differences between beings ought to be ignored.)

Comment author: pianoforte611 29 July 2013 06:43:05PM 1 point [-]

Thank you for the response, I think I get the argument now.

I don't have a good answer for why we allow animal testing but not human testing. If one is fine with animal experimentation then there doesn't seem to be any way to object to engineering human babies that would have human physiology but animal level cognition and conduct tests on them. While the idea does make me uncomfortable I think I would bite that bullet.

Comment author: Eugine_Nier 04 August 2013 06:30:06AM 0 points [-]

If one is fine with animal experimentation then there doesn't seem to be any way to object to engineering human babies that would have human physiology but animal level cognition and conduct tests on them.

The problem is that it makes the Schelling points more awkward.

Comment author: blacktrance 07 January 2014 12:26:10AM 0 points [-]

Here's an argument for something that might be called speciesism, though it isn't strictly speciesism, because moral consideration could be extended to hypothetical non-human beings (though no currently known ones) and not quite to all humans: contractarianism. We have reason to restrict ourselves in our dealings with a being when it fulfills three criteria: it can harm us, it can choose not to harm us, and it can agree not to harm us in exchange for our not harming it. When these criteria are fulfilled, a being has rights and should not be harmed; otherwise, we have no reason to restrict ourselves in our dealings with it.

Comment author: Angela 06 January 2014 10:47:51PM 0 points [-]

If some means could be found to estimate phi (a variable claimed by this paper to be a measure of "intensity of sentience") for various species, it would allow the relative value of the lives of different animals to be estimated and would help solve many moral dilemmas. The intensity of suffering resulting from a particular action would be expected to be proportional to the intensity of sentience. Mammals and birds (the groups which possess a neocortex, the part of the brain where consciousness is believed to occur) can be assumed to experience suffering when doing activities that decrease their evolutionary fitness. (Other factors such as natural beauty also determine pleasure and pain and are as yet poorly understood, but they are likely to be less significant in other species anyway, extrapolating from the differences in aesthetics between humans with high vs low IQ.) For an AI, however, it is much harder to determine what makes it happy or whether it enjoys dying; for that, we would need a simple, generalisable definition of suffering that can apply to all possible AIs, rather than our current concept, which is more of an unrigorous Wittgensteinian family resemblance.