Xodarap comments on Arguments Against Speciesism - LessWrong

Post author: Lukas_Gloor 28 July 2013 06:24PM

Comment author: SaidAchmiz 28 July 2013 09:07:58PM 1 point

A generic problem with this type of reasoning is some form of the repugnant conclusion. If you don't put a Schelling fence somewhere, you end up giving more moral weight to a large enough number of cockroaches, bacteria, or viruses than to humans.

Indeed. I've alluded to this before as "how many chickens would I kill/torture to save my grandmother?" The answer, of course, is N, where N may be any number.

This means that, if we start with basic (total) utilitarianism, we have to throw out at least one of the following:

  1. Additive aggregation of value.
  2. Valuing my grandmother a finite amount (as opposed to an infinite amount).
  3. Valuing a chicken a nonzero amount.
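
To make the tension explicit, here is a compact formalization of the trilemma (my own notation, not anything stated above): write ε > 0 for the value assigned to one chicken and G < ∞ for the value assigned to my grandmother. Premise 1 makes N chickens worth Nε, and the Archimedean property of the reals does the rest:

```latex
% Hypothetical symbols (mine): \epsilon = value of one chicken,
% G = value of grandmother. Premises 2 and 3 give G < \infty and \epsilon > 0.
\epsilon > 0 ,\quad G < \infty
\quad\Longrightarrow\quad
\exists\, N \in \mathbb{N} :\ N\epsilon > G
\qquad \text{(Archimedean property of } \mathbb{R} \text{)}
```

So if all three premises stand, some finite number of chickens outweighs her; rejecting that trade means rejecting at least one premise.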

Throwing out #2 leads to incorrect results (it is not the case that I value my grandmother more than anything else — sorry, grandma). Throwing out #1 is possible, and I have serious skepticism about that one anyway... but it also leads to problems (don't I think that killing or torturing two people is worse than killing or torturing one person? I sure do!).

Throwing out #3 seems unproblematic.

Comment author: Xodarap 28 July 2013 09:39:26PM 1 point

The problem with throwing out #3 is that you also have to throw out:

(4) How we value a being's moral worth is a function of their abilities (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)

Which is a rather nice proposition.

Edit: As Said points out, this should be:

(4) How we value a being's pain is a function of their ability to feel pain (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)

Comment author: SaidAchmiz 28 July 2013 09:54:21PM 0 points

You don't, actually. For example, the following is a function:

Let a be a variable representing the abilities of a being. Let E(a) be the ethical value of a being with abilities a. The domain is the nonnegative reals; the range is the reals. Let H be some level of abilities that we have chosen to identify as "human-level abilities". We define E(a) thus:

a < H: E(a) = 0.
a ≥ H: E(a) = f(a), where f(x) is some other function of our choice.
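
For concreteness, here is a minimal runnable sketch of that piecewise E (my code, not part of the original comment; H = 1.0 and the identity as f are arbitrary placeholders):

```python
# A minimal sketch of the piecewise E(a) above.
# H = 1.0 and the identity function f are arbitrary placeholders.

def f(a: float) -> float:
    """Stand-in for 'some other function of our choice' (identity here)."""
    return a

def ethical_value(a: float, H: float = 1.0) -> float:
    """E(a): 0 for a < H, f(a) for a >= H."""
    return 0.0 if a < H else f(a)

print(ethical_value(0.3))  # below the human-level threshold -> 0.0
print(ethical_value(1.7))  # at or above the threshold -> 1.7
```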

Comment author: Xodarap 28 July 2013 10:10:00PM 0 points

Fair enough. I've updated my statement:

(4) How we value a being's pain is a function of their ability to feel pain (or other faculties that in some way relate to pain).

Otherwise we could let H be "maleness" and justify sexism, etc.

Comment author: SaidAchmiz 28 July 2013 10:25:41PM 0 points

Uh, would you mind editing your statement back, or adding a note about what it said before? Otherwise I am left looking like a crazy person, spouting non sequiturs ;) Edit: Thanks!

Anyway, your updated statement is no longer vulnerable to my objection, but neither is it particularly "nice" anymore (that is, I don't endorse it, and I don't think most people here who take the "speciesist" position do either).

(By the way, letting H be "maleness" doesn't make a whole lot of sense. It would be very awkward, to say the least, to represent "maleness" as some nonnegative real number; and it assumes that the abilities captured by a are somehow parallel to the gender spectrum; and it would make it so that we value male chickens but not human women; and calling "maleness" a "level of abilities" is pretty weird.)

Comment author: Xodarap 28 July 2013 11:37:29PM 0 points

Haha, sure, updated.

But why don't you think it's "nice" to require abilities to be relevant? If you feel pain more strongly than others do, then I care more when you're in pain than when others are.

Comment author: SaidAchmiz 28 July 2013 11:48:36PM 0 points

I probably[1] do as well...

... provided that you meet my criteria for caring about your pain in the first place — which criteria do not, themselves, have anything directly to do with pain. (See this post).

[1] Well, at first glance. Actually, I'm not so sure; I don't seem to have any clear intuitions about this in the human case — but I definitely do in the sub-human case, and that's what matters.

Comment author: Xodarap 30 July 2013 12:04:45PM 0 points

Well, if you follow that post far enough, you'll see that the author thinks animals feel something that's morally equivalent to pain; s/he just doesn't like calling it "pain".

But assuming you genuinely don't think animals feel something morally equivalent to pain, why? That post gives some high-level ideas, but doesn't list any supporting evidence.

Comment author: SaidAchmiz 30 July 2013 02:59:22PM 0 points

But assuming you genuinely don't think animals feel something morally equivalent to pain, why?

I had a longer response typed out, about what properties make me assign moral worth to entities, before I realized that you were asking me to clarify a position that I never took.

I didn't say anything about animals not feeling pain (what does "morally equivalent to pain" mean?). I said I don't care about animal pain.

... the more I write this response, the more I want to ask you to just reread my comment. I suspect this means that I am misunderstanding you, or in any case that we're talking past each other.

Comment author: Xodarap 30 July 2013 11:44:44PM 0 points

I apologize for the confusion. Let me attempt to summarize your position:

  1. It is possible for subjectively bad things to happen to animals
  2. Despite this fact, it is not possible for objectively bad things to happen to animals

Is that correct? If so, could you explain what "subjective" and "objective" mean here? Usually, "objective" just means something like "the sum of the subjective", in which case #1 trivially contradicts #2, which was the source of my confusion.