The argument from marginal cases claims that you can't both think that humans matter morally and that animals don't, because no reasonable set of criteria for moral worth cleanly separates all humans from all animals. For example, perhaps someone says that suffering only matters when it happens to something that has some bundle of capabilities like linguistic ability, compassion, and/or abstract reasoning. If livestock don't have these capabilities, however, then some people such as very young children probably don't either.
This is a strong argument, and it avoids the noncentral fallacy. Any set of qualities you value is going to vary across people and animals, and if you lay them out on a continuum there's no place you can draw a line that falls above all animals and below all people. So why do I treat humans as the only entities that count morally?
If you asked me how many chickens I would be willing to kill to save your life, the answer is effectively "all of them". [1] This pins down two points on the continuum that I'm clear on: you and chickens. While I'm uncertain where along that continuum moral weight starts to become significant, I think it's probably somewhere that includes no or almost no animals but nearly all humans. Making this distinction among humans, however, would be incredibly socially destructive, especially given how unsure I am about where the line should go, and so I think we end up with a much better society if we treat all humans as morally equal. This means I end up saying things like "value all humans equally; don't value animals" when that's not my real distinction, just the closest Schelling point.
[1] Chicken extinction would make life worse for many other people, so I wouldn't actually do that, but not because of the effect on the chickens.
I also posted this on my blog.
Question: If a person is concerned about existential risks to species, about lessening the suffering of common animal species, and about human lives, how does that person make tradeoffs among those concerns?
I was thinking about this, and I realized I had no idea how to resolve the following problem:
Omega says "Hi. I can institute any one of these three policies, but only one at a time. Other than locking out the other two, none has a downside for any year it is in place... except that I will mercilessly Dutch book you with policy offers if you're inconsistent in your judgement of the ratios."
Policy A: Save X common non-human animals capable of feeling pain per year from painful, pointless executions that will not affect the overall viability of their species.
Policy B: Save Y rarer species per year from extinction. These can be anything from monkeys to mites to moss (so they may not even have a nervous system).
Policy C: Save Z humans capable of feeling pain per year from painful, pointless executions that will not affect the overall viability of the human species.
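To make Omega's Dutch-book threat concrete, here is a minimal sketch. Nothing in it comes from the original scenario: the exchange rates, the 1% sweetener, and the simple "accept any trade that beats my stated rate" rule are all made-up assumptions, chosen only so the numbers are easy to follow. It shows how an agent whose stated X:Y:Z ratios are cyclically inconsistent can be walked around a trade loop, accepting every individual trade as favourable by its own lights, and still end up strictly worse off.

```python
# Hypothetical sketch of the Dutch book Omega threatens.
# The agent states pairwise exchange rates between the three policies:
#   1 C (human units) for 1 B (species units),
#   1 B for 50 A (animal units),
#   100 A for 1 C.
# Going around the cycle C -> B -> A -> C turns 1 C into only 0.5 C,
# so Omega can offer slightly sweetened versions of each trade, each one
# acceptable by the agent's own stated rates, and skim the difference
# on every lap.

# rate[(x, y)] = units of y the agent says are worth exactly 1 unit of x
rate = {
    ("C", "B"): 1.0,      # 1 human-saving unit  ~ 1 species-saving unit
    ("B", "A"): 50.0,     # 1 species-saving unit ~ 50 animal-saving units
    ("A", "C"): 1 / 100,  # 100 animal-saving units ~ 1 human-saving unit
}

SWEETENER = 1.01  # Omega offers 1% more than the agent's break-even rate


def run_money_pump(laps: int) -> float:
    """Start the agent with 1 unit of C and walk it around the cycle.

    Every individual trade gives the agent strictly more (by its own
    stated rate) than it gives up, yet its holdings shrink each lap
    because the stated rates are cyclically inconsistent.
    """
    holding = 1.0  # units of C
    for _ in range(laps):
        for pair in [("C", "B"), ("B", "A"), ("A", "C")]:
            # Agent accepts: it receives 1% more than its break-even rate.
            holding *= rate[pair] * SWEETENER
    return holding


if __name__ == "__main__":
    for laps in (1, 2, 5):
        print(f"after {laps} lap(s): {run_money_pump(laps):.4f} units of C")
    # after 1 lap(s): 0.5152 units of C -- strictly worse, even though every
    # trade looked favourable by the agent's own stated ratios.
```

If the stated ratios were consistent, the product around any loop would be exactly 1 and no such pump would exist. That's the only constraint Omega is imposing: it doesn't tell you what X:Y:Z should be, just that whatever you pick has to multiply out coherently.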
Every time I attempt to construct some acceptable ratio of X:Y:Z, I seem to think "This doesn't seem correct." Thoughts?
Ultimately, it seems as hard to come up with a ratio of X:Y:Z as it would be to come up with a personal valuation ratio of Apples:Oranges:Education:747s:Laptops.
You are taking morality, which is a set of inborn urges you have when confronted with certain kinds of information, urges that started evolving in you long before your ancestors had anything approaching a modern neocortex, and that certainly evolved without any reference to the moral problem you are looking at in this comment. And you are trying to come up with a fixed-...