Xodarap comments on Why Eat Less Meat? - LessWrong

Post author: peter_hurford 23 July 2013 09:30PM


Comment author: RobbBB 23 July 2013 11:03:59PM 44 points

As someone who agrees with (almost) everything you wrote above, I fear that you haven't seriously addressed any of what I take to be the best arguments against vegetarianism, which are:

  1. Present Triviality. Becoming a vegetarian is at least a minor inconvenience — it restricts your social activities, forces you to devote extra resources to keeping yourself healthy, etc. If you're an Effective Altruist, then your time, money, and mental energy would be much better spent on directly impacting society than on changing your personal behavior. Even minor inconveniences and attention drains will be a net negative. So you should tell everyone else (outside of EA) to be a vegetarian, but not be one yourself.

  2. Future Triviality. Meanwhile, almost all potential suffering and well-being lies in the distant future; that is, even if we have only a small chance of expanding to the stars, the aggregate value for that vast sum of life dwarfs that of the present. So we should invest everything we have into making it as likely as possible that humans and non-humans will thrive in the distant future, e.g., by making Friendly AI that values non-human suffering. Even minor distractions from that goal are a big net loss.

  3. Experiential Suffering Needn't Correlate With Damage-Avoiding or Damage-Signaling Behavior. We have reason to think the two correlate in humans (or at least developed, cognitively normal humans) because we introspectively seem to suffer across a variety of neural and psychological states in our own lives. Since I remain a moral patient while changing dramatically over a lifetime, other humans, who differ from me little more than I differ from myself over time, must also be moral patients. But we lack any such evidence in the case of non-humans, especially non-humans with very different brains. For the same reason we can't be confident that four-month-old fetuses feel pain, we can't be confident that cows or chickens feel pain. Why is the inner experience of suffering causally indispensable for neurally mediated damage-avoiding behavior? If it isn't causally indispensable, then why think it is selected at all in non-sapients? Alternatively, what indispensable mechanism could it be an evolutionarily unsurprising byproduct of?

  4. Something About Sapience Is What Makes Suffering Bad. (Or, alternatively: Something about sapience is what makes true suffering possible.) There are LessWrongers who subscribe to the view that suffering doesn't matter unless it's accompanied by some higher cognitive function, like abstract thought, a concept of self, long-term preferences, or narratively structured memories — functions that are much less likely to exist in non-humans than ordinary suffering is. So even if we grant that non-humans suffer, why think that it's bad in non-humans? Perhaps the reason is something that falls victim to...

  5. Aren't You Just Anthropomorphizing Non-Humans? People don't avoid kicking their pets because they have sophisticated ethical or psychological theories that demand as much. They avoid kicking their pets because they anthropomorphize their pets, reflexively put themselves in their pets' shoes even though there is little scientific evidence that goldfish and cockatoos have a valenced inner life. (Plus being kind to pets is good signaling, and usually makes the pets more fun to be around.) If we built robots that looked and acted vaguely like humans, we'd be able to make humans empathize with those things too, just as they empathize with fictional characters. But this isn't evidence that the thing empathized with is actually conscious.

I think these arguments can be resisted, but they can't just be dismissed out of hand.

You also don't give what I think is the best argument in favor of vegetarianism, which is that vegetarianism does a better job of accounting for uncertainty in our understanding of normative ethics (does suffering matter?) and our understanding of non-human psychology (do non-humans suffer?).

Comment author: Xodarap 24 July 2013 01:36:34PM 2 points

Regarding (4) (and to a certain extent 3 and 5): I assume you agree that a species feels phenomenal pain just in case feeling it proved evolutionarily beneficial. So why would it improve fitness to feel pain only if you have "abstract thought"?

The major reason I have heard for phenomenal pain is learning, and all vertebrates show long-term behavior modification as the result of painful stimuli, as anyone who has taken a pet to the vet can verify. (Notably, many invertebrates do not show long-term modification, suggesting that vertebrate vs. invertebrate may be a non-trivial distinction.)

Richard Dawkins has even suggested that phenomenal pain is inversely related to things like "abstract thought", although I'm not sure I would go that far.

Comment author: RobbBB 24 July 2013 06:10:30PM -1 points

Actually, I'm an eliminativist about phenomenal states. I wouldn't be completely surprised to learn that the illusion of phenomenal states is restricted to humans, but I don't think that this illusion is necessary for one to be a moral patient. Suppose we encountered an alien species whose computational substrate and architecture were so exotic that we couldn't rightly call anything it experienced 'pain'. Nonetheless it might experience something suitably pain-like, in its coarse-grained functional roles, that we would be monsters to start torturing members of this species willy-nilly.

My views about non-human animals are similar. I suspect their psychological states are so exotic that we would never recognize them as pain, joy, sorrow, surprise, etc. (I'd guess this is more true for the positive states than the negative ones?) if we merely glimpsed their inner lives directly. But the similarity is nonetheless sufficient for our taking their alien mental lives seriously, at least in some cases.

So, I suspect that phenomenal pain as we know it is strongly tied to the evolution of abstract thought, complex self-models, and complex models of other minds. But I'm open to non-humans having experiences that aren't technically pain but that are pain-like enough to count for moral purposes.

Comment author: davidpearce 25 July 2013 11:22:14AM 2 points

RobbBB, in what sense can phenomenal agony be an "illusion"? If your pain becomes so bad that abstract thought is impossible, does your agony - or the "illusion of agony" - somehow stop? The same genes, same neurotransmitters, same anatomical pathways and same behavioural responses to noxious stimuli are found in humans and the nonhuman animals in our factory-farms. A reasonable (but unproven) inference is that factory-farmed nonhumans endure misery - or the "illusion of misery" as the eliminativist puts it - as do abused human infants and toddlers.

Comment author: Xodarap 24 July 2013 10:38:23PM 0 points

> But I'm open to non-humans having experiences that aren't technically pain but that are pain-like enough to count for moral purposes.

I guess I just didn't understand how you were using the term "pain" - I agree that other species will feel things differently, but being "pain-like enough to count for moral purposes" seems to be the relevant criterion here.