To those who say that vegetarianism is too hard, I’d simply challenge you to try it for a few days.
People who say that vegetarianism is too hard generally don't mean that being too hard is the only reason they won't do it.
Agreed. The proper translation of "too hard" is usually "I don't care."
Now suppose that a chicken could have an experience that was phenomenally indistinguishable from that of the child.
"Phenomenally indistinguishable"... to whom?
In other words, what is the mind that's having both of these experiences and then attempting to distinguish between them?
Thomas Nagel famously pointed out that we can't know "what it's like" to be — in his example — a bat; even if we found our mind suddenly transplanted into the body of a bat, all we'd know is what it's like for us to be a bat, not what it's like for the bat to be a bat. If our mind were transformed into the mind of a bat (and placed in a bat's body), we could not analyze our experiences in order to compare them with anything, nor, in that form, would we have any comprehension of what it had been like to be a human.
Phenomenal properties are always, inherently, relative to a point of view — the point of view of the mind experiencing them. So it is entirely unclear to me what it means for two experiences, instantiated in organisms of very different species, to be "phenomenally indistinguishable".
Nagel had no problem with taking objective attributes of experience -- e.g. indicia of suffering -- and comparing them for the purposes of political and moral debate. The equivalence, or even comparability, of subjective experience (whether between different humans or different species) is not necessary for an equivalence of moral depravity.
As someone who agrees with (almost) everything you wrote above, I fear that you haven't seriously addressed what I take to be any of the best arguments against vegetarianism, which are:
Present Triviality. Becoming a vegetarian is at least a minor inconvenience — it restricts your social activities, forces you to devote extra resources to keeping yourself healthy, etc. If you're an Effective Altruist, then your time, money, and mental energy would be much better spent on directly impacting society than on changing your personal behavior. Even minor inconveniences and attention drains will be a net negative. So you should tell everyone else (outside of EA) to be a vegetarian, but not be one yourself.
Future Triviality. Meanwhile, almost all potential suffering and well-being lies in the distant future; that is, even if we have only a small chance of expanding to the stars, the aggregate value for that vast sum of life dwarfs that of the present. So we should invest everything we have into making it as likely as possible that humans and non-humans will thrive in the distant future, e.g., by making Friendly AI that values non-human suffering. Even minor distractions from that goal are a big net loss.
Experiential Suffering Needn't Correlate With Damage-Avoiding or Damage-Signaling Behavior. We have reason to think the two correlate in humans (or at least developed, cognitively normal humans) because we introspectively seem to suffer across a variety of neural and psychological states in our own lives. Since I remain a moral patient while changing dramatically over a lifetime, other humans, who differ from me little more than I differ from myself over time, must also be moral patients. But we lack any such evidence in the case of non-humans, especially non-humans with very different brains. For the same reason we can't be confident that four-month-old fetuses feel pain, we can't be confident that cows or chickens feel pain. Why is the inner experience of suffering causally indispensable for neurally mediated damage-avoiding behavior? If it isn't causally indispensable, then why think it is selected at all in non-sapients? Alternatively, what indispensable mechanism could it be an evolutionarily unsurprising byproduct of?
Something About Sapience Is What Makes Suffering Bad. (Or, alternatively: Something about sapience is what makes true suffering possible.) There are LessWrongers who subscribe to the view that suffering doesn't matter, unless accompanied by some higher cognitive function, like abstract thought, a concept of self, long-term preferences, or narratively structured memories — functions that are much less likely to exist in non-humans than ordinary suffering. So even if we grant that non-humans suffer, why think that it's bad in non-humans? Perhaps the reason is something that falls victim to...
Aren't You Just Anthropomorphizing Non-Humans? People don't avoid kicking their pets because they have sophisticated ethical or psychological theories that demand as much. They avoid kicking their pets because they anthropomorphize their pets, reflexively put themselves in their pets' shoes even though there is little scientific evidence that goldfish and cockatoos have a valenced inner life. (Plus being kind to pets is good signaling, and usually makes the pets more fun to be around.) If we built robots that looked and acted vaguely like humans, we'd be able to make humans empathize with those things too, just as they empathize with fictional characters. But this isn't evidence that the thing empathized with is actually conscious.
I think these arguments can be resisted, but they can't just be dismissed out of hand.
You also don't give what I think is the best argument in favor of vegetarianism, which is that vegetarianism does a better job of accounting for uncertainty in our understanding of normative ethics (does suffering matter?) and our understanding of non-human psychology (do non-humans suffer?).
Here is a thought experiment. Suppose that explorers arrive in a previously unknown area of the Amazon, where a strange tribe exists. The tribe suffers from a rare genetic anomaly, whereby all of its individuals are physically and cognitively stuck at the age of 3.
They laugh and they cry. They love and they hate. But they have no capacity for complex planning or normative sophistication. So they live their lives as young children do -- on a moment-to-moment basis -- and they have no hope of ever developing beyond that.
If the explorers took these gentle creatures and murdered them -- for science, for food, or for fun -- would we say, "Oh, but those children are not so intelligent, so the violence is okay"? Or would we be even more horrified by the violence, precisely because the children had no capacity to fend for themselves?
I would submit that the argument against animal exploitation is even stronger than the argument against violence in this thought experiment, because we could be quite confident that whatever awareness these children had, it was "less than" what a normal human has. We are comparing the same species after all, and presumably whatever the Amazonian children are missing, due to genetic anomaly, is not made up for in higher or richer awareness in other dimensions.
We cannot say that about other species. A dog may not be able to reason. But perhaps she delights in smells in a way that a less sensitive nose could never understand. Perhaps she enjoys food with a sophistication that a lesser palate cannot begin to grasp. Perhaps she feels loneliness with an intensity that a human being could never appreciate.
Richard Dawkins makes the very important point that cleverness, which we certainly have, gives us no reason to think that animal consciousness is any less rich or intense than human consciousness (http://directactioneverywhere.com/theliberationist/2013/7/18/g2givxwjippfa92qt9pgorvvheired). Indeed, since cleverness is, in a sense, an alternative mechanism for evolutionary survival to feelings (a perfect computational machine would need no feelings, as feelings are just a heuristic), there is a plausible case that clever animals should be given LESS consideration.
But all of this is really irrelevant. Because the basis of political equality, as Peter Singer has argued, has nothing to do with the facts of our experience. Someone who is born without the ability to feel pain does not somehow lose her rights because of that difference. Because equality is not a factual description, it is a normative demand -- namely, that every being who crosses the threshold of sentience, every being that could be said to HAVE a will, ought to be given the same respect and freedom that we ask for ourselves, as "willing" creatures.
My other comment was downvoted below the troll level, so I'll ask here. Suppose we found a morphine-like drug which effectively and provably wireheads chickens to be happy with their living conditions, and with no side effects for humans consuming the meat. Would that answer your arguments about suffering?
"Suppose we found a morphine-like drug which effectively and provably wireheads NON-WHITE PEOPLE to be happy with their living conditions, and with no side effects for WHITE PEOPLE consuming their flesh."
Has a different sort of emotional impact, no?
This strikes me as a very impatient assessment. The human infant will turn into a human, and the piglet will turn into a pig, and so down the road A through E will suggest treating them differently.
Similarly, the demented can be given the reverse treatment (though it works differently); they once deserved moral standing, and thus are extended moral standing because the extender can expect that when their time comes, they will be treated by society in about the same way as society treated its elders when they were young. (This mostly falls under B, except the reciprocation is not direct.)
(Looking at the comments, Manfred makes a similar argument more vividly over here.)
My sperm has the potential to become human. When I realized almost all of them were dying because of my continued existence, I decided that I will have to kill myself. It was the only rational thing to do.