peter_hurford comments on Why Eat Less Meat? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
As someone who agrees with (almost) everything you wrote above, I fear that you haven't seriously addressed what I take to be any of the best arguments against vegetarianism, which are:
Present Triviality. Becoming a vegetarian is at least a minor inconvenience — it restricts your social activities, forces you to devote extra resources to keeping yourself healthy, etc. If you're an Effective Altruist, then your time, money, and mental energy would be much better spent on directly impacting society than on changing your personal behavior. Even minor inconveniences and attention drains will be a net negative. So you should tell everyone else (outside of EA) to be a vegetarian, but not be one yourself.
Future Triviality. Meanwhile, almost all potential suffering and well-being lies in the distant future; that is, even if we have only a small chance of expanding to the stars, the aggregate value for that vast sum of life dwarfs that of the present. So we should invest everything we have into making it as likely as possible that humans and non-humans will thrive in the distant future, e.g., by making Friendly AI that values non-human suffering. Even minor distractions from that goal are a big net loss.
Experiential Suffering Needn't Correlate With Damage-Avoiding or Damage-Signaling Behavior. We have reason to think the two correlate in humans (or at least developed, cognitively normal humans) because we introspectively seem to suffer across a variety of neural and psychological states in our own lives. Since I remain a moral patient while changing dramatically over a lifetime, other humans, who differ from me little more than I differ from myself over time, must also be moral patients. But we lack any such evidence in the case of non-humans, especially non-humans with very different brains. For the same reason we can't be confident that four-month-old fetuses feel pain, we can't be confident that cows or chickens feel pain. Why is the inner experience of suffering causally indispensable for neurally mediated damage-avoiding behavior? If it isn't causally indispensable, then why think it is selected at all in non-sapients? Alternatively, what indispensable mechanism could it be an evolutionarily unsurprising byproduct of?
Something About Sapience Is What Makes Suffering Bad. (Or, alternatively: Something about sapience is what makes true suffering possible.) There are LessWrongers who subscribe to the view that suffering doesn't matter, unless accompanied by some higher cognitive function, like abstract thought, a concept of self, long-term preferences, or narratively structured memories — functions that are much less likely to exist in non-humans than ordinary suffering. So even if we grant that non-humans suffer, why think that it's bad in non-humans? Perhaps the reason is something that falls victim to...
Aren't You Just Anthropomorphizing Non-Humans? People don't avoid kicking their pets because they have sophisticated ethical or psychological theories that demand as much. They avoid kicking their pets because they anthropomorphize their pets, reflexively put themselves in their pets' shoes even though there is little scientific evidence that goldfish and cockatoos have a valenced inner life. (Plus being kind to pets is good signaling, and usually makes the pets more fun to be around.) If we built robots that looked and acted vaguely like humans, we'd be able to make humans empathize with those things too, just as they empathize with fictional characters. But this isn't evidence that the thing empathized with is actually conscious.
I think these arguments can be resisted, but they can't just be dismissed out of hand.
You also don't give what I think is the best argument in favor of vegetarianism, which is that vegetarianism does a better job of accounting for uncertainty in our understanding of normative ethics (does suffering matter?) and our understanding of non-human psychology (do non-humans suffer?).
You're right that it might have been good to address these in the core essay.
I disagree that being a vegetarian is an inconvenience. I haven't found my social activities restricted in any non-trivial way and being healthy has been just as easy/hard as when eating meat. It does not drain my attention from other EA activities.
~
I agree with this in principle, but again I don't think vegetarianism detracts from that. Certainly removing factory farming is a small win compared to successful star colonization, but I don't think there's much we can do now to ensure successful colonization, while there is stuff we can do now to work toward eliminating factory farming.
~
It need not, which is what makes consciousness thorny. I don't think there is a tidy resolution to this problem. We'll have to take our best guess, and that involves thinking nonhuman animals suffer. We'd probably even want to err on the safe side, which would increase our consideration toward nonhuman animals. It would also be consistent with an Occam's razor approach.
~
This doesn't feature in my ethical framework, at least. I don't know how it works intuitively for other people. I also don't think there's much I can say about it.
~
It's not. But there are other considerations and lines of evidence, so my worry that we're just anthropomorphizing is present, but rather low.
Existential risk reduction charities?
I'm very unsure about the expected success of existential risk reduction charities.
Wait...what? Why not?
My morality is applicable to agents. The extent to which an object can be modeled as an agent plays a big role (but not the only role) in determining its moral weight. As such, there is a rough hierarchy:
nonliving things and single celled organisms < plants, oysters, etc < arthropods, worms, etc < fish, lizards < dumber animals (chickens, cows) < smarter animals (pigs, dogs, crows) < smartest animals (apes, elephants, cetaceans...)
Practically speaking, from an animal rights perspective, this means I would consider it a moral victory if meat eaters shifted a greater portion of their meat diet downward toward "lower" animals like fish and arthropods. The difference in weight between much more and much less intelligent animals is rather extreme - it would take killing several crickets, shrimp, herring, or salmon to replace a single pig, but I would still count that as a positive because I think a pig's moral weight is magnitudes greater than a salmon's. Convincing a person like me not to harm an object involves behavioral measures (with intelligence being one of several factors) which demonstrate that the object is a certain kind of agent, within the class of agents with positive moral weight.
I'm guessing that we're thinking of different things when we read "sapience is what makes suffering bad (or possible)". Do you think that my version of the thought doesn't feature in your ethical framework? If not, what does determine which objects are morally weighty?
For me, suffering is what makes suffering bad. Or, rather, I care about any entity that is capable of having feelings and experiences. And, for each of these entities, I much prefer them not to suffer. I care about not having them suffer for their sakes, of course, not for the sake of reducing suffering in the abstract. I don't view entities as utility receptacles.
But I don't think there's anything special about sapience, per se. Rather, I think sapience or agentiness is relevant only insofar as more sapient and more agenty entities are more capable of suffering / happiness. Which seems plausible, but isn't certain.
~
This seems plausible to me from a perspective of "these animals likely are less capable of suffering", but I think you're missing two things in your analysis: (1) the degree of suffering required to create the food, which varies between species, and (2) the amount of food provided by each animal.
When you add these two things together, you get a suffering-per-kg approach that has some counterintuitive conclusions, like the bulk of suffering being in chickens or fish, though I think this table is desperately in need of updating with more and better research (something that's been on my to-do list for a while).
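To make the aggregation concrete, here's a minimal sketch of the suffering-per-kg idea. Every number below (lifespans, intensity weights, yields) is a hypothetical placeholder, not a research finding; the point is only the shape of the calculation.

```python
# Minimal sketch of a "suffering per kg of food" calculation.
# ALL numbers are hypothetical placeholders, not research findings.

animals = {
    # name: (days lived in farm conditions, suffering intensity weight, kg of food yielded)
    "beef cow":    (550, 1.0, 220.0),
    "pig":         (180, 1.0, 60.0),
    "chicken":     (42,  1.0, 1.5),
    "farmed fish": (400, 0.5, 0.8),
}

def suffering_per_kg(days, intensity, kg_yield):
    # Total (hypothetical) suffering of one animal, divided by the food it yields.
    return days * intensity / kg_yield

scores = {name: suffering_per_kg(*vals) for name, vals in animals.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.1f} suffering-days per kg")
```

Even with these made-up inputs, the small-bodied animals dominate: the cow and pig come out around 2-3 suffering-days per kg, while the chicken and fish come out one to two orders of magnitude higher - the counterintuitive shape described above.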
Additionally, when there is a body of evidence suggesting that nutrient-equivalent food sources can be produced more energy-efficiently and with no direct suffering to animals (indirect suffering being, for example, the unavoidable death of insects in crop harvesting), I believe it is a rational choice to move toward those methods.
Let's temporarily taboo words relating to inaccessible subjective experience, because the definitions of words like "suffering" haven't been made rigorous enough to talk about this - we could define it in concrete neurological terms or specific computations, or we could define it in abstract terms of agents and preferences, and we'd end up talking past each other due to different definitions.
I want to make sure to define morality such that it's not dependent on the particulars of the algorithm that an agent runs, but by the agent's actions. If we were to meet weird alien beings in the future who operated in completely alien ways, but who act in ways that can be defined as preferences and can engage in trade, reciprocal altruism, etc...then our morality should extend to them.
Similarly, I think our morality shouldn't extend to paperclippers - even if they make a "sad face" and run algorithms similar to human distress when a paperclip is destroyed, it doesn't mean the same thing morally.
So I think morality must necessarily be based on input-output functions, not on what happens in between. (at this point someone usually brings up paralyzed people - briefly, you can quantify the extent of additions/modifications necessary to create a functioning input-output agent from something and use that to extrapolate agency in such cases.)
Wait, didn't I take that into account with...
...or are you referring to a different concept?
I really do think the relationship between moral weight and intelligence is exponential - as in, I consider a human life to be weighted like ~10 chimps, ~100 dogs...(very rough numbers, just to illustrate the exponential nature)...and I'm not sure there are enough insects in the world to morally outweigh one human life (instrumental concerns about the environment and the intrinsic value of diverse ecosystems aside, of course). I'd wager the human hedons and health benefits from eating something very simple, like a shrimp or a large but unintelligent fish, might actually outweigh the cost to the fish and be a net positive (as it is with plants). My certainty in that matter is low, of course.
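As a toy illustration of what an exponential weighting like this implies, here's a sketch with hypothetical weights chosen only to mirror the ~10 chimps / ~100 dogs ratios above; the insect exponent is likewise made up, picked just to show how a steep enough curve keeps even a planet's worth of insects below one human.

```python
# Toy exponential moral-weight scale. The exponents are hypothetical,
# chosen only to mirror the "~10 chimps, ~100 dogs per human" ratios above.

moral_weight = {
    "human":  1.0,
    "chimp":  1e-1,   # ~10 chimps per human
    "dog":    1e-2,   # ~100 dogs per human
    "salmon": 1e-5,
    "insect": 1e-20,  # steep enough that ~1e19 insects still sum below one human
}

def equivalent_count(species):
    # How many individuals of this species carry the weight of one human.
    return moral_weight["human"] / moral_weight[species]

print(equivalent_count("dog"))                 # on the order of 100
world_insects = 1e19                           # rough order-of-magnitude guess
print(world_insects * moral_weight["insect"])  # total insect weight, below 1 human
```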
I agree that people generally and I specifically need to understand "suffering" better. But I don't think substitutes like "runs an algorithm analogous to human distress" or "has thwarted preferences" offer anything better understood or well-defined.
I suppose what I think of as suffering probably involves most of the following: nociception, a central nervous system (with connected nociceptors), endogenous opioids, a behavioral pain response, and a behavioral pain response affected by painkillers.
~
I think this is the clearest case where our moral theories differ. If the paperclipper suffers, I don't see any reason not to care about that experience. Or, rather, I don't fully understand why you lack care for the paperclipper.
Similarly, while I'm all for extending morality to weird aliens, I don't think trade or reciprocal altruism per se are the precise qualities that make things count morally (for me). I assume you mean these qualities as a proxy for "high intelligence", though, rather than as the precise qualities?
~
Yes, you did. My bad for missing it. Sorry.
~
How does your uncertainty weigh in practically in this case? Would you, for example, refrain from eating fish while trying to learn more?
Point of disagreement: I do think that both of those are more well-defined than "suffering".
Additionally, I think this statement means you define suffering as "runs an algorithm analogous to human distress". All of these things are specific to Earth-evolved life forms. None of this applies to the class of agents in general.
(Also, nitpick - going by lay usage, you've outlined pain, not suffering. In my preferred usage, for humans at least pain is explicitly not morally relevant except insofar as it causes suffering.)
Rain-check on this...have some work to finish. Will reply properly later.
I don't think so, but I might be wrong...Is risk aversion in the face of uncertainty actually rational in this scenario? Seems to me that there are certain scenarios where risk aversion makes sense (personal finance, for example) and scenarios where it doesn't (effective altruism, for example) and this decision seems to fall in the latter camp. AFAIK, risk / loss aversion only applies where there are diminishing returns on the value of something.
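The diminishing-returns point can be made concrete with a toy expected-utility calculation (all numbers made up): under a linear valuation, a safe option and a same-mean gamble are equivalent, while a concave utility function makes the safe option strictly better.

```python
# Toy illustration: risk aversion only bites under diminishing returns.
# All numbers are made up.
import math

p = 0.5
safe = 100.0           # guaranteed outcome
gamble = (0.0, 200.0)  # 50/50 gamble with the same mean

ev_safe = safe
ev_gamble = p * gamble[0] + p * gamble[1]
# Linear valuation (aggregate good, the effective-altruism case): equivalent.
print(ev_safe == ev_gamble)

# Concave utility (diminishing returns, the personal-finance case):
u = math.sqrt
eu_safe = u(safe)                                # sqrt(100) = 10
eu_gamble = p * u(gamble[0]) + p * u(gamble[1])  # 0.5 * sqrt(200), about 7.07
print(eu_safe > eu_gamble)
```

With linear value the gamble and the sure thing are interchangeable, so avoiding the gamble buys nothing; with sqrt-shaped utility the sure thing wins - which is the sense in which risk aversion depends on diminishing returns.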
I haven't seen any behavioral evidence of fish doing problem solving, being empathetic towards each other, exhibiting cognitive capacities beyond very basic associative learning & memory, or that sort of thing.
Practically, I eat animals at the level of fish and lower guilt-free. I limit consumption of animals higher than fish to very occasional consumption only - in a similar vein to how I sometimes do things that are bad for the environment, or (when I start earning) plan to sometimes spend money on things that aren't charity, with the recognition that it's mildly immoral selfishness and I should keep it to a minimum. Basically, eating animals seems to be on par with all the other forms of everyday selfishness we all engage in...certainly something to be minimized, but not an abomination.
Where I do consume higher animals, I plan in the future to shift that consumption toward unpopular cuts of meat (organs, bones, etc.) because that means less negative impact through reduced wastage (and it's also cheaper, which may enable upgrades with respect to buying from ethical farms, plus a better nutritional profile). The bulk of the profit from slaughtering seems to come from the popular muscle cuts - if meat eaters were more holistic about eating the entire animal and not just parts of it, I think there would be less total slaughter.
The trade-offs here are not primarily a taste thing for me - I just get really lethargic after eating grains, so I try to limit them. My strain of Indian culture is vegetarian, so I was accustomed to eating less meat and more grain through childhood...but after I reduced my intake of grains I felt more energetic and the period of fogginess I usually got after meals went away. I also have a family history of diabetes and metabolic disorders (which accelerate age-related declines in cognitive function, which I'm terrified of), and what nutrition research I've done indicates that shifting toward a more paleolithic diet (fruits, vegetables, nuts, and meat) is the best way to avoid this. Cutting out both meat and grain makes eating really hard and sounds like a bad idea.
Just for the sake of completeness, I'll wait for you to follow-up on this before continuing our discussion here.
If the paper-clipper even can "suffer" ... I suspect a more useful word to describe the state of the paperclipper is "unclippy". Or maybe not...let's not think about these labels for now. The question is, regardless of the label, what is the underlying morally relevant feature?
I would hazard a guess that many of the supercomputers running our Google searches, calculating best-fit molecular models, etc., have enough processing power to simulate a fish that behaves exactly like a real fish. If one wished, one could model these as agents with preference functions. But it doesn't mean anything to "torture" a Google search algorithm, whereas it does mean something to torture a fish, or to torture a simulation of a fish.
You could model something as simple as a light switch as an agent with a preference function but it would be a waste of time. In the case of an algorithm which finds solutions in a search space it is actually useful to model it as an agent who prefers to maximize some elements of a solution, as this allows you to predict its behavior without knowing details of how it works. But, just like the light switch, just because you are modelling it as an agent doesn't mean you have to respect its preferences.
A "rational agent" explores the search space of possible actions it can take and chooses the actions which maximize its preferences - the "correct solution" is when all preferences are maximized. An agent is fully rational if it makes the best possible choice given the data at hand. There are no fully rational agents, but it's useful to model things which act approximately this way as agents.
Paperclippers, molecular modelers, search engines, seek to maximize a simple set of preferences (number of paperclips, best fit model, best search). They have "preferences", but not morally relevant ones.
A human (or, hopefully one day, a friendly AI) seeks to fulfill an extremely complex set of preferences...as does a fish. They have preferences which carry moral weight.
It's not specific receptors or any particular algorithm that captures what is morally relevant to me about other agents' preferences. If you took a human and replaced its brain with a search algorithm which found the motor output solutions that maximized the original human's preferences, I'd consider this search algorithm to fit the definition of a person (though not necessarily the same person). I'd respect the search algorithm's preferences the same way I respected the preferences of the human it replaced. This new sort of person might instrumentally prefer not having its arms chopped off, or terminally prefer that you not read its diary, but it might not show any signs of pain when you did these things unless showing signs of pain was instrumentally valuable. Violation of this being's preferences may or may not be called "suffering" depending on how you define "suffering"...but either way, I think this being's preferences are just as morally relevant as a human's.
So the question I would turn back to you is...under what conditions could a paper clipper suffer? Do all paper clippers suffer? What does this mean for other sorts of solution-maximizing algorithms, like search engines and molecular modelers?
My case is essentially that it is something about the composition of an agent's preference function which contains the morally relevant component with regards to whether or not we should respect its preferences. The specific nature of the algorithm it uses to carry this preference function out - like whether it involves pain receptors or something - is not morally relevant.
Usually people speak of preferences when there is a possibility of choice -- the agent can meaningfully choose between doing A and doing B.
This is not the case with respect to molecular models, search engines, and light switches.
Just as a data-point about intuition frequency, I found your intuitions about "a search algorithm which found the motor output solutions which maximized the original human's preference" to be very surprising.
Thanks for the well-thought out comment. It helps me think through the issue of suffering a lot more.
~
I think this is a good thought experiment and it does push me more toward preference satisfaction theories of well-being, which I have long been sympathetic to. I still don't know much myself about what I view as suffering. I'd like to read and think more on the issue -- I have bookmarked some of Brian Tomasik's essays to read (he's become more preference-focused recently) as well as an interview with Peter Singer where he explains why he's abandoned preference utilitarianism for something else. So I'm not sure I can answer your question yet.
There are interesting problems with desires, such as formalizing it (what is a desire and what makes a desire stronger or weaker, etc.), population ethics (do we care about creating new beings with preferences, etc.) and others that we would have to deal with as well.
~
So it seems like, to you, an entity's welfare matters when it has preferences, weighted based on the complexity of those preferences, with a certain zero threshold somewhere (so thermostat preferences don't count).
I don't think complexity is the key driver for me, but I can't tell you what is.
~
Likewise, I don't think this is much of a concern for me, and it seems inconsistent with the rest of what you've been saying.
Why are problem solving and empathy important? Surely I could imagine a non-empathetic program without the ability to solve most problems, that still has the kind of robust preferences you've been talking about.
And what level of empathy and problem solving are you looking for? Notably, fish engage in cleaning symbiosis (which seems to be in the lower-tier of the empathy skill tree) and Wikipedia seems to indicate (though perhaps unreliably) that fish have pretty good learning capabilities.
~
That makes sense to me.