Ishaan comments on Why Eat Less Meat? - Less Wrong
Wait...what? Why not?
My morality is applicable to agents. The extent to which an object can be modeled as an agent plays a big role (but not the only role) in determining its moral weight. As such, there is a rough hierarchy:
nonliving things and single celled organisms < plants, oysters, etc < arthropods, worms, etc < fish, lizards < dumber animals (chickens, cows) < smarter animals (pigs, dogs, crows) < smartest animals (apes, elephants, cetaceans...)
Practically speaking, from an animal rights perspective, this means that I would consider it a moral victory if meat eaters shifted a greater portion of their meat diet downwards towards "lower" animals like fish and arthropods. The difference in weight between much more and much less intelligent animals is rather extreme - it would take killing several crickets, shrimp, herring, or salmon to replace a single pig, but I would still count that as a positive because I think that a pig's moral weight is magnitudes greater than a salmon's. Convincing a person like me not to harm an object involves behavioral measures (with intelligence being one of several factors) which demonstrate that the object is a certain kind of agent, one within the class of agents with positive moral weight.
I'm guessing that we're thinking of different things when we read "sapience is what makes suffering bad (or possible)". Do you think that my version of the thought doesn't feature in your ethical framework? If it doesn't, what does determine which objects are morally weighty?
For me, suffering is what makes suffering bad. Or, rather, I care about any entity that is capable of having feelings and experiences. And, for each of these entities, I much prefer them not to suffer. I care about not having them suffer for their sakes, of course, not for the sake of reducing suffering in the abstract. I don't view entities as utility receptacles.
But I don't think there's anything special about sapience, per se. Rather, I only think sapience or agentiness is relevant insofar as more sapient and more agenty entities are more capable of suffering / happiness. Which seems plausible, but isn't certain.
~
This seems plausible to me from a perspective of "these animals likely are less capable of suffering", but I think you're missing two things in your analysis: (1) the degree of suffering required to create the food, which varies between species, and (2) the amount of food provided by each animal.
When you add these two things together, you get a suffering-per-kilogram approach that has some counterintuitive conclusions, like the bulk of the suffering being in chickens or fish, though I think this table is desperately in need of updating with more and better research (something that's been on my to-do list for a while).
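To make the shape of that calculation concrete, here's a rough sketch; the function name and the numbers are placeholders of mine, not figures from the table:

    # Rough sketch of a suffering-per-kg comparison; the numbers below are
    # illustrative placeholders, not actual estimates from the table.

    def suffering_per_kg(days_lived, suffering_per_day, edible_kg):
        """Total suffering endured per kilogram of food the animal yields."""
        return (days_lived * suffering_per_day) / edible_kg

    # A small animal yielding little food can dominate the per-kg total even if
    # we judge its per-day capacity for suffering to be lower:
    chicken = suffering_per_kg(days_lived=42, suffering_per_day=1.0, edible_kg=2)
    cow = suffering_per_kg(days_lived=550, suffering_per_day=2.0, edible_kg=250)
    print(chicken, cow)  # the chicken comes out far higher per kg under these placeholders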
Additionally, when the weight of evidence suggests that nutrient-equivalent food sources can be produced in a more energy-efficient manner and with no direct suffering to animals (indirect suffering being, for example, the unavoidable death of insects in crop harvesting), I believe it is a rational choice to move towards those methods.
Let's temporarily taboo words relating to inaccessible subjective experience, because the definitions of words like "suffering" haven't been made rigorous enough to talk about this - we could define it in concrete neurological terms or specific computations, or we could define it in abstract terms of agents and preferences, and we'd end up talking past each other due to different definitions.
I want to make sure to define morality such that it's not dependent on the particulars of the algorithm that an agent runs, but on the agent's actions. If we were to meet weird alien beings in the future who operated in completely alien ways, but who act in ways that can be described as preferences and who can engage in trade, reciprocal altruism, etc., then our morality should extend to them.
Similarly, I think our morality shouldn't extend to paperclippers - even if they make a "sad face" and run algorithms similar to human distress when a paperclip is destroyed, it doesn't mean the same thing morally.
So I think morality must necessarily be based on input-output functions, not on what happens in between. (at this point someone usually brings up paralyzed people - briefly, you can quantify the extent of additions/modifications necessary to create a functioning input-output agent from something and use that to extrapolate agency in such cases.)
Wait, didn't I take that into account with...
...or are you referring to a different concept?
I really do think the relationship between moral weight and intelligence is exponential - as in, I consider a human life to be weighted like ~10 chimps or ~100 dogs (very rough numbers, just to illustrate the exponential nature), and I'm not sure there are enough insects in the world to morally outweigh one human life (instrumental concerns about the environment and the intrinsic value of diverse ecosystems aside, of course). I'd wager the human hedons and health benefits from eating something very simple, like a shrimp or a large but unintelligent fish, might actually outweigh the cost to the fish and be a net positive (as it is with plants). My certainty in that matter is low, of course.
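To illustrate the exponential shape I have in mind, here is a rough sketch with placeholder numbers (not considered estimates):

    # Illustrative only: an exponential moral-weight scale, where each step down
    # in intelligence divides moral weight by roughly ten (placeholder values).
    moral_weight = {
        "human": 1.0,
        "chimp": 0.1,
        "dog": 0.01,
        "chicken": 1e-4,
        "fish": 1e-6,
    }

    # For the roughly 10**19 insects on Earth not to outweigh one human in
    # aggregate, each insect would need a weight below ~10**-19 on this scale
    # (or fall below some hard zero threshold), which is the sense in which the
    # weighting is extreme rather than linear.
    insect_count = 1e19
    max_insect_weight = 1.0 / insect_count
    print(max_insect_weight)  # 1e-19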
I agree that people generally and I specifically need to understand "suffering" better. But I don't think substitutes like "runs an algorithm analogous to human distress" or "has thwarted preferences" offer anything better understood or well-defined.
I suppose that what I think of as suffering probably involves most of the following: nociception, a central nervous system (with connected nociceptors), endogenous opioids, a behavioral pain response, and a behavioral pain response that is affected by painkillers.
~
I think this is the clearest case where our moral theories differ. If the paperclipper suffers, I don't see any reason not to care about that experience. Or, rather, I don't fully understand why you lack care for the paperclipper.
Similarly, while I'm all for extending morality to weird aliens, I don't think trade or reciprocal altruism per se are the precise qualities that make things count morally (for me). I assume you mean these qualities as a proxy for "high intelligence", though, rather than as the precise qualities themselves?
~
Yes, you did. My bad for missing it. Sorry.
~
How does your uncertainty weigh in practically in this case? Would you, for example, refrain from eating fish while trying to learn more?
Point of disagreement: I do think that both of those are more well-defined than "suffering".
Additionally, I think this statement means you define suffering as "runs an algorithm analogous to human distress". All of these things are specific to Earth-evolved life forms. None of this applies to the class of agents in general.
(Also, nitpick - going by lay usage, you've outlined pain, not suffering. In my preferred usage, for humans at least pain is explicitly not morally relevant except insofar as it causes suffering.)
Rain-check on this...have some work to finish. Will reply properly later.
I don't think so, but I might be wrong...Is risk aversion in the face of uncertainty actually rational in this scenario? Seems to me that there are certain scenarios where risk aversion makes sense (personal finance, for example) and scenarios where it doesn't (effective altruism, for example) and this decision seems to fall in the latter camp. AFAIK, risk / loss aversion only applies where there are diminishing returns on the value of something.
I haven't seen any behavioral evidence of fish doing problem solving, being empathetic towards each other, exhibiting cognitive capacities beyond very basic associative learning & memory, or that sort of thing.
Practically, I eat things at the level of fish and lower guilt-free. I limit animals higher than fish to very occasional consumption - in a similar vein to how I sometimes do things that are bad for the environment, or (when I start earning) plan to sometimes spend money on things that aren't charity, with the recognition that it's mildly immoral selfishness and I should keep it to a minimum. Basically, eating animals seems to be on par with all the other forms of everyday selfishness we all engage in...certainly something to be minimized, but not an abomination.
Where I do consume higher animals, I plan in the future to shift that consumption towards unpopular cuts of meat (organs, bones, etc.), because that means less negative impact through reduced wastage (and it's also cheaper, which may enable upgrades with respect to buying from ethical farms, plus a better nutritional profile). The bulk of the profit from slaughtering seems to come from the popular muscle-meat cuts - if meat eaters were more holistic about eating the entire animal rather than just parts of it, I think there would be less total slaughter.
The trade-offs here are not primarily a taste thing for me - I just get really lethargic after eating grains, so I try to limit them. My strain of Indian culture is vegetarian, so I grew up accustomed to eating less meat and more grain...but after I reduced my intake of grains I felt more energetic, and the period of fogginess that I usually get after meals went away. I also have a family history of diabetes and metabolic disorders (which accelerate age-related declines in cognitive function, which I'm terrified of), and what nutrition research I've done indicates that shifting towards a more paleolithic diet (fruits, vegetables, nuts, and meat) is the best way to avoid this. Cutting out both meat and grain would make eating really hard and sounds like a bad idea.
Just for the sake of completeness, I'll wait for you to follow-up on this before continuing our discussion here.
If the paper-clipper even can "suffer" ... I suspect a more useful word to describe the state of the paperclipper is "unclippy". Or maybe not...let's not think about these labels for now. The question is, regardless of the label, what is the underlying morally relevant feature?
I would hazard a guess that many of the supercomputers running our Google searches, calculating best-fit molecular models, etc., have enough processing power to simulate a fish that behaves exactly like other fish. If one wished, one could model these as agents with preference functions. But it doesn't mean anything to "torture" a Google search algorithm, whereas it does mean something to torture a fish, or to torture a simulation of a fish.
You could model something as simple as a light switch as an agent with a preference function but it would be a waste of time. In the case of an algorithm which finds solutions in a search space it is actually useful to model it as an agent who prefers to maximize some elements of a solution, as this allows you to predict its behavior without knowing details of how it works. But, just like the light switch, just because you are modelling it as an agent doesn't mean you have to respect its preferences.
"rational agent" explores the search space of possible actions it can take, and chooses the actions which maximize its preferences - the "correct solution" is when all preferences are maximized. An agent is fully rational if it made the best-possible choice given the data at hand. There are no rational agents, but it's useful to model things which act approximately in this way as agents.
Paperclippers, molecular modelers, and search engines seek to maximize a simple set of preferences (number of paperclips, best-fit model, best search result). They have "preferences", but not morally relevant ones.
A human (or, hopefully, one day a friendly AI) seeks to fulfill an extremely complex set of preferences...as does a fish. They have preferences which carry moral weight.
It's not specific receptors or any particular algorithm that captures what is morally relevant to me about other agents' preferences. If you took a human and replaced its brain with a search algorithm which found the motor output solutions which maximized the original human's preferences, I'd consider this search algorithm to fit the definition of a person (though not necessarily the same person). I'd respect the search algorithm's preferences the same way I respected the preferences of the human it replaced. This new sort of person might instrumentally prefer not having its arms chopped off, or terminally prefer that you not read its diary, but it might not show any signs of pain when you did these things unless showing signs of pain was instrumentally valuable. Violation of this being's preferences may or may not be called "suffering" depending on how you define "suffering"...but either way, I think this being's preferences are just as morally relevant as a human's.
So the question I would turn back to you is...under what conditions could a paper clipper suffer? Do all paper clippers suffer? What does this mean for other sorts of solution-maximizing algorithms, like search engines and molecular modelers?
My case is essentially that it is something about the composition of an agent's preference function which contains the morally relevant component with regards to whether or not we should respect its preferences. The specific nature of the algorithm it uses to carry this preference function out - like whether it involves pain receptors or something - is not morally relevant.
Usually people speak of preferences when there is a possibility of choice -- the agent can meaningfully choose between doing A and doing B.
This is not the case with respect to molecular models, search engines, and light switches.
At least for search engines, I would say there exists a meaningful level of description at which it can be said that the search engine chooses which results to display in response to a query, approximately maximizing some kind of scoring function.
I don't think it is meaningful in the current context. The search engine is not an autonomous agent and doesn't choose anything any more than, say, the following bit of pseudocode: if (rnd() > 0.5) { print "Ha!" } else { print "Ooops!" }
The distinction you are making between the input-output function of a human as a "choice" vs. the input-output of a machine as "not-a-choice" sounds very reminiscent of the traditional naive / confused model of free will that people commonly have before dissolving the question...but you're a frequent poster here, so perhaps I've misunderstood your meaning. Are you using a specialized definition of the word "choice"?
I have no wish for this to develop into a debate about free will. Let me point out that just because I know the appropriate part of the Sequences does not necessarily mean I agree with it.
As a practical matter, speaking about choices of light switches seems silly. Given this, I don't see why speaking about choices of search engines is not silly. It might be useful conversational shorthand in some contexts, but I don't think it is useful in the context of talking about morality.
Just as a data point about intuition frequency, I found your intuitions about "a search algorithm which found the motor output solutions which maximized the original human's preferences" to be very surprising.
Do you mean that the idea itself is weird and surprising to consider?
Or do you mean that my intuition that this search algorithm fits the definition of a "person" and is imbued with moral weight is surprising and does not match your moral intuition?
Thanks for the well-thought out comment. It helps me think through the issue of suffering a lot more.
~
I think this is a good thought experiment and it does push me more toward preference satisfaction theories of well-being, which I have long been sympathetic to. I still don't know much myself about what I view as suffering. I'd like to read and think more on the issue -- I have bookmarked some of Brian Tomasik's essays to read (he's become more preference-focused recently) as well as an interview with Peter Singer where he explains why he's abandoned preference utilitarianism for something else. So I'm not sure I can answer your question yet.
There are interesting problems with desires, such as formalizing them (what is a desire, and what makes a desire stronger or weaker, etc.), population ethics (do we care about creating new beings with preferences, etc.), and others that we would have to deal with as well.
~
So it seems like, to you, an entity's welfare matters when it has preferences, weighted based on the complexity of those preferences, with a certain zero threshold somewhere (so thermostat preferences don't count).
I don't think complexity is the key driver for me, but I can't tell you what is.
~
Likewise, I don't think this is much of a concern for me, and it seems inconsistent with the rest of what you've been saying.
Why are problem solving and empathy important? Surely I could imagine a non-empathetic program without the ability to solve most problems, that still has the kind of robust preferences you've been talking about.
And what level of empathy and problem solving are you looking for? Notably, fish engage in cleaning symbiosis (which seems to be in the lower-tier of the empathy skill tree) and Wikipedia seems to indicate (though perhaps unreliably) that fish have pretty good learning capabilities.
~
That makes sense to me.
No, it's not complexity but the content of the preferences that makes the difference. Sorry for mentioning complexity - I didn't mean to imply that it was the morally relevant feature.
I'm not yet sure what sort of preferences give an agent morally weighty status...the only thing I'm pretty sure about is that the morally relevant component is contained somewhere within the preferences, with intelligence as a possible mediating or enabling factor.
Here's one pattern I think I've identified:
I belong within Reference Class X.
All beings in Reference Class X care about other beings in Reference Class X, when you extrapolate their volition.
When I hear about altruistic mice, it is evidence that the mouse's extrapolated volition would cause it to care about Class X beings' preferences to the extent that it can comprehend them. The cross-species altruism of dogs and dolphins and elephants is an especially strong indicator of Class X membership.
On the other hand, the within-colony altruism of bees (basically identical to Reference Class X except it only applies to members of the colony and I do not belong in it), or the swarms and symbiosis of fishes or bacterial gut flora, wouldn't count...being in Reference Class X is clearly not the factor behind the altruism in those cases.
...which sounds awfully like reciprocal altruism in practice, doesn't it? Except that, rather than looking at the actual act of reciprocation of altruism, I'd be extrapolating the agent's preferences for altruism. Perhaps Class X would be better named "Friendly", in the "Friendly AI" sense - all beings within the class are to some extent Friendly towards each other.
This is at the rough edge of my thinking though - the ideas as just stated are experimental and I don't have well defined notions about which preferences matter yet.
Edit: Another (very poorly thought out) trend which seems to emerge is that agents which have a certain sort of awareness are entitled to a sort of bodily autonomy ... because it seems immoral to sit around torturing insects if one has no instrumental reason to do so. (But is it immoral in the sense that there are a certain number of insects which morally outweigh a human? Or is it immoral in a virtue-ethic-y, "this behavior signals sadism" sort of way?)
My main point is that I'm mildly guessing that it's probably safe to narrow down the problem to some combination of preference functions and level of awareness. In any case, I'm almost certain that there exist preference functions which are sufficient (but maybe not necessary?) to confer moral weight onto an agent...and though there may be other factors unrelated to preference or intelligence that play a role, the preference function is the only thing with a concrete definition that I've identified so far.
Just so I understand you better, how would you compare and contrast this kind of pro-X "kin" altruism with utilitarianism?