Qiaochu_Yuan comments on Effective Altruism Through Advertising Vegetarianism? - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I asked this before but don't remember if I got any good answers: I am still not convinced that I should care about animal suffering. Human suffering seems orders of magnitude more important. Also, meat is delicious and contains protein. What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian? Alternatively, how much would you be willing to pay me to stop eating meat?
Huh. I'm drawing a similar blank as if someone asked me to provide an argument for why the suffering of red-haired people should count equally to the suffering of black-haired people. Why would the suffering of one species be more important than the suffering of another? Yes, it is plausible that once your nervous system becomes simple enough, you no longer experience anything that we would classify as suffering, but then you said "human suffering is more important", not "there are some classes of animals that suffer less". I'm not sure I can offer a good argument against "human suffering is more important", because it strikes me as so completely arbitrary and unjustified that I'm not sure what the arguments for it would be.
I've interacted with enough red-haired people and enough black-haired people that (assuming the anti-zombie principle) I'm somewhat confident that there's no big difference on average between the ways they suffer. I'm nowhere near as confident about fish.
I already addressed that uncertainty in my comment:
To elaborate: it's perfectly reasonable to discount the suffering of e.g. fish by some factor because one thinks that fish probably suffer less. But as I read it, someone who says "human suffering is more important" isn't saying that: they're saying that they wouldn't care about animal suffering even if it was certain that animals suffered just as much as humans, or even if it was certain that animals suffered more than humans. It's saying that no matter the intensity or nature of the suffering, only suffering that comes from humans counts.
Even less so about silverfish, despite their complex mating rituals.
Because one of those species is mine?
Historically, most humans have viewed a much smaller set of (living, mortal) organisms as being the set of (living, mortal) organisms whose suffering matters, e.g. human members of their own tribe. How would you classify these humans? Would you say that their morality is arbitrary and unjustified? If so, I wonder why they're so similar. If I were to imagine a collection of arbitrary moralities, I'd expect it to look much more diverse than this. Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now? If so, have you read gwern's The Narrowing Circle (which is the reason for the living and mortal qualifiers above)?
There is something in human nature that cares about things similar to itself. Even if we're currently infected with memes suggesting that this something should be rejected insofar as it distinguishes between different humans (and I think we should be honest with ourselves about the extent to which this is a contingent fact about current moral fashions rather than a deep moral truth), trying to reject it as much as we can is forgetting that we're rebelling within nature.
I care about humans because I think that in principle I'm capable of having a meaningful interaction with any human: in principle, I could talk to them, laugh with them, cry with them, sing with them, dance with them... I can't do any of these things with, say, a fish. When I ask my brain in what category it places fish, it responds "natural resources." And natural resources should be conserved, of course (for the sake of future humans), but I don't assign them moral value.
Yes! We know stuff that our ancestors didn't know; we have capabilities that they didn't have. If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere. Yes, given bounded resources, I'm going to protect me and my friends and other humans before worrying about other creatures, but that's not because nonhumans don't matter, but because in this horribly, monstrously unfair universe, we are forced to make tradeoffs. We do what we must, but that doesn't make it okay.
I'm more than willing to agree that our ancestors were factually confused, but I think it's important to distinguish between moral and factual confusion. Consider the following quote from C.S. Lewis:
I think our ancestors were primarily factually, rather than morally, confused. I don't see strong reasons to believe that humans over time have made moral, as opposed to factual, progress, and I think attempts to convince me and people like me that we should care about animals should rest primarily on factual, rather than moral, arguments (e.g. claims that smarter animals like pigs are more psychologically similar to humans than I think they are).
If I write a computer program with a variable called isSuffering that I set to true, is it suffering?
Cool. Then we're in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead), which is fine with me.
(I have no idea how consciousness works, so in general, I can't answer these sorts of questions, but) in this case I feel extremely confident saying No, because the variable names in the source code of present-day computer programs can't affect what the program is actually doing.
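The point that variable names can't affect what a program actually does can be made concrete with a minimal sketch (the function names and values here are hypothetical, purely for illustration):

```python
# Two "programs" that differ only in the name of one variable.
# Renaming a variable is behavior-preserving: the label "isSuffering"
# is chosen by the programmer and plays no causal role in execution.

def program_a():
    is_suffering = True  # evocatively named flag
    return 0 if is_suffering else 1

def program_b():
    x_1729 = True  # the same flag under an arbitrary name
    return 0 if x_1729 else 1

# The two programs are indistinguishable in behavior.
assert program_a() == program_b()
```

Whatever facts about a system make it a moral patient, they would have to be facts about what the system does, not about the labels in its source code.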
That doesn't follow if it turns out that preventing animal suffering is sufficiently cheap.
I'm not sure moral intuitions divide as cleanly into factual and nonfactual components as this suggests. Learning new facts can change our motivations in ways that are in no way logically or empirically required of us, because our motivational and doxastic mechanisms aren't wholly independent. (For instance, knowing a certain fact may involve visualizing certain circumstances more concretely, and vivid visualizations can certainly change one's affective state.) If this motivational component isn't what you had in mind as the 'moral', nonfactual component of our judgments, then I don't know what you do have in mind.
I don't think this is specifically relevant. I upvoted your 'blue robot' comment because this is an important issue to worry about, but 'that's a black box' can't be used as a universal bludgeon. (Particularly given that it defeats appeals to 'isHuman' even more thoroughly than it defeats appeals to 'isSuffering'.)
I assume you're being tongue-in-cheek here, but be careful not to mislead spectators. 'Human life isn't perfect, ergo we are under no moral obligation to eschew torturing non-humans' obviously isn't sufficient here, so you need to provide more details showing that the threats to humanity warrant (provisionally?) ignoring non-humans' welfare. White slave-owners had plenty of white-person-specific problems to deal with, but that didn't exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.
I think of moral confusion as a failure to understand your actual current or extrapolated moral preferences (introspection being unreliable and so forth).
Nope.
I don't think this analogy holds water. White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can't do.
Sure. And humans are aware that fish are capable of all sorts of things that rocks and sea hydras can't do. I don't see a relevant disanalogy. (Other than the question-begging one 'fish aren't human'.)
I guess that should've ended "...that fish can't do and that are important parts of how they interact with other white people." Black people are capable of participating in human society in a way that fish aren't.
A "reversed stupidity is not intelligence" warning also seems appropriate here: I don't think the correct response to disagreeing with racism and sexism is to stop discriminating altogether in the sense of not trying to make distinctions between things.
I don't think we should stop making distinctions altogether either; I'm just trying not to repeat the mistakes of the past, or analogous mistakes. The straw-man version of this historical focus is to take 'the expanding circle' as a universal or inevitable historical progression; the more interesting version is to try to spot a pattern in our past intellectual and moral advances and use it to hack the system, taking a shortcut to a moral code that's improved far beyond contemporary society's hodgepodge of standards.
I think the main lesson from 'expanding circle' events is that we should be relatively cautious about assuming that something isn't a moral patient, unless we can come up with an extremely principled and clear example of a necessary condition for moral consideration that it lacks. 'Black people don't have moral standing because they're less intelligent than us' fails that criterion, because white children can be unintelligent and yet deserve to be treated well. Likewise, 'fish can't participate in human society' fails, because extremely pathologically antisocial or socially inept people (of the sort that can't function in society at all) still shouldn't be tortured.
(Plus many fish can participate in their own societies. If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them? Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn't give either civilization the right to oppress the other.)
On the other hand, 'rocks aren't conscious' does seem to draw on a good and principled necessary condition -- anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum. So excluding completely unconscious things has the shape of a good policy. (Sure, it's a bit of an explanatory IOU until we know exactly what the neural basis of 'consciousness' is, but 'intelligent' and 'able to participate in human society' are IOUs in the same sense.) Likewise for gods and dead bodies -- the former don't exist, and the latter again fail very general criteria like 'is it conscious?' and 'can it suffer?' and 'can it desire?'. These are fully general criteria, not ad-hoc or parochial ones, so they're a lot less likely to fall into the racism trap.
Possibly they fall into a new and different trap, though? Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it. The chances of our engineering (or encountering in the stars) new species that blur the lines between our concepts of psychological 'humanity' and 'inhumanity' are significant, and that makes it dangerous to adopt a policy of 'assume everything with a weird appearance or behavior has no moral rights until we've conclusively proved that its difference from us is only skin-deep'.
I've seen that C.S. Lewis quote before, and it seems to me quite mistaken. In this part:
Lewis seems to suggest that executing a witch, per se, is what we consider bad. But that's wrong. What was bad about witch hunts was:
1. People were executed without anything resembling solid evidence of their guilt — which of course could not possibly have been obtained, seeing as how they were not guilty and the crimes they were accused of were imaginary; but my point is that the "trial" process was horrifically unjust and monstrously inhumane (torture to extract confessions, etc.). If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.
2. Punishments were terribly inhumane — burning alive? Come now. Even if we thought witches existed today, and even if we thought the death penalty was an appropriate punishment, we'd carry it out in a more humane manner, and certainly not as a form of public entertainment (again, one would hope; at least, our moral standards today dictate thus).
So differences of factual belief are not the main issue here. The fact that, when you apply rigorous standards of evidence and fair prosecution practices to the witch issue, witchcraft disappears as a crime, is instructive (i.e. it indicates that there's no such crime in the first place), but we shouldn't therefore conclude that not believing in witches is the relevant difference between us and the Inquisition.
We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury?
If you think humanity as a whole has made substantial moral progress throughout history, what's driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don't have an analogous story about moral progress. How do you distinguish the current state of affairs from "moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral"?
Who knows what kind of things a real witch could do to an executioner, for that matter?
There is a difference between "we should take precautions to make sure the witch doesn't blanket the courtroom with fireballs or charm the jury and all officers of the court; but otherwise human rights apply as usual" and "let's just burn anyone that anyone has claimed to be a witch, without making any attempt to verify those claims, confirm guilt, etc." Regardless of what you think would happen in practice (fear makes people do all sorts of things), it's clear that our current moral standards dictate behavior much closer to the former end of that spectrum. At the absolute least, we would want to be sure that we are executing the actual witches (because every accused person could be innocent and the real witches could be escaping justice), and, for that matter, that we're not imagining the whole witchcraft thing to begin with! That sort of certainty requires proper investigative and trial procedures.
That's two questions ("what drives moral progress" and "how can you distinguish moral progress from a random walk"). They're both interesting, but the former is not particularly relevant to the current discussion. (It's an interesting question, however, and Yvain makes some convincing arguments at his blog [sorry, don't have link to specific posts atm] that it's technological advancement that drives what we think of as "moral progress".)
As for how I can distinguish it from a random walk — that's harder. However, my objection was to Lewis's assessment of what constitutes the substantive difference between our moral standards and those of medieval witch hunters, which I think is totally mistaken. I do not need even to claim that we've made moral progress per se to make my objection.
Considering people seemed to think that this was the best way to find witches, 1 still seems like a factual confusion.
2 was based on a Bible quote, I think. The state hanged witches.
No, they don't. Are you saying it's not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things? In that case, I can respond that the neural correlates of human pain and human suffering might not be bad when implemented in brains that differ from human brains in certain relevant ways (Edit: and would therefore not actually qualify as pain and suffering under your new definition).
I'd do it that way. It doesn't strike me as morally urgent to prevent people with pain asymbolia from experiencing the sensation of "pain". (Subjects report that they notice the sensation of pain, but they claim it doesn't bother them.) I'd define suffering as wanting to get out of the state you're in. If you're fine with the state you're in, it is not what I consider to be suffering.
Ok, that seems workable to a first approximation.
So, a question for anyone who both agrees with that formulation and thinks that "we should care about the suffering of animals" (or some similar view):
Do you think that animals can "want to get out of the state they're in"?
Yes?
This varies from animal to animal. There's a fair amount of research/examination into which animals appear to do so, some of which is linked to elsewhere in this discussion. (At least some examination was linked to in response to a statement about fish)
There's a difference between "it's possible to construct a mind" and "other particular minds are likely to be constructed a certain way." Our minds were built by the same forces that built the other minds we know of. We should expect there to be similarities.
(I also would define it, not in terms of "pain and suffering" but "preference satisfaction and dissatisfaction". I think I might consider "suffering" as dissatisfaction, by definition, although "pain" is more specific and might be valuable for some minds.)
Such as human masochists.
I agree that expecting similarities is reasonable (although which similarities, and to what extent, is the key followup question). I was objecting to the assertion of (logical?) necessity, especially since we don't even have so much as a strong certainty.
I don't know that I'm comfortable with identifying "suffering" with "preference dissatisfaction" (btw, do you mean by this "failure to satisfy preferences" or "antisatisfaction of negative preferences"? i.e. if I like playing video games and I don't get to play video games, am I suffering? Or am I only suffering if I am having experiences which I explicitly dislike, rather than simply an absence of experiences I like? Or do you claim those are the same thing?).
I can't speak for Raemon, but I would certainly say that the condition described by "I like playing video games and am prohibited from playing video games" is a trivial but valid instance of the category /suffering/.
Is the difficulty that there's a different word you'd prefer to use to refer to the category I'm nodding in the direction of, or that you think the category itself is meaningless, or that you don't understand what the category is (reasonably enough; I haven't provided nearly enough information to identify it if the word "suffering" doesn't reliably do so), or something else?
I'm usually indifferent to semantics, so if you'd prefer a different word, I'm happy to use whatever word you like when discussing the category with you.
That one. Also, what term we should use for what categories of things and whether I know what you're talking about is dependent on what claims are being made... I was objecting to Zack_M_Davis's claim, which I take to be something either like:
"We humans have categories of experiences called 'pain' and 'suffering', which we consider to be bad. These things are implemented in our brains somehow. If we take that implementation and put it in another kind of brain (alternatively: if we find some other kind of brain where the same or similar implementation is present), then this brain is also necessarily having the same experiences, and we should consider them to be bad also."
or...
"We humans have categories of experiences called 'pain' and 'suffering', which we consider to be bad. These things are implemented in our brains somehow. We can sensibly define these phenomena in an implementation-independent way, then if any other kind of brain implements these phenomena in some way that fits our defined category, we should consider them to be bad also."
I don't think either of those claims are justified. Do you think they are? If you do, I guess we'll have to work out what you're referring to when you say "suffering", and whether that category is relevant to the above issue. (For the record, I, too, am less interested in semantics than in figuring out what we're referring to.)
There are a lot of ill-defined terms in those claims, and depending on how I define them I either do or don't. So let me back up a little.
Suppose I prefer that brain B1 not be in state S1.
Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1.
The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.
So if you mean taking the implementation of pain and suffering (S1) from our brains (B1) and putting/finding them or similar (C is high) implementations (S2) in other brains (B2), then yes, I think that if (S1) pain and suffering are bad (I antiprefer them) for us (B1), that's strong but not overwhelming evidence that (S2) pain and suffering are bad (I antiprefer them) for others (B2).
I don't actually think understanding more clearly what we mean by pain and suffering (either S1 or S2) is particularly important here. I think the important term is C.
As long as C is high -- that is, as long as we really are confident that the other brain has a "same or similar implementation", as you say, along salient dimensions (such as manifesting similar subjective experience) -- then I'm pretty comfortable saying I prefer the other brain not experience pain and suffering. And if (S2,B2) is "completely identical" to (S1,B1), I'm "certain" I prefer B2 not be in S2.
But I'm not sure that's actually what you mean when you say "same or similar implementation." You might, for example, mean that they have anatomical points of correspondence, but you aren't confident that they manifest similar experience, or something else along those lines. In which case C gets lower, and I become uncertain about my preferences with respect to (B2,S2).
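The confidence-weighted scheme sketched in the preceding comments can be written out explicitly (this is my own illustrative formalization, not something from the thread; all names and numbers are made up):

```python
# Sketch: my preference that brain B2 not be in state S2 scales with
# my confidence C that (B2, S2) is relevantly similar to a (B1, S1)
# that I already antiprefer.

def preference_against(base_dispreference: float, c: float) -> float:
    """Strength of the preference that B2 not be in S2, given the
    dispreference for (B1, S1) and similarity confidence c in [0, 1]."""
    assert 0.0 <= c <= 1.0
    return base_dispreference * c

# If (B2, S2) is "completely identical" to (B1, S1), i.e. c = 1,
# the preference carries over in full; as c falls, so does it.
assert preference_against(10.0, 1.0) == 10.0
assert preference_against(10.0, 0.3) < preference_against(10.0, 0.9)
```

The scaling need not be linear; the only structural claim is monotonicity in C.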
On why the suffering of one species would be more important than the suffering of another:
Does that also apply to race and gender? If not, why not? Assuming a line-up of ancestors, always mother and daughter, from Homo sapiens back to the common ancestor of humans and chickens and forward in time again to modern chickens, where would you draw the line? A common definition of species in biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter differently from her mother? Is that really the criterion you want to use for making your decisions? And does it at all bother you that racists or sexists can use an analogous line of defense?
I feel psychologically similar to humans of different races and genders but I don't feel psychologically similar to members of most different species.
Uh, no. System 1 doesn't know what a species is; that's just a word System 2 is using to approximately communicate an underlying feeling System 1 has. But System 1 knows what a friend is. Other humans can be my friends, at least in principle. Probably various kinds of posthumans and AIs can as well. As far as I can tell, a fish can't, not really.
This general argument of "the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad" strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn't have this property?
Also no. I think current moral fashion is prejudiced against prejudice. Racism and sexism are not crazy or evil points of view; historically, they were points of view held by many sane humans who would have been regarded by their peers as morally upstanding. Have you read What You Can't Say?
I should add to this that even if I endorse what you call "prejudice against prejudice" here -- that is, even if I agree with current moral fashion that racism and sexism are not as good as their absence -- it doesn't follow that because racists or sexists can use a particular argument A as a line of defense, there's therefore something wrong with A.
There are all sorts of positions which I endorse and which racists and sexists (and Babyeaters and Nazis and Sith Lords and...) might also endorse.
Actually, I do. I try to rely on System 1 as little as possible when it comes to figuring out my terminal value(s). One reason for that, I guess, is that at some point I started out with the premise that I don't want to be the sort of person that would have been racist or sexist in previous centuries. If you don't share that premise, there is no way for me to show that you're being inconsistent -- I acknowledge that.
I should probably clarify - when I said that valuing humans over animals strikes me as arbitrary, I'm saying that it's arbitrary within the context of my personal moral framework, which contains no axioms from which such a distinction could be derived. All morality is ultimately arbitrary and unjustified, so that's not really an argument for or against any moral system. Internal inconsistencies could be arguments, if you value consistency, but your system does seem internally consistent. My original comment was meant more as an explanation of my initial reaction to your question than anything that would be convincing on logical grounds, though I did also assign some probability to it possibly being convincing on non-logical grounds. (Our moral axioms are influenced by what other people think, and somebody expressing their disagreement with a moral position has some chance of weakening another person's belief in that position, regardless of whether that effect is "logical".)
I've been meaning to write a post about how I think it's a really, really bad idea to think about morality in terms of axioms. This seems to be a surprisingly (to me) common habit among LW types, especially since I would have thought it was a habit the metaethics sequence would have stomped out.
(You shouldn't regard it as a strength of your moral framework that it can't distinguish humans from non-human animals. That's evidence that it isn't capable of capturing complexity of value.)
I agree that thinking about morality exclusively in terms of axioms in a system of classical logic is likely to be a rather bad idea, since that makes one underestimate the complexity of morality and the strength of non-logical influences, and overestimate the extent to which morality resembles a system of classical logic in the first place. But I'm not sure if it's that problematic as long as you keep in mind that "axioms" is really just shorthand for something like "moral subprograms" or "moral dynamics".
I did always read the metaethics sequence as establishing the existence of something similar-enough-to-axioms-that-we-might-as-well-use-the-term-axioms-as-shorthand-for-them, with e.g. No Universally Compelling Arguments and Created Already In Motion arguing that you cannot convince a mind of the correctness of some action unless it contains a dynamic which reacts to your argument in the way you wish - in other words, unless your argument builds on things that the mind's decision-making system already cares about, and which could be described as axioms when composing a (static) summary of the mind's preferences.
I'm not really sure of what you mean here. For one, I didn't say that my moral framework can't distinguish humans and non-humans - I do e.g. take a much more negative stance on killing humans than animals, because killing humans would have a destabilizing effect on society and people's feelings of safety, which would contribute to the creation of much more suffering than killing animals would.
Also, whether or not my personal moral framework can capture complexity of value seems irrelevant - CoV is just the empirical thesis that people in general tend to care about a lot of complex things. My personal consciously-held morals are what I currently want to consciously focus on, not a description of what others want, nor something that I'd program into an AI.
Well, I don't think I should care what I care about. The important thing is what's right, and my emotions are only relevant to the extent that they communicate facts about what's right. What's right is too complex, both in definition and consequentialist implications, and neither my emotions nor my reasoned decisions are capable of accurately capturing it. Any consciously-held morals are only a vague map of morality, not morality itself, and so shouldn't hold too much import, on pain of moral wireheading/acceptance of a fake utility function.
(Listening to moral intuitions, possibly distilled as moral principles, might give the best moral advice that's available in practice, but that doesn't mean that the advice is any good. Observing this advice might fail to give an adequate picture of the subject matter.)
I must be misunderstanding this comment somehow? One still needs to decide what actions to take during every waking moment of their lives, and "in deciding what to do, don't pay attention to what you want" isn't very useful advice. (It also makes any kind of instrumental rationality impossible.)
What you want provides some information about what is right, so you do pay attention. When making decisions, you can further make use of moral principles not based on what you want at a particular moment. In both cases, making use of these signals doesn't mean that you expect them to be accurate, they are just the best you have available in practice.
Estimate of the accuracy of the moral intuitions/principles translates into an estimate of value of information about morality. Overestimation of accuracy would lead to excessive exploitation, while an expectation of inaccuracy argues for valuing research about morality comparatively more than pursuit of moral-in-current-estimation actions.
I'm roughly in agreement, though I would caution that the exploration/exploitation model is a problematic one to use in this context, for two reasons:
1) It implies a relatively clear map/territory split: there are our real values, and our conscious model of them, and errors in our conscious model do not influence the actual values. But to some extent, our conscious models of our values do shape our unconscious values in that direction - if someone switches to an exploitation phase "too early", then over time, their values may actually shift over to what the person thought they were.
2) Exploration/exploitation also assumes that our true values correspond to something akin to an external reward function: if our model is mistaken, then the objectively correct thing to do would be to correct it. In other words, if we realize that our conscious values don't match our unconscious ones, we should revise our conscious values. And sometimes this does happen. But on other occasions, what happens is that our conscious model has become installed as a separate and contradictory set of values, and we need to choose which of the values to endorse (in which situations). This happening is a bad thing if you tend to primarily endorse your unconscious values or a lack of internal conflict, but arguably a good thing if you tend to primarily endorse your conscious values.
The process of arriving at our ultimate values seems to be both an act of discovering them and an act of creating them, and we probably shouldn't use terminology like exploration/exploitation that implies that it would be just one of those.
I'm not a very well educated person in this field, but if I may:
I see my various squishy feelings (desires and what-is-right intuitions are in this list) as loyal pets. Sometimes, they must be disciplined and treated with suspicion, but for the most part, they are there to please you in their own dumb way. They're no more enemies than one's preference for foods. In my care for them, I train and reward them, not try to destroy or ignore them. Without them, I have no need to DO better among other people, because I would not be human--that is, some things are important only because I'm a barely intelligent ape-man, and they should STAY important as long as I remain a barely intelligent ape-man. Ignoring something going on in one's mind, even when one KNOWS it is wrong, can be a source of pain, I've found--hypocrisy and indecision are not my friends.
Hope I didn't make a mess of things with this comment.
Human suffering might be orders of magnitude more important. (Though: what reason do you have in mind for this?) But non-human animal suffering is likely to be orders of magnitude more common. Some non-human animals are probably capable of suffering, and we care a great deal about suffering in the case of humans (as, presumably, we would in the case of intelligent aliens). So it seems arbitrary to exclude non-human animal suffering from our concerns completely. Moreover, if you're uncertain about whether animals suffer, you should err on the side of assuming that they do because this is the safer assumption. Mistakenly killing thousands of suffering moral patients over your lifetime is plausibly a much bigger worry than mistakenly sparing thousands of unconscious zombies and missing out on some mouth-pleasures.
I'm not a vegetarian myself, but I do think vegetarianism is a morally superior option. I also think vegetarians should adopt a general policy of not paying people to become vegetarians (except perhaps as a short-term experiment, to incentivize trying out the lifestyle).
I'm a human and I care about humans. Animals only matter insofar as they affect the lives of humans. Is this really such a difficult concept?
I don't mean per organism, I mean in aggregate. In aggregate, I think the totality of animal suffering is orders of magnitude less important than the totality of human suffering.
I'm not disagreeing that animals suffer. I'm telling you that I don't care whether they suffer.
You are many things: a physical object, a living being, a mammal, a member of the species Homo sapiens, an East Asian (I believe), etc. What's so special about the particular category you picked?
The psychological unity of humankind. See also this comment.
Presumably mammals also exhibit more psychological similarity than non-mammals, and the same is probably true about East Asians relative to members of other races. What makes the psychological unity of mankind special?
Moreover, it seems that insofar as you care about humans because they have certain psychological traits, you should care about any creature that has those traits. Since many animals have many of the traits that humans have, and some animals have those traits to a greater degree than some humans do, it seems you should care about at least some nonhuman animals.
I'm willing to entertain this possibility. I've recently been convinced that I should consider caring about dolphins and other similarly intelligent animals, possibly including pigs (so I might be willing to give up pork). I still don't care about fish or chickens. I don't think I can have a meaningful relationship with a fish or a chicken even in principle.
Doesn't follow. If we imagine a personhood metric for animals evaluated over some reasonably large number of features, it might end up separating (most) humans from all nonhuman animals even if for each particular feature there exist some nonhuman animals that beat humans on it. There's no law of ethics saying that the parameter space has to be small.
It's not likely to be a clean separation, and there are almost certainly some exceptional specimens of H. sapiens that wouldn't stand up to such a metric, but -- although I can't speak for Qiaochu -- that's a bullet I'm willing to bite.
Does not follow, since an equally valid conclusion is that Qiaochu_Yuan should not-care about some humans (those that exhibit relevant traits less than some nonhuman animals). One person's modus ponens is etc.
I suspect that if you plotted all living beings by psychological similarity with Qiaochu_Yuan, there would be a much bigger gap between the -- [reminds himself about small children, people with advanced-stage Alzheimer's, etc.] never mind.
:-)
(I could steelman my yesterday self by noticing that even though small children aren't similar to QY they can easily become so in the future, and by replacing “gap” with “sparsely populated region”.)
Every human I know cares at least somewhat about animal suffering. We don't like seeing chickens endlessly and horrifically tortured -- and when we become vividly acquainted with such torture, our not-liking-it generally manifests as a desire for the torture to stop, not just as a desire to become ignorant that this is going on so it won't disturb our peace of mind. I'll need more information to see where the disanalogy is supposed to be between compassion for other species and compassion for other humans.
Are you certain you don't care?
Are you certain that you won't end up viewing this dispassion as a bias on your part, analogous to people in history who genuinely didn't care at all about black people (but would regret and abandon this apathy if they knew all the facts)?
If you feel there's any realistic chance you might discover that you do care in the future, you should again err strongly on the side of vegetarianism. Feeling a bit silly 20 years from now because you avoided torturing beings it turns out you don't care about is a much smaller cost than learning 20 years from now that you're the Hitler of cows. Vegetarianism accommodates meta-uncertainty about ethical systems better than its rivals do.
I don't feel psychologically similar to a chicken in the same way that I feel psychologically similar to other humans.
No, or else I wouldn't be asking for arguments.
This is a good point.
I don't either, but unless I can come up with a sharp and universal criterion for distinguishing all chickens from all humans, chickens' psychological alienness to me will seem a difference of degree more than of kind. It's a lot easier to argue that chicken suffering matters less than human suffering (or to argue that chickens are zombies) than to argue that chicken suffering is completely morally irrelevant.
Some chickens may very well have more psychologically in common with me than I have in common with certain human infants or with certain brain-damaged humans; but I still find myself able to feel that sentient infants and disabled sentient humans oughtn't be tortured. (And not just because I don't want their cries to disturb my own peace of mind. Nor just because they could potentially become highly intelligent, through development or medical intervention. Those might enhance the moral standing of any of these organisms, but they don't appear to exhaust it.)
That's not a good point, that's a variety of Pascal's Mugging: you're suggesting that the fact that the possible consequence is large ("I tortured beings" is a really negative thing) means that even if the chance is small, you should act on that basis.
It's not a variant of Pascal's Mugging, because the chances aren't vanishingly small and the payoff isn't nearly infinite.
I don't believe you. If you see someone torturing a cat, a dolphin or a monkey, would you feel nothing? (Suppose that they are not likely to switch to torturing humans, to avoid "gateway torture" complications.)
I don't want to see animals get tortured because that would be an unpleasant thing to see, but there are lots of things I think are unpleasant things to see that don't have moral valence (in another comment I gave the example of seeing corpses get raped).
I might also be willing to assign dolphins and monkeys moral value (I haven't made up my mind about this), but not most animals.
Do you have another example besides the assault of corpses? I can easily see real moral repugnance from the effect it has on the offenders, who are victims of their own actions. If you find it unpleasant only when you see it, would not they find it horrific when they perform it?
Also in these situations, repugnance can leak due to uncertainty of other real moral outcomes, such as the (however small) likelihood of family members of the deceased learning of the activity, for whom these corpses have real moral value.
Two Girls One Cup?
Seeing humans perform certain kinds of body modifications would also be deeply unpleasant to me, but it's also not an act I assign moral valence to (I think people should be allowed to modify their bodies more or less arbitrarily).
My problem with this question is that if I see video of someone torturing a cat when I am confident there was no actual cat-torturing involved in creating those images (e.g., I am confident it was all photoshopped), what I feel is pretty much indistinguishable from what I feel if I see video of someone torturing a cat when I am confident there was actual cat-torturing.
So I'm reluctant to treat what I feel in either case as expressing much of an opinion about suffering, since I feel it roughly equally when I believe suffering is present and when I don't.
So if you can factor-out, so to speak, the actual animal suffering: If you had to choose between "watch that video, no animal was harmed" versus "watch that video, an animal was harmed, also you get a biscuit (not the food, the 100 squid (not the animals, the pounds (not the weight unit, the monetary unit)))", which would you choose? (Your feelings would be the same, as you say, your decision probably wouldn't be. Just checking.)
What?
A biscuit provides the same number of calories as 100 SQUID, which stands for Superconducting Quantum Interference Device, which weigh a pound apiece, which masses 453.6 grams, which converts to 4 * 10^16 joules, which can be converted into 1.13 * 10^10 kilowatt-hours, which are worth 12 cents per kW-hr, so around 136 billion dollars or so.
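As an aside, the chain of conversions in the comment above is easy to check numerically. Here is a minimal sketch of that arithmetic; the one-pound-per-SQUID mass and the $0.12/kWh electricity price are the comment's own assumptions, not established figures:

```python
# Sanity check of the joke conversion: 100 one-pound SQUIDs -> mass-energy -> dollars.
m_kg = 0.4536            # one pound in kilograms
c = 299_792_458          # speed of light in m/s

joules_per_squid = m_kg * c**2             # E = mc^2, roughly 4.08e16 J
kwh_per_squid = joules_per_squid / 3.6e6   # 1 kWh = 3.6e6 J, roughly 1.13e10 kWh
usd_per_squid = kwh_per_squid * 0.12       # at $0.12/kWh, roughly $1.36 billion
usd_for_100_squid = usd_per_squid * 100    # roughly $136 billion for all 100

print(f"per squid: ${usd_per_squid:.3g}; for 100: ${usd_for_100_squid:.3g}")
```

One SQUID comes out to about $1.36 billion, so the $136 billion figure corresponds to converting all 100 of them.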
...plus a constant.
Reminds me of ... Note the name of the website. She doesn't look happy! "I am altering the deal. Pray I don't alter it any further."
Edit: Also, 1.13 * 10^10 kilowatt-hours at 12 cents each yields 1.36 billion dollars, not 136 billion dollars! An honest mistake (cents, not dollars per kWh), or a scam? And as soon as Dmitry is less active ...
"squid" is slang for a GBP, i.e. Pound Sterling, although I'm more used to hearing the similar "quid." One hundred of them can be referred to as a "biscuit," apparently because of casino chips, similar to how people in America will sometimes refer to a hundred dollars as a "benjamin."
That is, what are TheOtherDave's preferences between watching an unsettling movie that does not correspond to reality and watching an unsettling movie that does correspond to reality, but they're paid some cash.
"Quid" is slang, "squid" is a commonly used jokey soundalike. There's a joke that ends "here's that sick squid I owe you".
EDIT: also, never heard "biscuit" = £100 before; that's a "ton".
Does Cockney rhyming slang not count as slang?
Well, I figured that much out from googling, but I was more reacting to what seems like a deliberate act of obfuscation on Kawoomba's part that serves no real purpose.
Nested parentheses are their own reward, perhaps?
It amuses me that despite making neither head nor tail of the unpacking, I answered the right question.
Well, to the extent that my noncommital response can be considered an answer to any question at all.
So to be clear - you do some Googling and find two videos, one has realistic CGI animal harm, the other real animal harm; assume the CGI etc is so good that I wouldn't be able to tell which was which if you hadn't told me. You don't pay for the animal harm video, or in any way give anyone an incentive to harm an animal in fetching it; just pick up a pre-existing one. I have a choice between watching the fake-harm video (and knowing it's fake) or watching the real-harm video and receiving £100.
If the reward is £100, I'll take the £100; if it's an actual biscuit, I prefer to watch the fake-harm video.
I'm genuinely unsure, not least because of your perplexing unpacking of "biscuit".
Both examples are unpleasant; I don't have a reliable intuition as to which is more so if indeed either is.
I have some vague notion that if I watch the real-harm video that might somehow be interpreted as endorsing real-harm more strongly than if I watch the fake-harm video, like through ratings or download monitoring or something, which inclines me to the fake-harm video. Though whether I'm motivated by the vague belief that such differential endorsement might cause more harm to animals, or by the vague belief that it might cause more harm to my status, I'm again genuinely unsure of. In the real world I usually assume that when I'm not sure it's the latter, but this is such a contrived scenario that I'm not confident of that either.
If I assume the biscuit is a reward of some sort, then maybe that reward is enough to offset the differential endorsement above, and maybe it isn't.
I'll chime in to comment that QiaochuYuan's[1] views as expressed in this entire thread are quite similar to my own (with the caveat that for his "human" I would substitute something like "sapient, self-aware beings of approximately human-level intelligence and above" and possibly certain other qualifiers having to do with shared values, to account for Yoda/Spock/AIs/whatever; it seems like QiaochuYuan uses "approximately human" to mean roughly this).
So, please reconsider your disbelief.
[1] Sorry, the board software is doing weird things when I put in underscores...
So, presumably you don't keep a pet, and if you did, you would not care for its well-being?
Indeed, I have no pets.
If I did have a pet, it is possible that I would not care for it (assuming animal cruelty laws did not exist), although it is more likely that I would develop an attachment to it, and would come to care about its well-being. That is how humans work, in my experience. I don't think this necessarily has any implications w.r.t. the moral status of nonhuman animals.
Do you consider young children and very low-intelligence people to be morally-relevant?
(If - in the case of children - you consider potential for later development to be a key factor, we can instead discuss only children who have terminal illnesses.)
Good question. Short answer: no.
Long answer: When I read Peter Singer, what I took away was not, as many people here apparently did, that we should value animals; what I took away is that we should not value fetuses, newborns, and infants (to a certain age, somewhere between 0 and 2 years [1]). That is, I think the cutoff for moral relevance is somewhere above, say, cats, dogs, newborns... where exactly? I'm not sure.
Humans who have a general intelligence so low that they are incapable of thinking about themselves as conscious individuals are also, in my view, not morally relevant. I don't know whether such humans exist (most people with Down syndrome don't quite seem to fit that criterion, for instance).
There are many caveats and edge cases, for instance: what if the low-intelligence condition is temporary, and will repair itself with time? Then I think we should consider the wishes of the self that the person was before the impairment, and the rights of their future, non-impaired, selves. But what if the impairment can be repaired using medical technology? Same deal. What if it can't? Then I would consider this person morally irrelevant. What if the person was of extremely low intelligence, and had always been so, but we could apply some medical intervention to raise their intelligence to at least normal human level? I would consider that act morally equivalent to creating a new sapient being (whether that's good or bad is a separate question).
So: it's complicated. But to answer practical questions: I don't consider infanticide the moral equivalent of murder (although it's reasonable to outlaw it anyway, as birth is a good Schelling point, but the penalty should surely be nowhere near as harsh as for killing an adult or older child). The rights of low-intelligence people is a harder issue partly because there are no obvious cutoffs or metrics.
I hope that answers your question; if not, I'll be happy to elaborate further.
Ethical generalizations check: Do you care about Babyeaters? Would you eat Yoda?
Nope (can't parse them as approximately human without revulsion). Nope (approximately human).
Would that allow absorbing some of his midichlorians? Black magic! Well, I might try (since he died of natural causes anyway). But Yoda dies without leaving a corpse. It would be difficult. The only viable strategy would seem to be to have Yoda anesthetize himself a minute before he ghosts ("becomes one with the force"). Then the flesh would remain corporeal for consumption.
The real ethical test would be: would I freeze Yoda's head in carbonite, acquire brain scanning technology and upload him into a robot body? Yoda may have religious objections to the practice, so I may honour his preferences while being severely disappointed. I suspect I'd choose the Dark Side of the Force myself. The Sith philosophy seems much more compatible with life extension by whatever means necessary.
It should be noted that Yoda has an observable afterlife. Obi-wan had already appeared after his body had died, apparently in full possession of his memories and his reasoning abilities; Yoda proposes to follow in Obi-wan's footsteps, and has good reason to believe that he will be able to do so.
Sith philosophy, for reference:
Peace is a lie, there is only passion.
Through passion, I gain strength.
Through strength, I gain power.
Through power, I gain victory.
Through victory, my chains are broken.
The Force shall free me.
Actual use of Sith techniques seems to turn people evil at ridiculously accelerated rates. At least in-universe it seems that sensible people would write off this attractive-sounding philosophy as window dressing on an extremely damaging set of psychic techniques.
If you're lucky, it might grant intrinsic telepathy, as long as the corpse is relatively fresh.
I wouldn't eat flies or squids either. But I know that that's a cultural construct.
Let's ask another question: would I care if someone else eats Yoda?
Well, I might, but only because eating Yoda is, in practice, correlated with lots of other things I might find undesirable. If I could be assured that such was not the case (for instance, if there was another culture which ate the dead to honor them, that's why he ate Yoda, and Yoda's will granted permission for this), then no, I wouldn't care if someone else eats Yoda.
In practice? In common Yoda-eating practice? Something about down to earth 'in practice' empirical observations about things that can not possibly have ever occurred strikes me as broken. Perhaps "would be, presumably, correlated with".
In Yoda's case he could even have just asked for permission from Yoda's force ghost. Jedi add a whole new level of meaning to "Living Will".
"In practice" doesn't mean "this is practiced", it means "given that this is done, what things are, with high probability, associated with it in real-life situations" (or in this case, real-life-+-Yoda situations). "In practice" can apply to rare or unique events.
I really don't think statements of the form "X is, in practice, correlated with Y" should apply to situations where X has literally never occurred. You might want to say "I expect that X would, in practice, be correlated with Y" instead.
All events have never occurred if you describe them with enough specificity; I've never eaten this exact sandwich on this exact day.
While nobody has eaten Yoda before, there have been instances where people have eaten beings that could talk intelligently.
I share Qiaochu's reasoning.
I found it interesting to compare "this is the price at which we could buy animals not existing" to the "this is the price people are willing to pay for animals to exist so they can eat them," because it looks like the second is larger, often by orders of magnitude. (This shouldn't be that surprising for persuasion; if you can get other people to spend their own resources, your costs are much lower.)
It also bothers me that the so many of the animals saved are fish; they dominate the weighted mean, have very different lifespans from chickens, and to the best of my knowledge cannot be 'factory farmed' in the same way. [Edit: It appears that conditions for fish on fish farms are actually pretty bad, to the point that many species of fish cannot survive modern farming techniques. So, no comment on the relative badness.]
From what I know, fish farming doesn't sound pleasant, though perhaps it's not nearly as bad as chicken farming.
If that description makes you think that fish farming might possibly be in the same ballpark as chicken farming, then you're pretty ignorant of factory farming. Maybe you haven't seen enough propaganda?
Your other link is about killing the fish. Focus on the death rather than the life may be good for propaganda, but do you really believe that much of the suffering is there? Indeed, your post claimed to be about days of life.
Added: it makes me wonder if activists are corrupted by dealing with propaganda to focus on the aspects for which propaganda is most effective. Or maybe it's just that the propaganda works on them.
I never said they were in the same ballpark. Just that fish farming is also something I don't like.
~
Yes, I do.
~
I agree that might not make much sense for fish, except in so far as farming causes more fish to be birthed than otherwise would.
~
I think this is a bias that is present in any kind of person that cares about advocating for or against a cause.
Here's a gruesome video on the whole fish thing if you're in to gruesome videos.
Well, they can move more, but on the other hand they tend to pollute each others' environment in a way that terrestrial farmed animals do not, meaning that not all commercially fished species can survive being farmed with modern techniques, and those which can are not necessarily safe for humans to eat in the same quantities.
I am a moral anti-realist, so I don't think there's any argument I could give you to persuade you to change your values. To me, it feels very inconsistent to not value animals -- it sounds to me exactly like someone asking for an argument about why they ought to care about foreigners.
Also, do you really not value animals? I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction. Though maybe you wouldn't, or you would think the reaction irrational? I don't know.
However, if you really do care about humans and humans alone, the environmental argument still has weight, though certainly less.
~
One can get both protein and deliciousness from non-meat sources.
~
I'm not sure. I don't think there's a way I could make that transaction work.
Some interesting things about this example:
Distance seems to have a huge impact when it comes to the bystander effect, and it's not clear that it's irrational. If you are the person who is clearly best situated to save a puppy from torture, that seems different from the fact that dogs are routinely farmed for meat in other parts of the world, by armies of people you could not hope to personally defeat or control.
Someone who is willing to be sadistic to animals might be sadistic towards humans as well, and so they may be a poor choice to associate with (and possibly a good choice to anti-associate with).
Many first world countries have some sort of law against bestiality. (In the US, this varies by state.) However, any justification for these laws based on the rights of the animals would also rule out related behavior in agribusiness, which is generally legal. There seems to be a difference between what people are allowed to do for fun and what they're allowed to do for profit; this makes sense if we view the laws as directed not at actions, but at kinds of people.
Well, and what would you say to someone who thought that?
I don't know. It doesn't feel like I do. You could try to convince me that I do even if you're a moral anti-realist. It's plausible I just haven't spent enough time around animals.
Probably. I mean, all else being equal I would prefer that an animal not be tortured, but in the case of farming and so forth all else is not equal. Also, like Vaniver said, any negative reaction I have directed at the person is based on inferences I would make about that person's character, not based on any moral weight I directly assign to what they did. I would also have some sort of negative reaction to someone raping a corpse, but it's not because I value corpses.
My favorite non-meat dish is substantially less delicious than my favorite meat dish. I do currently get a decent amount of protein from non-meat sources, but asking someone who gets their protein primarily from meat to give up meat means asking them to incur a cost in finding and purchasing other sources of protein, and that cost needs to be justified somehow.
Really? This can't be that hard a problem to solve. We could use a service like Fiverr, with you paying me $5 not to eat meat for some period of time.
Right now, I don't know. I feel like it would be playing a losing game. What would you say?
I'm not sure how I would do that. Would you kick a puppy? If not, why not?
How could I verify that you actually refrain from eating meat?
I would probably say something like "you just haven't spent enough time around them. They're less different from you than you think. Get to know them, and you might come to see them as not much different from the people you're more familiar with." In other words, I would bet on the psychological unity of mankind. Some of this argument applies to my relationship with the smarter animals (e.g. maybe pigs) but not to the dumber ones (e.g. fish). Although I'm not sure how I would go about getting to know a pig.
No. Again, all else being equal, I would prefer that animals not suffer, but in the context of reducing animal suffering coming from human activity like farming, all else is not equal. I wouldn't chop down a tree either, but it's not because I think trees have moral value, and I don't plan to take any action against the logging industry as a result.
Oh, that's what you were concerned about. It would be beneath my dignity to lie for $5, but if that isn't convincing, then I dunno. (On further thought, this seems like a big problem for measuring the actual impact of any proposed vegetarian proselytizing. How can you verify that anyone actually refrains from eating meat?)
"No. Again, all else being equal, I would prefer that animals not suffer, but in the context of reducing animal suffering coming from human activity like farming, all else is not equal. I wouldn't chop down a tree either, but it's not because I think trees have moral value, and I don't plan to take any action against the logging industry as a result."
All else is never precisely equal. If I offered you £100 to do one of these of your choice, would you rather a) give up meat for a month b) beat a puppy to death
I suspect that the vast majority of people who eat battery chicken to save a few dollars would require much more money to directly cause the same sort of suffering to a chicken. Whereas when it came to chopping down trees it would be more a matter of whether the cash was worth the effort. Of course, it could very easily be that the problem here is not with Person A (detached, callous eater of battery chicken) but with Person B (overempathetic, anthropomorphizing person who doesn't like to see chickens suffering), but the contrast is quite telling.
For what it's worth, I also wouldn't treat painlessly and humanely slaughtering a chicken who has lived a happy and fulfilled life with my own hands equivalently to paying someone else to do so where I don't have to watch. There's quite a contrast there, as well, but it seems to have little to do with suffering.
That said, I would almost undoubtedly prefer watching a chicken be slaughtered painlessly and humanely to watching it suffer while being slaughtered.
Probably also to watching it suffer while not being slaughtered.
Mostly, I conclude that my preferences about what I want to do, what I want to watch, and what I want to have done on my behalf, are not well calibrated to one another.
Yeah, that's the only clear conclusion. The general approach of moral argument is to try to say that one of your intuitions (whether the not caring about it being killed offstage or not enjoying throttling it) is the true/valid one and the others should be overruled. Honestly not sure where I stand on this.
Mm. If you mean to suggest that the outcome of moral reasoning is necessarily that one of my intuitions gets endorsed, then I disagree; I would expect worthwhile moral reasoning to sometimes endorse claims that my intuition didn't provide in the first place, as well as claims that my intuitions consistently reject.
In particular, when my moral intuitions conflict (or, as SaidAchmiz suggests, when the various states that I have a hard time cleanly distinguishing from my moral intuitions despite not actually being any such thing conflict), I usually try to envision patterning the world in different ways that map in some fashion to some weighting of those states, ask myself what the expected end result of that patterning is, see whether I have clear preferences among those expected endpoints, work backwards from my preferred endpoint to the associated state-weighting, and endorse that state-weighting.
The result of that process is sometimes distressingly counter-moral-intuitive.
Sorry, I was unclear: I meant moral (and political) arguments from other people - moral rhetoric if you like - often takes that form.
I don't think that "not enjoying killing a chicken" should be described as an "intuition". Moral intuitions generally take the form of "it seems to me that / I strongly feel that so-and-so is the right thing to do / the wrong thing to do / bad / good / etc." What you do or do not enjoy doing is a preference, like enjoying chocolate ice cream, not enjoying ice skating, being attracted to blondes, etc. Preferences can't be "true" or "false", they're just facts about your mental makeup. (It may make sense to describe a preference as "invalid" in certain senses, however, but not obviously any senses relevant to this current discussion.)
So for instance "I think killing a chicken is morally ok" (a moral intuition) and "I don't like killing chickens" (a preference) do not conflict with each other any more than "I think homosexuality is ok" and "I am heterosexual" conflict with each other, or "Being a plumber is ok (and in fact plumbers are necessary members of society)" and "I don't like looking inside my plumbing".
Now, if you wanted to take this discussion to a slightly more subtle level, you might say: "This is different! Killing chickens causes in me a kind of psychic distress usually associated with witnessing or performing acts that I also consider to be immoral! Surely this is evidence that this, too, is immoral?" To that I can respond only that yes, this may be evidence in the strict Bayesian sense, but the signals your brain generates may be flawed. We should evaluate the ethical status of the act in question explicitly; yes, we should take moral intuitions into account, but my intuition, at least, is that chicken-killing is fine, despite having no desire to do it myself. This screens off the "agh I don't want to do/watch this!" signal.
The dividing lines between the kinds of cognitive states I'm inclined to call "moral intuitions" and the kinds of cognitive states I'm inclined to call "preferences" and the kinds of cognitive states I'm inclined to call "psychic distress" are not nearly as sharp, in my experience, as you seem to imply here. There's a lot of overlap, and in particular the states I enter surrounding activities like killing animals (especially cute animals with big eyes) don't fall crisply into just one category.
But, sure, if we restrict the discussion to activities where those categories are crisply separated, those distinctions are very useful.
That doesn't necessarily mean that I have animals being tortured as a negative terminal value: I might only dislike that because it generates negative warm fuzzies.
This also applies to foreigners, though.
Well, it also applies to blood relatives, for that matter.
Unfortunately, the typical argument in favour of caring about foreigners, people of other races, etc., is that they are human too.
If distinct races were instead distinct human subspecies or closely-related species, would the moral case for treating these groups equally ipso facto collapse?
If not, then 'they're human too' must be a stand-in for some other feature that's really doing the pushing and pulling of our moral intuitions. At the very least, we need to taboo 'human' to figure out what the actual relevant concept is, since it's not the standard contemporary biological definition.
In my case, I think that the relevant concept is human-level (or higher) intelligence. Of all the known species on Earth, humanity is the only one that I know to possess human-level or higher intelligence.
One potentially suitable test for human-level intelligence is the Turing test; due to their voice-mimic abilities, a parrot or a mynah bird may sound human at first, but it will not in general pass a Turing test.
Biological engineering on an almost-sufficiently-intelligent species (such as a dolphin) may lead to another suitably intelligent species with very little relation to a human.
That different races have effectively the same intellectual capacities is surely an important part of why we treat them as moral equals. But this doesn't seem to me to be entirely necessary — young children and the mentally handicapped may deserve most (though not all) moral rights, while having a substantially lower level of intelligence. Intelligence might also turn out not to be sufficient; if a lot of why we care about other humans is that they can experience suffering and pleasure, and if intelligent behavior is possible without affective and evaluative states like those, then we might be able to build an AI that rivaled our intelligence but did not qualify as a moral patient, or did not qualify as one to the same extent as less-intelligent-but-more-suffering-prone entities.
Clearly, below-human-average intelligence is still worth something ... so is there a cutoff point or what?
(I think you're onto something with "intelligence", but since intelligence varies, shouldn't how much we care vary too? Shouldn't there be some sort of sliding scale?)
That's a very good question.
I don't know.
Thinking through my mental landscape, I find that in most cases I value children (slightly) above adults. I think that this is more a matter of potential than anything else. I also put some value on an unborn human child, which could reasonably be said to have no intelligence at all (especially early on).
So, given that, I think that I put some fairly significant value on potential future intelligence as well as on present intelligence.
But, as you point out, below-human intelligence is still worth something.
...
I don't think there's really a firm cutoff point, such that one side is "worthless" and the other side is "worthy". It's a bit like a painting.
At one time, there's a blank canvas, a paintbrush, and a pile of tubes of paint. At this point, it is not a painting. At a later time, there's a painting. But there isn't one particular moment, one particular stroke of the brush, when it goes from "not-a-painting" to "painting". Similarly for intelligence; there isn't any particular moment when it switches automatically from "worthless" to "worthy".
If I'm going to eat meat, I have to find the point at which I'm willing to eat it by some other means than administering I.Q. tests (especially as, when I'm in the supermarket deciding whether or not to purchase a steak, it's a bit late to administer any tests to the cow). Therefore, I have to use some sort of proxy measurement with correlation to intelligence instead. For the moment, i.e. until some other species is proven to have human-level or near-human intelligence, I'm going to continue to use 'species' as my proxy measurement.
See Arneson's [What, if anything, renders all humans morally equal?](http://www.philosophyfaculty.ucsd.edu/faculty/rarneson/singer.pdf)
So what do you think of 'sapient' as a taboo for 'human'? Necessary conditions on sapience will, I suppose, be things like language use and sensation. As for those mentally handicapped enough to fall below sapience, I'm willing to bite the bullet on that, so long as we're willing to discuss indirect reasons for according something moral respect. Something along the lines of Kant's claim that cruelty to animals is wrong not because of the rights of the animal (who has none) but because wantonly harming a living thing damages the moral faculties of the agent.
How confident are you that beings capable of immense suffering, but who haven't learned any language, all have absolutely no moral significance? That we could (as long as it didn't damage our empathy) brutally torture an arbitrarily large number of languageless beings for their entire lifetimes and never even cause as much evil as would one momentary dust speck to a language-user (who meets the other sapience conditions as well)?
I don't see any particular reason for this to be the case, and again the risks of assuming it and being wrong seem much greater than the risks of assuming its negation and being wrong.
I'm not committed to this, or anything close. What I'm committed to is the ground of moral respect being sapience, and whatever story we tell about the moral respect accorded to non-sapient (but, say, sentient) beings is going to relate back to the basic moral respect we have for sapience. This is entirely compatible with regarding sentient non-language-users as worthy of protection, etc. In other words, I didn't intend my suggestion about a taboo replacement to settle the moral-vegetarian question. It would be illicit to expect a rephrasing of the problem to do that.
So to answer your question:
I dunno, I didn't claim that they had no moral significance. I am pretty sure that if the universe consisted only of sentient beings, with no sapient ones, I would be at a loss as to how we should discuss moral significance.
If that was the case there would be no one to do the discussing.
Well, we could discuss that world from this one.
Yes, and we could, for example, assign that world no moral significance relative to our world.
"Sapience" is not a crisp category. Humans are more sapient than chimpanzees, crows, and dogs. Chimpanzees, crows, and dogs are more sapient than house cats and fish. Some humans are more or less sapient than other humans.
Suppose one day we encounter a non-human intelligent species that is to us as we are to chimpanzees. Would such a species be justified in considering us non-sapient and unworthy of moral respect?
I don't think sapience and/or sentience is necessarily a bad place to start. However I am very skeptical of attempts to draw hard lines that place all humans in one set, and everything else on Earth in another.
Well, I was suggesting a way of making it pretty crisp: it requires language use. None of those other animals can really do that. But to the extent that they might be trained to do so, I'm happy to call those animals sapient. What's clear is that, for example, dogs, cows, or chickens are not at all sapient by this standard.
No, but I think the situation you describe is impossible. That intelligent species (assuming they understood us well enough to make this judgement) would recognize that we're language-users. Chimps aren't.
Sorry, still not crisp. If you're using sapience as a synonym for language, language is not a crisp category either. Crows and elephants have demonstrated abilities to communicate with other members of their own species. Chimpanzees can be taught enough language to communicate bidirectionally with humans. Exactly what this means for animal cognition and intelligence is a matter of much dispute among scientists, as is whether animals can really be said to use language or not; but the fact that it is disputed should make it apparent that the answer is not obvious or self-evident. It's a matter of degree.
Ultimately this just seems like a veiled way to specially privilege humans, though not all of them. Is a stroke victim with receptive aphasia nonsapient? You might equally well pick the use of tools to make other tools, or some other characteristic to draw the line where you've predetermined it will be drawn; but it would be more honest to simply state that you privilege Homo sapiens sapiens, and leave it at that.
Are you seriously suggesting that the difference between someone you can understand and someone you can't matters just as much as the difference between me and a rock? Do you think your own moral worth would vanish if you were unable to communicate with me?
The goal of defining 'human' (and/or 'sapient') here is to steel-man (or at least better understand) the claim that only human suffering matters, so we can evaluate it. If "language use and sensation" end up only being necessary or sufficient for concepts of 'human' that aren't plausible candidates for the original 'non-humans aren't moral patients' claim, then they aren't relevant. The goal here isn't to come up with the one true definition of 'human', just to find one that helps with the immediate task of cashing out anthropocentric ethical systems.
Well, you'd be at a loss because you either wouldn't exist or wouldn't be able to linguistically express anything. But we can still adopt an outsider's perspective and claim that universes with sentience but no sapience are better when they have a higher ratio of joy to suffering, or of preference satisfaction to preference frustration.
Right, exactly. Doing so, and defending an anthropocentric ethical system, does not entail that it's perfectly okay to subject sentient non-language-users to infinite torture. It does probably entail that our reasons for protecting sentient non-language-users (if we discover it ethically necessary to do so as anthropocentrists) will come down to anthropocentric reasons. This argument didn't begin as an attempt to steel-man the claim that only human suffering matters; it began as an attempt to steel-man the claim that the reason human suffering matters to us (when we have no other reason to care) is that it is specifically human suffering.
Another way to put this is that I'm defending, or trying to steel-man, the claim that the fact that a human's suffering is human gives us a reason all on its own to think that that suffering is ethically significant. While nothing about an animal's suffering being animal suffering gives us a reason all on its own to think that that suffering is ethically significant. We could still have other reasons to think it so, so the 'infinite torture' objection doesn't necessarily land.
We can discuss that world from this one.
You seem to be using 'anthropocentric' to mean 'humans are the ultimate arbiters or sources of morality'. I'm using 'anthropocentric' instead to mean 'only human experiences matter'. Then by definition it doesn't matter whether non-humans are tortured, except insofar as this also diminishes humans' welfare. This is the definition that seems relevant to Qiaochu's statement, "I am still not convinced that I should care about animal suffering." The question isn't why we should care; it's whether we should care at all.
I don't think which reasons happen to psychologically motivate us matters here. People can have bad reasons to do good things. More interesting is the question of whether our good reasons would all be human-related, but that too is independent of Qiaochu's question.
No, the latter was an afterthought. The discussion begins here.
The relevant sense of changing values is change of someone else's purposeful behavior. The philosophical classification of your views doesn't seem like useful evidence about that possibility.
I don't understand what that means for my situation, though. How am I supposed to argue him out of his current values?
I mean, it's certainly possible to change someone's values through anti-realist argumentation. My values were changed in that way several times. But I don't know how to do it.
This is a separate question. I was objecting to the relevance of invoking anti-realism in connection with this question, not to the bottom line where that argument pointed.
If moral realism were true, there would be a very obvious path to arguing someone out of their values -- argue for the correct values. In my experience, when people want an argument to change their values, they want an argument for what the correct value is, assuming moral realism.
Moral anti-realism certainly complicates things.
You may want to take a look at this brief list of relevant writings I compiled in response to a comment by SaidAchmiz.
There are decent arguments (e.g. this) for eating less meat even if you don't care about non-human animals as a terminal value.
I don't think there's a subthread about posthumans here yet, which surprises me. Most of the other points I'd think to make have been made by others.
Several times you specify that you care about humanity, because you are able to have relationships with humans. A few questions:
1) SaidAchmiz, whose views seem similar to yours, specified they hadn't owned pets. Have you owned pets?
While this may vary from person to person, it seems clear to me that people are able to form relationships with dogs, cats, rats, and several other types of mammals (this is consistent with the notion that more-similar animals are able to form relationships with each other, on a sliding scale).
I've also recently made a friend with two pet turtles. One of the turtles seems pretty bland and unresponsive, but the other seems incredibly interested in interaction. I expect that some amount of the perceived relationship between my friend and their turtle is human projection, but I've still updated quite a bit on the relative potential-sentience of turtles. (Though my friend's veterinarian did say the turtle is an outlier in terms of how much personality a turtle expresses.)
2) You've noted that you don't care about babyeaters. Do you care about potential posthumans who share all the values you currently have, but also have new values you don't care about one way or another, and are vastly more intelligent/empathetic/able to form complex relationships in ways you can't understand? Do you expect those posthumans to care about you?
I'm not sure how good an argument it is that "we should care about things dumber than us because we'd want smarter things to care about us", in the context of aliens who might not share our values at all. But it seems at least a little relevant, when specifically concerning the possibility of trans-or-posthumans.
3) To the extent that you are not able to form relationships with other humans (because they are stupider than you, because they are less empathetic, or just because they're jerks, or don't share enough interests with you), do you consider them to have less moral worth? If not, why not?
Intellectually, I'm interested in the question: what moral framework would Extrapolated-Qiaochu-Yuan endorse (since, again, I'm an anti-realist)?
I had fish once, but no complicated pets.
People are also able to form relationships of this kind with, say, ELIZA or virtual pets in video games or waifus. This is an argument in favor of morally valuing animals, but I think it's a weak one without more detail about the nature of these relationships and how closely they approximate full human relationships.
Depends. If they can understand me well enough to have a relationship with me analogous to the relationship an adult human might have with a small child, then sure.
I hid a lot of complexity in "in principle." This objection also applies to humans who are in comas, for example, but a person being in a coma or not sharing my interests is a contingent fact, and I don't think contingent facts should affect what beings have moral worth. I can imagine possible worlds reasonably close to the actual one in which a person isn't in a coma or does share my interests, but I can't imagine possible worlds reasonably close to the actual one in which a fish is complicated enough for me to have a meaningful relationship with.
YMMV, but the argument that did it for me was Mylan Engel, Jr's argument, as summarized and nicely presented here.
On the assumption that the figures given by the OP are approximately right, with my adjustments for personal values, it would be cost-effective for me to pay you $18 (via BTC) to go from habitual omnivory to 98% ovo-lacto-vegetarianism for a year, or $24 (via BTC) to go for habitual omnivory to 98% veganism for a year, both prorated by month, of course with some modicum of evidence that the change was real. Let me know if you want to take up the offer.
Looking over that argument, in the second link, I notice that those same premises would appear to support the conclusion that the most morally correct action possible would be to find some way to sterilise every vertebrate (possibly through some sort of genetically engineered virus). If there is no next generation - of anything, from horses to cows to tigers to humans to chickens - then there will be no pain and suffering experienced by that next generation. The same premises would also appear to support the conclusion that, having sterilised every vertebrate on the planet, the next thing to do is to find some painless way of killing every vertebrate on the planet, lest they suffer a moment of unnecessary pain or suffering.
I find both of these potential conclusions repugnant; I recognise this as a mental safety net, warning me that I will likely regret actions taken in support of these conclusions in the long term.
This is an argument for vegetarianism, not for caring about animal suffering: many parts of this argument have nothing to do with animal suffering but are arguments that humans would be better off if we ate less meat, which I'm also willing to entertain (since I do care about human suffering), but I was really asking about animal suffering.
$18 a year is way too low.
I'm less willing to entertain said arguments seeing as how they come from people who are likely to have their bottom lines already written.
I'm not offering a higher price since it seems cost ineffective compared to other opportunities, but I'm curious what your price would be for a year of 98% veganism. (The 98% means that 2 non-vegan meals per month are tolerated.)
In the neighborhood of $1,000.
I started reading the argument (in your second link), racked up a full hand of premises I disagreed with or found to be incoherent or terribly ill-defined before getting to so much as #10, and stopped reading.
Then I decided that no, I really should examine any argument that convinced an intelligent opponent, and read through the whole thing (though I only skimmed the objections, as they are laughably weak compared to the real ones).
Turns out my first reaction was right: this is a silly argument. Engel lists a number of premises, most of which I disagree with, launches into a tangent about environmental impact, and then considers objections that read like the halfhearted flailings of someone who's already accepted his ironclad reasoning. As for this:
It makes me want to post the "WAT" duck in response. Like, is he serious? Or is this actually a case of carefully executed trolling? I begin to suspect the latter...
Edit: Oh, and as Qiaochu_Yuan says, the argument assumes that we care about animal suffering, and so does not satisfy the request in the grandparent.
Based on your description here of your reaction, I get the impression that you mistook the structure of the argument. Specifically, you note, as if it were sufficient, that you disagree with several of the premises. Engel was not attempting to build on the conjunction (p1*p2*...*p16) of the premises; he was building on their disjunction (p1+p2+...+p16). Your credence in p1 through p16 would have to be uniformly very low to keep their disjunction also low. Personally, I give high credence to p1, p9, p10, and varying lower degrees of assent to the other premises, so the disjunction is also quite high for me, and therefore the conclusion has a great deal of strength; but even if I later rejected p1, p9, and p10, the disjunction of the others would still be high. It's that robustness of the argument, drawing more on many weak points than one strong one, that convinced me.
I don't understand your duck/troll response to the quote from Engel. Everything he has said in that paragraph is straightforward. It is important that beliefs be true, not merely consistent. That does mean you oughtn't simply reject whichever premises get in the way of the conclusions you value. p1-p16 are indeed entangled with many other beliefs, and propagating belief and value updates of rejecting more of them is likely, in most people, to be a more severe change than becoming vegetarian. Really, if you find yourself suspecting that a professional philosopher is trolling people in one of his most famous arguments, that's a prime example of a moment to notice the fact that you're confused. It's possible you were reading him as saying something he wasn't saying.
Regarding the edit: the argument does not assume that you care about animal suffering. I brought it up precisely because it didn't make that assumption. If you want something specifically about animal suffering, presumably a Kantian argument is the way to go: You examine why you care about yourself and you find it is because you have certain properties; so if something else has the same properties, to be consistent you should care about it also. (Obviously this depends on what properties you pick.)
That's possible, but I don't think that's the case. But let me address the argument in a bit more detail and perhaps we'll see if I am indeed misunderstanding something.
First of all, this notion that the disjunction of the premises leads to accepting the conclusion is silly. No one of the premises leads to accepting the conclusion. You have to conjoin at least some of them to get anywhere. It's not like they're independent, leading by entirely separate lines of reasoning to the same outcome; some clearly depend on others to be relevant to the argument.
And I'm not sure what sort of logic you're using wherein you believe p1 with low probability, p2 with low probability, p3 ... etc., and their disjunction ends up being true. (Really, that wasn't sarcasm. What kind of logic are you applying here...?) Also, some of them are actually nonsensical or incoherent, not just "probably wrong" or anything so prosaic.
The quoted paragraph:
You're right, I guess I have no idea what he's saying here, because this seems to me blatantly absurd on its face. If you're interested in truth, of course you're going to reject those beliefs most likely to be false. That's exactly what you're going to do. The opposite of that is what you would do if you were, in fact, interested in mere consistency rather than truth.
??? You will want to reject those and only those beliefs that are false. If you think your belief system is reasonable, then you don't think any of your beliefs are false, or else you'd reject them. If you find that some of your beliefs are false, you will want to reject them, because if you're interested in truth then you want to hold zero false beliefs.
I think that accepting many of (p1) – (p16) causes incoherence, actually. In any case, Engel seems to be describing a truly bizarre approach to epistemology where you care less about holding true beliefs than about not modifying your existing belief system too much, which seems like a perfect example of caring more about consistency than truth, despite him describing his view in the exact opposite manner, and... I just... I don't know what to say.
And when I read your commentary on the above, I get the same "... what the heck? Is he... is he serious?" reaction.
What does this mean? Should I take this as a warning against motivated cognition / confirmation bias? But what on earth does that have to do with my objections? We reject premises that are false. We accept premises that are true. We accept conclusions that we think are true, which are presumably those that are supported by premises we think are true.
... and? Again, we should hold beliefs we think are true and reject those we think are false. How on earth is picking which beliefs to accept and which to reject on the basis of what will require less updating... anything but absurd? Isn't that one of the Great Epistemological Sins that Less Wrong warns us about?
As for the duck comment... professional philosophers troll people all the time. Having never encountered Engel's writing before now, I of course did not know that this was his most famous argument, nor any basis for being sure of serious intent in that paragraph.
Engel apparently claims that his reader already holds these beliefs, among others:
(And without that, the argument falls down.)
(Hi, sorry for the delayed response. I've been gone.)
Just the standard stuff you'd get in high school or undergrad college. Suppose we have independent statements S1 through Sn, and you assign each a subjective probability of P(Si). Then you have the probability of the disjunction P(S1+S2+S3+...+Sn) = 1-P(~S1)*P(~S2)*P(~S3)*...*P(~Sn). So if in a specific case you have n=10 and P(Si)=0.10 for all i, then even though you're moderately disposed to reject every statement, you're weakly disposed to accept the disjunction, since P(disjunction)=0.65. This is closely related to the preface paradox.
You're right, of course, that Engel's premises are not all independent. The general effect on probability of disjunctions remains always in the same direction, though, since P(A+B)≥P(A) for all A and B.
OK, yes, you've expressed yourself well, and it's clear that you're interpreting him as having claimed the opposite of what he meant. Let me try to restate his paragraph in more LW-ish phrasing:
"As a rationalist, you are highly interested in truth, which requires consistency but also requires a useful correspondence between your beliefs and reality. Consequently, when you consider that you believe it is not worthwhile for you to value animal interests and you discover that this belief is inconsistent with other of your beliefs, you will not reject just any of those other beliefs you think most likely to be false. (You will subject the initial, motivated belief to equal, unprivileged scrutiny along with the others, and tentatively accept the mutually consistent set of beliefs with the highest probability given your current evidence.)"
If you're interested in reconsidering Engel's argument given his intended interpretation of it, I'd like to hear your updated reasons for/against it.
Welcome back.
Ok. I am, actually, quite familiar with how to calculate probabilities of disjunctions; I did not express my objection/question well, sorry. What I was having a hard time taking at face value was the notion of reasoning about moral propositions using this sort of probabilistic logic. That is to say: what, exactly, does it mean to say that you believe "We ought to take steps to make the world a better place" with P = 0.3? Like, maybe we should and maybe we shouldn't? Probabilities are often said to be understandable as bets; what would you be betting on, in this case? How would you settle such a bet?
In short, for a lot of these propositions, it seems nonsensical to talk about levels of credence, and so what makes sense for reasoning about them is just propositional logic. In which case, you have to assert that if ANY of these things are true, then the entire disjunction is true (and from that, we conclude... something. What, exactly? It's not clear).
And yet, I can't help but notice that Engel takes an approach that's not exactly either of the above. He says:
I don't know how to interpret that. It seems strange. Logical arguments do not generally work this way, wherein you just have an unordered heap of undifferentiated, independent propositions, which you add up in any old order, and build up some conclusion from them like assembling a big lump of clay from smaller lumps of clay. I don't rightly know what it would mean for an argument to work like that.
(In other words, my response to the Engel quote above is: "Uh, really? Why...?")
As for your restatement of Engel's argument... First of all, I've reread that quote from Engel at the end of the PDF, and it just does not seem to me like he is saying what you claim he's saying. It seems to me that he is suggesting (in the last sentence of the quote) we reason backwards from which beliefs would force less belief revision to which beliefs we should accept as true.
But, ok. Taking your formulation for granted, it still seems to be... rather off. To wit:
Well, here's the thing. It is certainly true that holding nothing but true beliefs will necessarily imply that your beliefs are consistent with each other. (Although it is possible for there to be apparent inconsistencies, which would be resolved by the acquisition of additional true beliefs.) However, it's possible to find yourself in a situation where you gain a new belief, find it to be inconsistent with one or more old beliefs, and yet find that, inconsistency aside, both the new and the old beliefs each are sufficiently well-supported by the available evidence to treat them as being true.
At this point, you're aware that something is wrong with your epistemic state, but you have no real way to determine what that is. The rational thing to do here is of course to go looking for more information, more evidence, and see which of your beliefs are confirmed and which are disconfirmed. Until then, rearranging your entire belief system is premature at best.
Why do you characterize the quoted belief as "motivated"? We are assuming, I thought, that I've arrived at said belief by the same process as I arrive at any other belief. If that one's motivated, well, it's presumably no more motivated than any of my other beliefs.
And, in any case, why are we singling out this particular belief for consistency-checking? Engel's claim that "accepting [the conclusion of becoming a vegetarian] would require minimal belief revision on your part" seems the height of silliness. Frankly, I'm not sure what could make someone say that but a case of writing one's bottom line first.
Again I say: the correct thing to do is to hold (that is, to treat as true) those beliefs which you think are more likely true than false, and not any beliefs which you think are more likely false than true. Breaking that rule of thumb for consistency's sake is exactly the epistemic sin which we are supposedly trying to avoid.
But you know what — all of this is a lot of elaborate round-the-bush-dancing. I think it would be far more productive (as these things go) to just look at that list of propositions, see which we accept, and then see if vegetarianism follows reasonably from that. That is to say, rather than analyzing whether the structure of Engel's argument works in theory, let's put it to the test on his actual claims, yes?
I'd be betting on whether or not the proposition would follow from the relevant moral theory if I were in possession of all the relevant facts. The bet would be settled by collecting additional facts and updating. I incline toward consequentialist moral theories in which practicality requires that I can never possess all the relevant facts. So it is reasonable for me to evaluate situational moral rules and claims in probabilistic terms based on how confident I am that they will actually serve my overarching moral goals.
As far as I'm aware, that's exactly how logical arguments work, formally. See the second paragraph here.
Meat tastes good and is a great source of calories and nutrients. That's powerful motivation for bodies like us. But you can strike that word if you prefer.
We aren't. We're requiring only and exactly that it not be singled out for immunity to consistency-checking.
That's it! That's exactly the structure of Engel's argument, and what he was trying to get people to do. :)
That is well and good, except that "making the world a better place" seems to be an overarching moral goal. At some point, we hit terminal values or axioms of some sort. "Whether a proposition would follow from a moral theory" is conceivably something you could bet on, but what do you do when the proposition in question is part of the relevant moral theory?
Certainly not. Engel does not offer any deductive system for getting from the premises to the conclusion. In the derivation of an argument (as alluded to by the linked SEP article), premises and intermediate conclusions have to be ordered (at least partially ordered). Engel seems to be treating his premises as undifferentiated lumps, which you can take in any order, without applying any kind of deduction to them; you just take each ounce of premise and pour it into the big bucket-'o-premise, and see how much premise you end up with; if it's a lot of premise, the conclusion magically appears. The claim that it doesn't even matter which premises you hold to be true, only the quantity of them, seems to explicitly reject logical deduction.
Alright then. To the object level!
Let's see...
Depends on how "pain" and "suffering" are defined. If you define "suffering" to include only mental states of sapient beings, of sufficient (i.e. at least roughly human-level) intelligence to be self-aware, and "pain" likewise, then sure. If you include pain experienced by sub-human animals, and include their mental states in "suffering", then first of all, I disagree with your use of the word "suffering" to refer to such phenomena, and second of all, I do not hold (p1) under such a formulation.
See (p1).
If by "cruelty" you mean ... etc. etc., basically the same response as (p1). Humans? Agreed. Animals? Nope.
Depends on the steps. If by this you mean "any steps", then no. If you mean "this is a worthy goal, and we should find appropriate steps to achieve it and then take said steps", then sure. We'll count this one as a "yes". (Of course we might differ on what constitutes a "better" world, but let's assume away such disputes for now.)
Agreed.
First of all, this is awfully specific and reads like a way to sneak in connotations. I tend to reject such formulations on general principles. In any case, I don't think that "morally good person" is a terribly useful concept except as shorthand. We'll count this one as a "no".
Pursuant to the caveats outlined in my responses to all of the above propositions... sure. Said caveats partially neuter the statement for Engel's purposes, but for generosity's sake let's call this a "yes".
See response to (p5); this is not very meaningful. So, no.
Yep.
I try not to think of myself in terms of "what sort of person" I am. As for whether reducing the amount of pain and suffering is a good thing and whether I should do it — see (p4) and (p4'). But let's call this a "yes".
This seems relatively uncontroversial.
Nope. (And see (p1) re: "suffering".)
Nope.
Whether we "ought to" do this depends on circumstances, but this is certainly not inherently true in a moral sense.
Nope.
I'll agree with this to a reasonable extent.
Sure.
So, tallying up my responses, and ignoring all waffling and qualifications in favor of treating each response as purely binary for the sake of convenience... it seems I agree with 7 of the 17 propositions listed. Engel then says:
So according to this, it seems that I should have a... moderate commitment to the immorality of eating meat? But here's the problem:
How does the proposition "eating meat is immoral" actually follow from the propositions I assented to? Engel claims that it does, but you can't just claim that a conclusion follows from a set of premises, you have to demonstrate it. Where is the demonstration? Where is the application of deductive rules that takes us from those premises to the conclusion? There's nothing, just a bare set of premises and then a claimed conclusion, with nothing in between, no means of getting from one to the other.
My usual reply to a claim that a philosophical statement is "proven formally" is to ask for a computer program calculating the conclusion from the premises, in the claimant's language of choice, be it C or Coq.
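To make that request concrete, here is a minimal sketch (in Python, with all premise names and the one inference rule entirely hypothetical) of the difference between Engel's bucket-'o-premise tally and an actual derivation. The tally just counts how many premises you hold; the derivation only reaches the conclusion if there is an explicit rule connecting specific premises to it:

```python
# Engel-style tally: count assented premises, with no inference rule
# connecting any of them to the conclusion.
premises_held = {"p1": False, "p4": True, "p5": False, "p6": True}

def tally(held):
    """Count how many premises are held, ignoring which ones they are."""
    return sum(held.values())

# A real derivation needs explicit inference steps. Here, one hypothetical
# rule says: from p4 and p6 together, conclude "eating_meat_is_immoral".
facts = {"p4", "p6"}
rules = [({"p4", "p6"}, "eating_meat_is_immoral")]

def forward_chain(facts, rules):
    """Repeatedly apply any rule whose antecedents all hold (modus ponens)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(tally(premises_held))                                      # 2
print("eating_meat_is_immoral" in forward_chain(facts, rules))   # True
print("eating_meat_is_immoral" in forward_chain({"p4"}, rules))  # False
```

The point of the sketch: the conclusion appears in the third line only because a rule was stated and its antecedents were satisfied; drop one antecedent and no quantity of other premises conjures it. The tally, by contrast, returns the same number no matter which premises are in the bucket.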