I'm currently unconvinced either way on this matter. However, enough arguments have been raised that I think it's worth every reader's time to think about this in some depth.

http://nothingismere.com/2014/11/12/inhuman-altruism-inferential-gap-or-motivational-gap/


In the comments of his post, RobbBB claimed that by going vegetarian, you'll cause 1500 fewer animals to be killed than otherwise. Regardless of the exact number, it strikes me that this is a highly tendentious way of putting the issue. It would surely be more accurate to say that by going vegetarian, you will cause 1500 fewer animals to be born than otherwise.

It is wrong to call a utilitarian argument for vegetarianism "air-tight" when it doesn't even consider this point.

I don't see why you get downvoted.

I am strongly convinced by arguments for vegetarianism.

I mean, I still eat meat but that's just because of my moral decrepitude.

It is likely easier than you think to cut out meat and other animal products from your diet. When I went vegan, it basically involved changing from one set of tasty dishes to another, and I don't think I lost out much from a taste perspective (that being said, I did this at the same time as moving from catered university accommodation, so possibly YMMV). Here is a website which purports to give you all the knowledge you need to make the transition. This is something that you can start doing today, and I urge you to do so.

9deskglass9y
Exactly. I suspect a disproportionate share of people on LW agree that their eating habits are immoral, but eat the way they do anyway and are willing to indirectly be a part of "torturing puppies behind closed doors." That is, they are more likely to be honest to themselves about what they are doing, but aren't that much more likely to care enough to stop (which is different from being "morally indifferent").

All the work is done in the premises - which is a bad sign rhetorically, but at least a good sign deductively. If I thought cows were close enough to us that there was a 20% chance that hurting a cow was just as bad as hurting a human, I would definitely not want to eat cows.

Unfortunately for cows, I think there is an approximately 0% chance that hurting cows is (according to my values) just as bad as hurting humans. It's still bad - but its badness is some much smaller number that is a function of my upbringing, cows' cognitive differences from me, and t…

3shminux9y
I don't even know what 20% means in this context. That 5 cows = 1 person? Probably not even a rabid vegan would claim that.
2Manfred9y
Pretty sure the unpacking goes like "I think it is 20% likely that a moral theory is 'true' (I'm interpreting 'true' as "what I would agree on after perfect information and time to grow and reflect") in which hurting cows is as morally bad as hurting humans."
4shminux9y
Right, sure. But does it not follow that, if you average over all possible worlds, 5 cows have the same moral worth as 1 human?
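shminux's averaging move can be made concrete. A minimal sketch in Python, using only the hypothetical 20% figure from the thread; the zero-weight fallback is my simplifying assumption, not anyone's stated position:

```python
# Hypothetical numbers from the thread; the zero-weight fallback is a
# simplifying assumption for illustration only.
P_COWS_MATTER = 0.20   # chance that hurting a cow is as bad as hurting a human
WEIGHT_IF_TRUE = 1.0   # cow's moral weight, in human-equivalents, if so
WEIGHT_IF_FALSE = 0.0  # simplification: no weight otherwise

# Averaging over "possible worlds" gives each cow an expected weight:
expected_cow_weight = (P_COWS_MATTER * WEIGHT_IF_TRUE
                       + (1 - P_COWS_MATTER) * WEIGHT_IF_FALSE)

print(expected_cow_weight)      # 0.2 human-equivalents per cow
print(1 / expected_cow_weight)  # 5.0 cows per human, in expectation
```

Note that this expectation hides the shape of the underlying distribution, which is precisely what the disagreement further down the thread turns on.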
0MrMind9y
I personally know at least one rabid vegan for whom 1 cow > 1 person.
2jefftk9y
Why ">" and not "="? Is this true for other animals too or are cows special?
1Raemon9y
Tentative guess: Humans are considered to have negative value because (among other things) they kill cows (carbon footprint, etc) Also they might just not be rational.
0Lumifer9y
Kill them all.
1Raemon9y
I've seen it argued.
0Lumifer9y
Notably by Agent Smith from the Matrix. People who argue this can start with themselves.
2Raemon9y
I think there's a pretty solid case for that being a non-optimal solution, even if you've bought all their other premises. (There's not enough of them for a single or even mass suicides to inspire other people to do so, and then they'd just lose the longterm memetic war)
0Lumifer9y
I am quite confident of this result, anyway. Actually, I don't see any chances for a memetic war at all, never mind long-term X-)
0MrMind9y
Well... the example went like this: "If there was a fire and I was given the option of saving just the cow or just the person, I would save the cow". Presumably it would be the same with a pig or a dog. This is a transposed version of the trolley situation: 'I would not actively kill any human, but given the choice, I consider a cow to be more valuable'. The motivating reason was something along the lines of "humans are inherently evil, while animals are incapable of evil".
1dthunt9y
Well, how comparable are they, in your view? Like, if you'd kill a cow for 10,000 dollars (which could save a number of human lives), but not fifty million cows for 10,000 dollars, you evidently see some cost associated with cow-termination. If you, when choosing methods, could pick between methods that induced lots of pain, versus methods that instantly terminated the cow-brain, and have a strong preference toward the less-painful methods (assuming they're just as effective), then you clearly value cow-suffering to some degree. The reason I went basically vegan is I realized I didn't have enough knowledge to run that calculation, but I was fairly confident that I was ethically okay with eating plants, sludges, and manufactured powders, and most probably the incidental suffering they create, while I learned about those topics. I am basically with you on the notion that hurting a cow is better than hurting a person, and I think horse is the most delicious meat. I just don't eat it anymore. (I'd also personally kill some cows, even in relatively painful ways, in order to save a few people I don't know.)
4Lumifer9y
This triggered a question to bubble up in my brain. How much time of pure wireheading bliss do you need to give to a cow brain in order to feel not guilty about eating steak?
2Azathoth1239y
Given my attitude towards wire-heading generally, that would probably make me feel more guilty.
1dthunt9y
I REALLY like this question, because I don't know how to approach it, and that's where learning happens.

So it's definitely less bad to grow cows with good life experiences than with bad life experiences, even if their ultimate destiny is being killed for food. It's kind of like asking if you'd prefer a punch in the face and a sandwich, or just a sandwich. Really easy decisions. I think it'd be pretty suspicious if my moral calculus worked out in such a way that there was no version of maximally hedonistic existence for a cow of which I could say that the cow had a damned awesome life, rather than feeling like monsters for allowing it to have existed at all.

That having been said, if you give me a choice between cows that have been re-engineered such that their meat is delicious even after they die of natural causes, and humans don't artificially shorten their lives, and they stand around having cowgasms all day - and a world where cows grow without brains - and a world where you grew steaks on bushes - I think I'll pick the bush-world, or the brainless-cow world, over the cowgasm one, but I'd almost certainly eat cow meat in all of them. My preference there doesn't have to do with cow-suffering; I suspect it has something to do with my incomplete evolution from one moral philosophy to another.

I'm kind of curious how others approach that question.

I think RobbBB does not understand a typical omnivore's (me!) point of view. He also makes irrational conclusions about the ways to reduce the amount of suffering of (potentially somewhat sentient) animals.

Yes, cattle suffer; so do chickens, to a lesser degree. They likely do not suffer in the same way people do. Certainly eggs are not likely to suffer at all. Actually, even different people suffer differently; the blanket moral prohibition against cannibalism is just an obvious Schelling point.

So it would be preferable to not create, raise, slaughter and…

Certainly eggs are not likely to suffer at all.

It's typically the chickens laying the eggs that people are concerned about. And maybe to a lesser extent the male chickens of the chicken breed used for egg production. (Maybe you're already clear on that, but I have spoken to people who were confused by veganism's prohibition on eating animal products in addition to animals.)

They likely do not suffer in the same way people do.

It doesn't seem safe to assume that their suffering is subjectively less bad than our suffering. Maybe it's worse - maybe the experience of pain and fear is worse when you can only feel it and can't think about it. Either way, I don't see why you'd err on the side of 'It's an uncertain thing so let's keep doing what we're doing and diminish the potential harms when we can' rather than 'It's not that unlikely that we're torturing these things, we should stop in all ways that don't cost us much.'

But yes, creating vat-grown meat and/or pain-free animals should be a priority.

-1dthunt9y
So, there's a heuristic that I think is a decent one, which is that less-conscious things have less potential suffering. I feel that if you had a suffer-o-meter and strapped it to the heads of paramecia, ants, centipedes, birds, mice, and people, they'd probably rank in approximately that order. I have some uncertainty in there, and I could be swayed to a different belief with evidence or an angle I had failed to consider, but I have a hard time imagining what those might be. I think I buy into the notion that most-conscious doesn't strictly mean most-suffering, though - if there were a slightly less conscious, but much more anxious branch of humanoids out there, I think they'd almost certainly be capable of more suffering than humans.

LW folk generally are proponents of Vat-Meat.

On one hand, I agree with you that it's probably not that effective to specifically court the LW demographic. That said, EA animal-rights people are usually in favor of vat-grown meat; there are companies working on it. To my knowledge they are not seeking donations (although Modern Meadow is hiring, if you happen to have relevant skills).

"expose existing cattle/chicken abuse in farms and slaughterhouses" is a mainstay vegan tactic. Robbie's article was prompted by Brienne's article, which was specifically arguing against videos that did that (especially if they use additional emotional manipulation tactics).

5geeky9y
Just as a data point: the emotional manipulation tactics (i.e., graphic videos) were effective against me. (Mostly because I was unfamiliar with the process before; I didn't know what happened.) They tend to be effective on people especially sensitive to graphic images, I think, but I realize that in general it's not a tremendously effective approach across the population. If it were, everyone (or at least everyone who has watched those videos) would probably be vegetarian at this point. This is not the case.
5Lumifer9y
As another data point, emotional manipulation tactics are HIGHLY counterproductive against me. I dislike being emotionally manipulated, and when I see attempts to do so my attitude towards the cause worsens considerably.
8IlyaShpitser9y
Can you name three cases when you changed your mind on something important as a result of someone convincing you, by any means?
-1Jiro9y
Are videos intended to produce a visceral reaction against gay sex or abortion also effective against you?
0geeky9y
Those are very different contexts (but the answer is no, they are not effective against me). I don't make decisions based on purely visceral reactions, nor do I advise it. I think there may have been some miscommunication... I was saying that those tactics don't generally work, that I do not recommend them, even if I happened to be an exception.
1shminux9y
"generally proponents" doesn't sound nearly like "putting lots of effort into". As I said, an effective animal altruist would dedicate some serious time to figuring out better ways to reduce animal suffering. Being boxed into propaganda-only mode certainly doesn't seem like an effective approach. If you are serious about the issue, go into an Eliezer mode and try to do the impossible. Especially since it's a lot less impossible than what he aspires to achieve.
5Sysice9y
You seem to be saying that people can't talk, think about, or discuss topics unless they're currently devoting their life towards that topic with maximum effectiveness. That seems... incredibly silly. Your statements seem especially odd considering that there are people currently doing all of the things you mentioned (which is why you knew to mention them).
-5shminux9y
7Kaj_Sotala9y
Note that one of Animal Charity Evaluators' two top charities is Mercy for Animals, which has a track record of exposing abuse.
3shminux9y
Yeah, I agree, abuse exposure is actually happening, which is good. At least it reduces the unnecessary torture, if not the amount of slaughter for food.
3solipsist9y
I don't think this gives due respect to the premise. Imagine yourself in a world where attitudes towards meat eating were similar to ours, but the principal form of livestock were human. You'd like to reduce the number of people being raised as meat. Would arguing your ethical position on a site called LessWrong be worth your time, even if most people there weren't very receptive?
2shminux9y
No, what would be worth my time is to figure out how to make less sentient animals taste like humans. Maybe popularize pork, or something.
-2Richard_Kennaway9y
There is textured vegetable protein. Ok, it's not molecule-equivalent to meat, but it's supposed to imitate the physical sensation of eating meat. It was invented fifty years ago. For anyone who wants to eat meat without eating meat, there's an answer. So is there any reason to chase after vat-meat? How close the imitation is, I don't know. I'm not sure I've ever eaten TVP. But it has to be easier and cheaper to improve on the current product than to develop a way of growing bulk tissue in industrial quantities.
1drethelin9y
It is a LOT worse in both taste and texture.

My reason for vegetarianism is, at its core, a very simple one. I'm horrified of violence, almost by default. And I tend to be extremely empathetic. I'm emotionally motivated to treat animals with kindness before I am intellectually motivated. The discrepancy on LW might depend on personality differences. Or sometimes you can get very bogged down in the intellectual minutiae trying to sort everything out, and end up reaching a plateau of inaction (i.e., the default).

First, I am not a big fan of having the top-level posts consist of nothing but a link.

Second, the article takes "the intellectual case against meat-eating is pretty air-tight" as its premise. That premise is not even wrong, as it confuses values and logic (aka rationality).

Full disclosure: I am a carnivore.

I'm assuming that the LessWrongers interested in 'should I be a vegan?' are at least somewhat inclined toward effective altruism, utilitarianism, compassion, or what-have-you. I'm not claiming a purely selfish agent should be a vegan. I'm also not saying that the case is purely intellectual (in the sense of having nothing to do with our preferences or emotions); I'm just saying that the intellectual component is correctly reasoned. You can evaluate it as a hypothetical imperative without asking whether the antecedent holds.

-2Lumifer9y
I am sorry, where is this coming from? At this level of argument there isn't much intellectual component to speak of. If your value system already says "hurting creatures X is bad", the jump to "don't eat creatures X" doesn't require great intellectual acumen. It's just a direct, first-order consequence.
3Rob Bensinger9y
I didn't say it requires great intellectual acumen. In the blog post we're talking about, I called the argument "air-tight", "very simple", and "almost too clear-cut". I wouldn't have felt the need to explicitly state it at all, were it not for the fact that Eliezer and several other LessWrong people have been having arguments about whether veganism is rational (for a person worried about suffering), and about how confident we can be that non-humans are capable of suffering. Some people were getting the false impression from this that this state of uncertainty about animal cognition was sufficient to justify meat-eating. I'm spelling out the argument only to make it clear that the central points of divergence are normative and/or motivational, not factual.
2RowanE9y
That bit reads to me as just a heading of one section of the article - a paragraph later it lays out the argument which is described as being "pretty air-tight". Which argument does assume one has a particular kind of ethical system, but that's not really the same thing as making the confusion you describe, especially when it's an ethical system shared and trumpeted by many in the community.
2Lumifer9y
Under this logic I can easily say "the intellectual case for killing infidels is pretty air-tight" or "the intellectual case for torturing suspects is pretty air-tight" because hey, we abstracted the values away!
0RowanE9y
Well, yeah, if you have an essay about infidel-killing, having the subheading for the part where you lay out the case for doing so describe said case as "pretty air-tight" isn't exactly a heinous offence. And you're kind of skipping over considerations of what values Less Wrong tends to have. There's a lot of effective altruism material, members of the community are disproportionately consequentialist, are you expecting little asides throughout the article saying "of course, this doesn't apply to the 10% of you who are egoists"?
0Lumifer9y
The question isn't about the offence, the question is whether you would agree with this thesis in the context of an essay about Islamic jihad. Neither of these leads to vegetarianism. Consequentialism has nothing to do with it, and EA means being rational (=effective) about helping others, but it certainly doesn't tell you how wide the circle of those you should help must be.
0RowanE9y
I accept that neither of the things I listed logically leads to accepting the value claim made in the argument (other than that the effective altruism movement generally assumes one's circle is at least as wide as "all humans", considering the emphasis on charities working a continent away), but I still feel quite confident that LessWrongers are likely, and more likely than the general population, to accept said value claim. Unless you want to argue about expected values, the assumption made seems to be "the width of the reader's circle extends to all (meaningfully) sentient beings", which is probably a lot more likely to hold in a community like ours that reads a lot of sci-fi.
1Lumifer9y
Oh, sure, the surveys will tell you so directly. But "more likely than the general population" is pretty far from "doesn't apply to the 10% of you who are egoists".

Aside from painting "LessWrong types" in really broad, unflattering strokes, I thought the author made several good points. Note though that I am a ~15 year vegetarian (and sometime vegan) myself and I definitely identify with his argument, so there's the opportunity for subjective validation to creep in. I also find many preference-utilitarian viewpoints persuasive, though I wouldn't yet identify as one.

I think the 20% thing and the 1-in-20 thing were just hypothetical, so we shouldn't get too hung up on them; I think his case is just as strong w…

5Jiro9y
Having a small uncertainty about animal suffering and then saying that because of the large number of animals we eat, even a small uncertainty is enough to make eating animals bad, is a variation on Pascal's Mugging.
4Rob Bensinger9y
Yeah, this is why I used the number '1-in-20'. It's somewhat arbitrary, but it serves the function of ruling out Pascal-level uncertainty.
-2DanielFilan9y
I can understand why you shouldn't incentivise someone to possibly torture lots of people by being the sort of person who gives in to Pascal's mugging (in the original formulation). That being said, here you seem to be using Pascal's mugging to refer to doing anything with high expected utility but low probability of success. Why is that irrational?
3Jiro9y
Actually, I'm using it to refer to something which has high expected utility, low probability of success, and a third criterion: you are uncertain about what the probability really is. A sweepstakes with 100 tickets has a 1% chance of winning. A sweepstakes which has 2 tickets but where you think there's a 98% chance that the person running the sweepstakes is a fraudster also has a 1% chance of winning, but that seems fundamentally different from the first case.
1DanielFilan9y
I think this is a misunderstanding of the idea of probability. The real world is either one way or another, either we will actually win the sweepstakes or we won't. Probability comes into the picture in our heads, telling us how likely we think a certain outcome is, and how much we weight it when making decisions. As such, I don't think it makes sense to talk about having uncertainty about what a probability really is, except for the case of a lack of introspection. Also, going back to Robby's post: This seems like an important difference to what you're talking about. In this case, the probabilities are bounded below by a not-ridiculously-small number, that (Robby claims) is high enough that we should not eat meat. If you grant that your probability does in fact obey such a bound, and that that bound suffices for the case for veg*nism, then I think the result follows, whether or not you call it a Pascal's mugging.
1Jiro9y
If you don't like the phrase "uncertainty about the probability", think of it as a probability that is made up of particular kinds of multiple components. The second sweepstakes example has two components, uncertainty about which entry will be picked and uncertainty about whether the manager is honest. The first one only has uncertainty about which entry will be picked. You could split up the first example mathematically (uncertainty about whether your ticket falls in the last two entries and uncertainty about which of the last two entries your ticket is) but the two parts you get are conceptually much closer than in the second example. Like the possibility that the sweepstakes manager is dishonest, "we don't know enough about how cattle cognize" is all or nothing; if you do multiple trials, the distribution is a lot more lumpy. If all cows had exactly 20% of the capacity of humans, then five cows would have 100% in total. If there's a 20% chance that cows have as much as humans and an 80% chance that they have nothing at all, that's still a 20% chance, but five cows would have a lumpy distribution--instead of five cows having a guaranteed 100%, there would be a 20% chance of having 500% and an 80% chance of nothing. In some sense, each case has a probability bounded by 20% for a single cow. But in the first case, there's no chance of 0%, and in the second case, not only is there a chance of 0%, but the chance of 0% doesn't decrease as you add more cows. The implications of "the probability is bounded by 20%" that you probably want to draw do not follow in the latter case.
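The lumpy-versus-smooth distinction Jiro is drawing can be sketched with a toy simulation. The function names and the Monte Carlo setup here are mine; the 20% and five-cow figures come from the comment above:

```python
import random

random.seed(0)

def smooth_case(n_cows):
    # Every cow certainly has 20% of a human's capacity:
    # five cows always add up to exactly one human-equivalent.
    return 0.20 * n_cows

def lumpy_case(n_cows):
    # One shared, all-or-nothing fact: a 20% chance that ALL cows count
    # fully, and an 80% chance that none of them count at all.
    return float(n_cows) if random.random() < 0.20 else 0.0

TRIALS = 100_000
lumpy_mean = sum(lumpy_case(5) for _ in range(TRIALS)) / TRIALS

print(smooth_case(5))        # exactly 1.0, every time
print(round(lumpy_mean, 1))  # also about 1.0 on average,
                             # but each single draw is 0.0 or 5.0
```

The two cases have the same expectation, which is why they look identical in a naive expected-value argument; they differ only in how the outcomes are distributed across trials.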
0DanielFilan9y
I still don't see why this matters? To put things concretely, if I would be willing to buy the ticket in the first sweepstakes, why wouldn't I be willing to do so in the second? Sure, the uncertainty comes from different sources, but what does this matter for me and how much money I make? If I understand you correctly, you seem to be drawing a slightly different distinction here than I thought you were, claiming that the distinction is between 100% probability of a cow consciousness that is 20% as intense as human consciousness, as opposed to a 20% probability of a cow consciousness that is 100% as intense as human consciousness (for some definition of intensity). Am I understanding you correctly? In any case, I still think that the implications that I want to draw do in fact follow. In the latter case, I would think that eating meat has a 20% chance of producing a really horrible effect, and an 80% chance of being mildly convenient for you, so you definitely shouldn't eat meat. Is there something that I am missing? ETA: Again, to put things more concretely, consider theory X: that whenever 50 loaves of bread are bought, someone creates a human, keeps them in horrible conditions, and then kills them. Your probability for theory X being true is 20%. If you remove bread from your diet, you will have to learn a whole bunch of new recipes, and your diet might be slightly low in carbohydrates. Do you think that it is OK to continue eating bread? If not, your disagreement with the case for veg*nism is a different assessment of the facts, rather than a condemnation of the sort of probabilistic reasoning that is used.
1Jiro9y
I imagine the line of reasoning you want me to use to be something like this: "Well, the probability of cow sentience is bounded by 20%, so you shouldn't eat cows." "How do you get to that conclusion? After all, it's not certain. In fact, it's less certain than not. The most probable result, at 80%, is that no damage is done to cows whatsoever." "Well, you should calculate the expectation. 20% large effect + 80% no effect is still enough of a bad effect to care about." "But I'm never going to get that expectation. I'm either going to get the full effect or nothing at all." "If you eat meat many times, the damage done will add up. Although you could be lucky if you only do it once and cause no damage, if you do it many times you're almost certain to cause damage. And the average amount of damage done will be equal to that expectation multiplied by the number of trials." If there's a component of uncertainty over the probability, that last step doesn't really work, since many trials are still all or nothing when combined.
1DanielFilan9y
I wouldn't say the last step that you attribute to me. Firstly, if I were going to talk about the long run, I would say that in the long run, you should maximise expected utility because you'll probably get a lot of utility that way. That being said, I don't want to talk about the long run at all, because we don't make decisions for the long run. For instance, you could decide to have a bacon omelette for dinner today and then stay veg*n for the rest of your life, and the argument that you attribute to me wouldn't work in that case, although I would urge you to not eat the bacon omelette. (In addition, the line of reasoning that I would actually want you to use would involve attributing >50% probability of cow, chicken, pig, sheep, and fish sentience, but that's beside the point). Rather, I would make a case like this: when you make a choice under uncertainty, you have a whole bunch of possible outcomes that could happen after the choice is made. Some of these outcomes will be better when you choose one option, and some will be better when you choose another. So, we have to weigh up which outcomes we care about to decide which choice is better. I claim that you should weigh each outcome in proportion to your probability of it occurring, and the difference in utility that the choice makes. Therefore, even if you only assign the "cows are sentient" or "theory X is true" outcomes a probability of 20%, the bad outcomes are so bad that we shouldn't risk them. The fact that you assign probability >50% to no damage happening isn't a sufficient condition to establish "taking the risk is OK".
1Jiro9y
The point is that given the way these probabilities add up, not only wouldn't that work for a single bacon omelette, it wouldn't work for a lifetime of bacon omelettes. They're either all harmful or all non-harmful. Your reasoning doesn't depend on the exact number 20. It just says that the utility of the outcome should be multiplied by its probability. If the probability was 1% or 0.01% you could say exactly the same thing and it would be just as valid. In other words, your reasoning proves too much; it would imply accepting Pascal's Mugging. And I don't accept Pascal's Mugging.
0DanielFilan9y
I know. Are you implying that we shouldn't maximise expected utility when we're faced with lots of events with dependent probabilities? This seems like an unusual stance. My reasoning doesn't depend on the exact number 20, but the probability can't be arbitrarily low either. If the probability of cow sentience were only 1/1,000,000,000,000, then the expected utility of being veg*n would be lower than that of eating meat, since you would have to learn new recipes and worry about nutrition, and that would be costly enough to outweigh the very small chance of a very bad outcome. Again, this depends on what you mean by Pascal's Mugging. If you mean the original version, then my reasoning does not necessarily imply being mugged, since the mugger can name arbitrarily high numbers of people that they might torture, whereas you can figure out exactly how many non-human animals suffer and die as a result of your dietary choices (if you're an average American, approximately 200, only 30 if you don't eat seafood, and only 1.4 if you also don't eat chicken or eggs, according to this document), and nobody can boost this number in response to you claiming that you have a really small probability of them being sentient. However, if by Pascal's Mugging you mean "maximising expected utility when the probability of success is small but bounded from below and you have different sources of uncertainty", then yes, you should accept Pascal's Mugging, and I have never seen a convincing argument that you shouldn't. Also, please don't call that Pascal's Mugging, since it is importantly different from its namesake.
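DanielFilan's claim that the probability "can't be arbitrarily low either" amounts to saying there is a crossover threshold in an expected-utility comparison. A toy sketch; every utility number below is an invented placeholder (only the ~200 animals/year figure comes from the comment above), and only the relative sizes matter:

```python
# All utilities are hypothetical placeholders chosen for illustration.
SWITCHING_COST = 1.0           # assumed: recipes, nutrition planning, hassle
HARM_IF_SENTIENT = 200 * 50.0  # assumed: ~200 animals/year, large harm each

def eu_eat_meat(p_sentient):
    # Keep the convenience (utility 0 baseline), risk the harm with probability p.
    return -p_sentient * HARM_IF_SENTIENT

def eu_go_veg(p_sentient):
    # Pay the switching cost for certain, avoid the harm entirely.
    return -SWITCHING_COST

def better_choice(p_sentient):
    return "veg" if eu_go_veg(p_sentient) > eu_eat_meat(p_sentient) else "meat"

print(better_choice(0.2))    # veg: a 20% chance of a huge harm dominates
print(better_choice(1e-12))  # meat: at Pascal-level odds the switching cost wins
# The crossover probability, below which the argument stops working:
print(SWITCHING_COST / HARM_IF_SENTIENT)
```

This is why the argument is non-Pascalian in DanielFilan's sense: the threshold is a fixed, not-ridiculously-small number, and no one can push it around by naming ever-larger stakes.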
0Jiro9y
I would limit this to cases where the dependency involves trusting an agent's judgment (or honesty). I am not very good at figuring such a thing out and in cases like this whether I trust the agent has a large impact on the final decision. You can name an arbitrary figure for what the likelihood is that animals suffer, said arbitrary figure being tailored to be small yet large enough that multiplying it by the number of animals I eat leads to the conclusion that eating them is bad. It's true that in this case you are arbitrarily picking the small figure rather than the large figure as in a typical Pascal's Mugging, but it still amounts to picking the right figure to get the right answer.
1DanielFilan9y
But in this case, advocates for veganism are not being agents in the sense of implementing good/bad outcomes if you choose correctly/incorrectly, or personally gaining from you making one choice or another. Rather, we are just stating an argument and letting you judge how persuasive you think that argument is. The probability that non-human animals suffer can't be arbitrarily large (since it's trivially bounded by 1), and for the purposes of the pro-veganism argument it can't be arbitrarily small, as explained in my previous comment, making this argument decidedly non-Pascalian. Furthermore, I'm not picking your probability that non-human animals suffer, I'm just claiming that for any reasonable probability assignment, veganism comes out as the right thing to do. If I'm right about this, then I think that the conclusion follows, whether or not you want to call it Pascalian.
0Jiro9y
Human bias serves the role of personal gain in this case. (Also, the nature of vegetarianism makes it especially prone to such bias.) It can be arbitrarily chosen in such a way as to always force the conclusion that eating animals is wrong. Being arbitrary enough for this purpose does not require being able to choose values greater than 1.
0DanielFilan9y
You are talking as if I am setting your probability that non-human animals are sentient. I am not doing that: all that I am saying is that for any reasonable probability assignment, you get the conclusion that you shouldn't eat non-human animals or their secretions. If this is true, then eating non-human animals or their secretions is wrong.
0 Jiro 9y
You are arbitrarily selecting a number for the probability that animals suffer. This number can be chosen by you such that, when multiplied by the number of animals people eat, it always results in the conclusion that the expected damage is enough that people should not eat animals. This is similar to Pascal's Mugging, except that you are choosing the smaller number instead of the larger number. And your claim that any reasonable probability assignment leads to that conclusion is not true: for instance, a probability assignment of 1/100000000 to the claim that animals suffer like humans would not lead to it. However, 1/100000000 falls outside the range that most people think of when they imagine a small but finite probability, so it sounds unreasonable even though it is not.
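The arithmetic both commenters are gesturing at can be made explicit. A minimal sketch, with the caveat that the function name, the per-animal harm weight, and every number below are illustrative placeholders rather than claims about actual probabilities or consumption figures:

```python
def expected_harm(p_suffering, animals_eaten, harm_per_animal=1.0):
    """Expected moral cost of eating animals, given some probability
    that they suffer in a morally relevant way."""
    return p_suffering * animals_eaten * harm_per_animal

# With a modest-sounding probability, the expected harm over a
# lifetime of eating (say ~10,000 animals) is substantial:
print(expected_harm(0.01, 10_000))   # 100.0

# With Jiro's deliberately tiny assignment of 1/100000000, the
# same lifetime of eating comes out negligible (~1e-4):
print(expected_harm(1e-8, 10_000))
```

Framed this way, the disagreement in the thread is entirely about which values of `p_suffering` count as "reasonable", not about the multiplication itself.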

I wonder how RobbBB, and other vegans, feel about lions on the Serengeti. When they kill gazelles, is that morally wrong? Obviously, they aren't going to be dissuaded by your blog posts, but in a utilitarian framework, I would think that suffering caused by lions' carnivorous tastes is just as "bad" as that caused by humans. Should we put all carnivores in zoos and feed them meat substitutes? Or should lions be free to hunt, regardless of the suffering it may cause the gazelle, because that's their nature?

3 jefftk 9y
People who approach veganism from utilitarian ideas would group this question with a bunch of others under wild animal suffering. The general idea is that suffering is just as bad whether human-caused or natural, though it's often hard to figure out which actions most reduce suffering. (For example, if we killed all the predators there would be many more prey animals; if those animals tend to have lives that are on average worse than not living at all, then this would be a bad thing.)
0 Lumifer 9y
Wouldn't that logic lead you to killing all predators or all prey, depending on the answer to the question of whether the prey have lives not worth living? If "yes", kill the prey; if "no", kill the predators. Either way, you're committed to a lot of killing.

This article heavily implies that every LessWronger is a preference utilitarian, and values the wellbeing, happiness, and non-suffering of every sentient (i.e. non-p-zombie) being. Neither of those is fully true for me, and as this ad-hoc survey - https://www.facebook.com/yudkowsky/posts/10152860272949228 - seems to suggest, I may not be alone in that. Namely, I'm actually pretty much OK with animal suffering. I generally don't empathize all that much, but there are a lot of even completely selfish reasons to be nice to humans, whereas it's not really the case f... (read more)

3 Rob Bensinger 9y
I was mainly talking about LessWrongers who care about others (for not-purely-selfish reasons). This is a much milder demand than preference utilitarianism. I'm surprised to hear you don't care about others' well-being -- not even on a system 2 level, setting aside whether you feel swept up in a passionate urge to prevent suffering. Let me see if I can better understand your position by asking a few questions. Assuming no selfish benefits accrued to you, would you sacrifice a small amount of your own happiness to prevent the torture of an atom-by-atom replica of you?
-4 maxikov 9y
We may be using different definitions of "care". Mine is exactly how much I'm motivated to change something after I become aware that it exists. I don't find myself extremely motivated to eliminate the suffering of humans, and much less so for animals. Therefore, I conclude that my priorities are probably different. Also, at least to some extent I'm either hardwired or conditioned to empathize with and help humans in my immediate proximity (although definitely to a smaller extent than people who claim to have sleepless nights after watching footage of suffering), but it doesn't generalize well to the rest of humanity and other animals. As for saving the replica, I probably would, since it definitely belongs to the circle of entities I'm likely to empathize with. However, the exact details really depend on whether I classify my replica as myself or as my copy, which I don't have a good answer to. Fortunately, I'm not likely to encounter this dilemma in the foreseeable future, and by the time it's likely to occur, I'll probably have more information to answer the question better. Furthermore, especially in this situation, and in the much more realistic situation of being nice to people around me, there are almost always selfish benefits, especially in the long run. However, in situations where every person around me is basically a bully who perceives niceness as weakness and an invitation to bully more, I frankly don't feel all that much compassion.
1 Rob Bensinger 9y
Yes, I'm using 'care about X' to mean some combination of 'actually motivated to promote X's welfare' and 'actually motivated to self-modify, if possible, to promote X's welfare'. If I could, I'd take a pill that makes me care enough about non-humans to avoid eating them; so in that sense I care about non-humans, even if my revealed preferences don't match my meta-preferences.

Meta-preferences are important because I frequently have conflicting preferences, or preferences I need to cultivate over time if they're to move me, or preferences that serve me well in the short term but poorly in the long term. If I just do whatever I 'care about' in the moment at the object level, unreflectively, without exerting effort to shape my values deliberately, I end up miserable and filled with regret. In contrast, I meta-want my deepest wants to be fairly simple, consistent, and justifiable to other humans.

Even if I'm not feeling especially sympathy-laden on a particular day, normative elegance and consistency suggest I should care about the suffering of an exact replica of myself just as much as I care about the suffering inside my own skull. This idea generalizes to endorse prudence for agents that are less similar to me but causally result from me (my future selves), and to endorse concern for agents that will never be me but can have states that resemble mine, including my suffering.

I have more epistemic warrant for thinking humans instantiate such states than for thinking non-humans do, but I'm pretty sure that a more informed, in-control-of-his-values version of myself would not consider it similarly essential that moral patients have ten fingers, 23 chromosome pairs, etc. (Certainly I don't endorse decision procedures that would disregard my welfare if I had a different chromosome or finger count, whereas I do endorse procedures that disregard me should I become permanently incapable of experiencing anything.) If I wish I were a nicer and more empathic person, I shoul
-5 maxikov 9y

I'm going to comment on the general issue, not on the specific link.

I'm a carnivore, so what I'm going to write is my best approximation at purging my reasoning of cached thoughts and motivated cognition.

I'm not convinced that present-day vegetarianism is not just group signalling.
Of course you wouldn't want aware beings to suffer pointlessly. But from there to vegetarianism there's a long road:

  • you should at least try to argue that it's better never to be born than to be born, live a few pleasant years, and be killed;
  • that me not eating meat is the best
... (read more)