Abstract: If you value the welfare of nonhuman animals from a consequentialist perspective, there is a lot of potential for reducing suffering by funding the persuasion of people to go vegetarian, whether through online ads or pamphlets.  In this essay, I develop a calculator people can use to produce their own estimates, and I arrive at a personal cost-effectiveness estimate of $0.02 to $65.92 to avert a year of suffering in a factory farm.  I then discuss the methodological criticisms that merit skepticism of this estimate, and conclude by suggesting (1) a guarded approach of putting in just enough money to help the organizations learn and (2) developing more studies, with decent control groups, that explore advertising vegetarianism in a wide variety of media in a wide variety of ways.

-

Introduction

I start with the claim that it's good for people to eat less meat, whether by becoming vegetarian or, better yet, vegan, because this means fewer nonhuman animals are painfully factory farmed.  I've defended this claim previously in my essay "Why Eat Less Meat?".  I recognize that some people, even some who consider themselves effective altruists, do not value the well-being of nonhuman animals.  For them, I hope this essay is interesting, but I admit it will be a lot less relevant.

The second idea is that it shouldn't matter who is eating less meat.  As long as less meat is being eaten, fewer animals will be farmed, and this is a good thing.  Therefore, we should try to get other people to eat less meat too.

The third idea is that it also doesn't matter who is doing the convincing.  Therefore, instead of convincing our own friends and family, we can pay other people to convince people to eat less meat.  And this is exactly what organizations like Vegan Outreach and The Humane League are doing.  With a certain amount of money, one can hire someone to distribute pamphlets to other people or put advertisements on the internet, and some percentage of people who receive the pamphlets or see the ads will go on to eat less meat.  This idea and the previous one should be uncontroversial for consequentialists.

But the fourth idea is the complication.  I want my philanthropic dollars to go as far as possible, so as to help as much as possible.  It therefore becomes very important to figure out how much money it takes to get people to eat less meat, so I can compare this to other estimates and see what gets me the best "bang for my buck".


Other Estimates

I have seen other estimates floating around the internet that try to pin down the cost of distributing pamphlets, how many conversions each pamphlet produces, and how much less meat is eaten per conversion.  Brian Tomasik calculates $0.02 to $3.65 [PDF] per year of nonhuman animal suffering prevented, later $2.97 per year, and later still $0.55 to $3.65 per year.

Jess Whittlestone provides statistics that imply an estimate of less than a penny per year[1].

Effective Animal Activism, a non-profit evaluator of animal welfare charities, came up with an estimate [Excel Document] of $0.04 to $16.60 per year of suffering averted, which also takes into account a variety of additional variables, like product elasticity.

Jeff Kaufman uses a different line of reasoning: by estimating how many vegetarians there are and guessing how many of them converted via pamphlets, he estimates it would take $4.29 to $536 to make someone vegetarian for one year.  Extrapolating from that using a rate of 255 animals saved per year and a weighted average of 329.6 days lived per animal (see below for justification of both assumptions) gives $0.02 to $1.90 per year of suffering averted[2].

A third line of reasoning, also from Jeff Kaufman, was to count comments on the pro-vegetarian websites advertised in these campaigns; he found that 2-22% of them, depending on the website, described an intended behavior change (eating less meat, going vegetarian, or going vegan).  I don't think we can draw any conclusions from this, but it's interesting.

For my own calculations, I decided to build a calculator.  Unfortunately, I can't embed it here, so you'll have to open it in a new tab as a companion piece.

I'm going to start with the following formula: Years of Suffering Averted per Dollar = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Days lived / animal), where the final factor is converted from days to years by dividing by 365.
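To make this concrete, here is a minimal sketch of the simple calculator's formula in Python.  The inputs are the illustrative figures derived later in this post, not outputs of the actual calculator; treat them as placeholders for your own estimates.

```python
# Minimal sketch of the simple formula; every input is a placeholder estimate.
pamphlets_per_dollar = 5.0        # pamphlets distributed (or ad clicks) per dollar
conversions_per_pamphlet = 0.02   # fraction of recipients who go vegetarian
veg_years_per_conversion = 3.0    # average years a convert stays vegetarian
animals_per_veg_year = 255.16     # animals spared per vegetarian-year
days_lived_per_animal = 329.6     # weighted average days lived per animal

years_averted_per_dollar = (pamphlets_per_dollar
                            * conversions_per_pamphlet
                            * veg_years_per_conversion
                            * animals_per_veg_year
                            * days_lived_per_animal / 365)  # convert days to years
print(years_averted_per_dollar)   # ≈ 69.1 years of suffering averted per dollar
```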

Now, to get estimates for these variables.


Pamphlets Per Dollar

How much does it cost to place the advertisement, whether it be the paper pamphlet or a Facebook advertisement?  Nick Cooney, head of the Humane League, says the cost-per-click of Facebook ads is 20 cents.

But what about the cost per pamphlet?  This is more of a guess, but I'm going to go with Vegan Outreach's suggested donation of $0.13 per "Compassionate Choices" booklet.

However, it's important to note that this cost must also include opportunity cost -- leafleters forgo the ability to use that time to work a job.  This means I must include an opportunity cost of, say, $8/hr on top of that.  Assuming one pamphlet is handed out each minute of volunteer time, that makes the actual cost about $0.27 per pamphlet, or 3.7 people reached per dollar.  For Facebook advertisements, the opportunity cost is trivial.
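As a quick check on that arithmetic (the $8/hr wage and one-pamphlet-per-minute rate are my assumptions from above):

```python
# Effective cost per pamphlet, including the leafleter's opportunity cost.
booklet_cost = 0.13                # suggested donation per booklet, dollars
opportunity_cost_per_hour = 8.00   # assumed forgone hourly wage
pamphlets_per_hour = 60            # one pamphlet handed out per minute

cost_per_pamphlet = booklet_cost + opportunity_cost_per_hour / pamphlets_per_hour
print(cost_per_pamphlet)       # ≈ $0.26, rounded to $0.27 in the text
print(1 / cost_per_pamphlet)   # ≈ 3.7-3.8 people reached per dollar
```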


Conversions Per Pamphlet

This is the estimate with the biggest target on its head, so to speak.  How many people actually change their behavior because of a simple pamphlet or Facebook advertisement?  Right now, we have three lines of evidence:

Facebook Study

The Humane League ran a $5,000 Facebook advertisement campaign.  They bought ads (an example ad image appeared here) that sent people to websites (like this one or this one) with auto-playing videos showing the horrors of factory farming.

Afterward, another advertisement was run targeting people who "liked" the video page, offering a 1-in-10 chance of winning a free movie ticket for taking a survey.  Everyone who emailed in asking for a free vegetarian starter kit was also emailed a survey.  104 people took the survey; 32 reported being vegetarian[3], and 45 reported, for example, that their chicken consumption had decreased "slightly" or "significantly".

7% of visitors liked the page and 1.5% of visitors ordered a starter kit.  Assuming everyone else came away from the video with their consumption unchanged, this survey would (very tenuously) suggest that about 2.6% of people who see the video will become vegetarian[4].
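Footnote [4] spells this out; as a sketch of the arithmetic:

```python
# Reconstructing the (very tenuous) 2.6% conversion estimate from the survey.
surveyed_veg_rate = 32 / 104      # ≈ 30.8% of respondents reported vegetarianism
reached_fraction = 0.07 + 0.015   # page "likes" plus starter-kit requests
print(surveyed_veg_rate * reached_fraction)   # ≈ 0.026, i.e. ~2.6% of viewers
```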

(Here are the results of the survey in PDF.)

Pamphlet Study

A second study discussed in "The Powerful Impact of College Leafleting (Part 1)" and "The Powerful Impact of College Leafleting: Additional Findings and Details (Part 2)" looked specifically at pamphlets.

Here, Humane League staff visited two large East Coast state schools and distributed leaflets.  They returned two months later and surveyed people walking by, counting those who remembered receiving a leaflet earlier.  They found that about 2% of those who received a pamphlet went vegetarian.

Vegetarian Years Per Conversion

But once a pamphlet or Facebook advertisement captures someone, how long will they stay vegetarian?  One survey showed vegetarians refrain from eating meat for an average of 6 years or more.  Another study I found says 93% of vegetarians stay vegetarian for at least three years.

 

Animals Saved Per Vegetarian Year

And once you have a vegetarian, how many animals do they save per year?  CountingAnimals says 406 animals saved per year.

The Humane League suggests 28 chickens, 2 egg-industry hens, 1/8 of a beef cow, 1/2 of a pig, 1 turkey, and 1/30 of a dairy cow per year (total = 31.66 animals), and does not provide statistics on fish.  This agrees with CountingAnimals on the non-fish totals.

Days Lived Per Animal

One problem, however, is that saving a cow that could suffer for years is different from saving a chicken that suffers for only about a month.  Using data from Farm Sanctuary plus World Society for the Protection of Animals data on fish [PDF], I get this table:

Animal          Number   Days Alive
Chicken (Meat)  28       42
Chicken (Egg)   2        365
Cow (Beef)      0.125    365
Cow (Milk)      0.033    1460
Fish            225      365

This makes the weighted average 329.6 days[5].
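The computation behind that average, straight from the table above (see footnote [5]):

```python
# Weighted average days alive per animal spared: {animal: (number, days alive)}.
animals = {
    "chicken_meat": (28, 42),
    "chicken_egg":  (2, 365),
    "cow_beef":     (0.125, 365),
    "cow_milk":     (0.033, 1460),
    "fish":         (225, 365),
}
total_animals = sum(n for n, _ in animals.values())   # ≈ 255.16
weighted_days = sum(n * d for n, d in animals.values()) / total_animals
print(total_animals, weighted_days)   # ≈ 255.16 animals, ≈ 330 days (329.6 in the text)
```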

 

Accounting For Biases

As I said before, our formula was: Years of Suffering Averted per Dollar = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Days lived / animal).

Let's plug these values in: Years of Suffering Averted per Dollar = 5 * 0.02 * 3 * 255.16 * (329.6 / 365) = 69.12.

Or, assuming all this is right (and that's a big assumption), it would cost less than 2 cents to prevent a year of suffering on a factory farm by buying vegetarians.

I don't want to make it sound like I'm beholden to this cost estimate or that this estimate is the "end all, be all" of vegan outreach.  Indeed, I share much of the skepticism that has been expressed by others.  The simple calculation is... well... simple, and it needs some "beefing up", no pun intended.  Therefore, I also built a "complex calculator" that works from a much more complex formula[6] that is hopefully correct[7] and will provide a more accurate estimate.

 

The big, big issue for the surveys is bias.  The most frequently mentioned is social desirability bias: people saying they reduced their meat consumption just because they want to please the surveyor or look like a good person, which happens on surveys a lot more than we'd like.

To account for this, we have to figure out how much this bias inflates the answers and then scale them down by that amount.  Nick Cooney says he has been reading studies suggesting that only about 25% to 50% of people who say they are vegetarian actually are, though I don't yet have the citations.  Thus, if we find that an advertisement creates two meat reducers, we'd scale that down to one reducer if we're expecting a 50% desirability bias.
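In calculator terms, this is a simple multiplicative discount (the 50% figure below is an assumed bound, not a measured value):

```python
# Scale self-reported meat reducers down by an assumed desirability bias.
def adjust_for_desirability(reported_reducers, genuine_fraction):
    return reported_reducers * genuine_fraction

print(adjust_for_desirability(2, 0.5))   # two reported reducers -> one genuine
```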

 

The second bias that will be a problem for us is non-response bias: those who didn't change their diet are less likely to take the survey and therefore less likely to be counted.  This is especially true in the Facebook study, which only measured people who "liked" the page or requested a starter kit, both signs of pro-vegetarian affiliation.

We can balance this out by assuming everyone who didn't take the survey went on to have no behavior change whatsoever.  Nick Cooney's Facebook ad survey covers only the 7% of people who liked the page (and then responded to the survey), and those who liked the page are obviously more likely to have reduced their consumption.  I chose an optimistic value of 90%, treating the survey as completely representative of the 7% who liked the page, plus a bit more for those who reduced their consumption but did not like the page.  My pessimistic value was 95%, assuming everyone who did not like the page went unchanged and assuming a small response bias among those who liked the page but chose not to take the survey.

For the pamphlets, however, there should be no response bias, since the sample was drawn randomly from the entire population of college students, and no one was reported to have refused the survey.

 

Additional People Are Being Reached

In the Facebook survey, those who said they reduced their meat consumption were also asked whether they influenced any friends or family to eat less meat; on average, each reported producing 0.86 additional meat reducers.

This figure seems very high, but I do strongly expect the true figure to be positive -- people who reduce their meat consumption will sometimes talk about it, essentially becoming free advertisements.  I'd be very surprised if they ended up being a net negative.

 

Accounting for Product Elasticity

Another way to sharpen the estimate is to be more precise about what happens when someone stops eating meat.  The change comes not from the refusal to eat itself, but from the reduced demand for meat, which leads to reduced supply.  Following the laws of economics, however, this reduction won't necessarily be one-for-one; it depends on the elasticities of demand and supply.  With these numbers, we can find out how much meat production actually falls for every unit of meat not demanded.
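As a sketch of how such an adjustment can work -- this uses the standard linear supply-and-demand approximation, and the elasticity numbers are placeholders rather than values from the sources listed below:

```python
# Fraction of each unit of forgone demand that becomes reduced production,
# under a simple linear supply/demand model.
def production_drop_per_unit_demand_drop(supply_elasticity, demand_elasticity):
    return supply_elasticity / (supply_elasticity + abs(demand_elasticity))

# E.g., placeholder elasticities: supply 0.8, demand -0.65.
print(production_drop_per_unit_demand_drop(0.8, -0.65))   # ≈ 0.55
```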

My guesses in the calculator come from the following sources, some of which are PDFs: Beef #1, Beef #2, Dairy #1, Dairy #2, Pork #1, Pork #2, Egg #1, Egg #2, Poultry, Salmon, and all fish.

 

Putting It All Together

Implementing the formula in the calculator, we end up with an estimate of $0.03 to $36.52 to reduce one year of suffering on a factory farm based on the Facebook ad data, and an estimate of $0.02 to $65.92 based on the pamphlet data.

Of course, many people are skeptical of these figures.  Perhaps surprisingly, so am I.  I'm trying to strike a balance between advocating vegan outreach as a very promising path to making the world a better place and not losing sight of the methodological hurdles that have not yet been cleared, while staying open to the possibility that I'm wrong about this.

The big methodological elephant in the room is that my entire cost estimate depends on having a plausible guess for how likely someone is to change their behavior based on seeing an advertisement.

I feel slightly reassured because:

  1. There are two surveys for two different media, and they both provide estimates of impact that agree with each other.
  2. These estimates also match anecdotes from leafleters about approximately how many people come back and say they went vegetarian because of a pamphlet.
  3. Even if we were to take the simple calculator and drop the "2% chance of getting four years of vegetarianism" assumption down to, say, a pessimistic "0.1% chance of getting one year" conversion rate, the estimate is still not too bad -- $0.91 to avert a year of suffering.
  4. More studies are on the way.  Nick Cooney plans to run several more leaflet studies, and Xio Kikauka and Joey Savoie have publicly published some survey methodology [Google Docs].

That said, the possibility for desirability bias in the survey is a large concern as long as the surveys continue to be from overt animal welfare groups and continue to clearly state that they're looking for reductions in meat consumption.

Also, so long as surveys are only given to people that remember the leaflet or advertisement, there will be a strong possibility of response bias, as those who remember the ad are more likely to be the ones who changed their behavior.  We can attempt to compensate for these things, but we can only do so much.

More worrying still, there's a concern that the surveys are just measuring normal drift in vegetarianism, without any changes being attributable to the ads themselves.  For example, imagine that every year, 2% of people become vegetarian and 2% quit.  Surveying these people at random, without capturing those who quit, will find a 2% "conversion" rate even if the ads did nothing.

How can we address these?  I think all three problems can be solved with a decent control group, whether that's a group that receives a leaflet about something other than vegetarianism or one that receives no leaflet at all.  Luckily, Kikauka and Savoie's survey intends to do just that.
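The control-group fix is conceptually simple: background drift shows up in both groups and cancels out of the difference.  A toy sketch:

```python
# With a control group, the ad effect is the difference in conversion rates.
def estimated_ad_effect(veg_rate_leafleted, veg_rate_control):
    return veg_rate_leafleted - veg_rate_control

# If 2% of controls "convert" from background drift alone, a 2% rate among
# leaflet recipients implies no detectable effect from the leaflet itself:
print(estimated_ad_effect(0.02, 0.02))   # 0.0
```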

Jeff Kaufman has a good proposal for a survey design I'd like to see implemented in this area.

 

Market Saturation and Diminishing Marginal Returns?

Another concern is that there are diminishing marginal returns to these ads.  As the critique goes, there are only so many people that will be easily swayed by the advertisement, and once all of them are quickly reached by Facebook ads and pamphlets, things will dry up.

Unlike the others, I don't think this criticism works well.  After all, even if it were true, it still would be worthwhile to take the market as far as it will go, and we can keep monitoring for saturation and find the point where it's no longer cost-effective.

However, I don't think the market has been tapped out yet at all.  According to Nick Cooney [PDF], there are still many opportunities in foreign markets and outside the young, college-kid demographic.

 

The Conjunction Fallacy?

The conjunction fallacy is a classic reminder that, no matter what, the chance of event A happening can never be smaller than the chance of event A happening in conjunction with event B.  For example, the probability that Linda is a bank teller will always be larger than (or equal to) the probability that Linda is a bank teller and a feminist.

What does this mean for vegetarian outreach?  Well, for the simple calculator, we're estimating five factors.  In the complex calculator, we're estimating about 50 factors.  Even if each factor is 99% likely to be correct, the chance that all five are right is 95%, and the chance that all 50 are right is only 60%.  If each factor is only 90% likely to be correct, the complex calculator will be right with a probability of just 0.5%!
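The arithmetic is just repeated multiplication, assuming (unrealistically) that the errors in each factor are independent:

```python
# Probability that every estimated factor is "right" if each is independently
# correct with probability p.
def all_correct(p, n_factors):
    return p ** n_factors

print(all_correct(0.99, 5))    # ≈ 0.951
print(all_correct(0.99, 50))   # ≈ 0.605
print(all_correct(0.90, 50))   # ≈ 0.005, i.e. about 0.5%
```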

This is a cause for concern, but I don't think there's any way around this.  It's just an inherent problem with estimation.  Hopefully we'll be balanced by (1) using the different bounds and (2) hoping underestimates and overestimates will cancel each other out.

 

Conversion and The 100 Yard Line

Something we should take into account that helps the case for this outreach rather than hurts it is the idea that conversions aren't binary -- someone can be pushed by the ad to be more likely to reduce their meat intake as opposed to fully converted.  As Brian Tomasik puts it:

Yes, some of the people we convince were already on the border, but there might be lots of other people who get pushed further along and don’t get all the way to vegism by our influence. If we picture the path to vegism as a 100-yard line, then maybe we push everyone along by 20 yards. 1/5 of people cross the line, and this is what we see, but the other 4/5 get pushed closer too. (Obviously an overly simplistic model, but it illustrates the idea.)

This would be either very difficult or outright impossible to capture in a survey, but is something to take into account.

 

Three Places I Might Donate Before Donating to Vegan Outreach

When all is said and done, I like the case for funding this outreach.  However, I think there are three other possibilities along these lines that I find more promising:

Funding the research of vegan outreach: There need to be more and higher-quality studies of this before one can feel confident in the cost-effectiveness of this outreach.  However, initial results are very promising, and the value of information from more studies is therefore very high.  Studies can also find ways to advertise more effectively, increasing the impact of each dollar spent.  Right now it looks like all ongoing studies are fully funded, but if there were opportunities to fund more, I would jump on them.

Funding Effective Animal Activism: EAA is an organization pushing for more cost-effectiveness in the domain of nonhuman animal welfare and working to evaluate which opportunities are the best, GiveWell-style.  Giving them more money can potentially attract a lot more attention to this outreach, and get it more scrutiny, research, and money down the line.

Funding the Centre for Effective Altruism: Overall, it might just be better to get more people involved in the idea of giving effectively, and then get them interested in vegan outreach, among other things.

 

Conclusion

Vegan outreach is a promising, though not yet fully studied, method of outreach that deserves both excitement and skepticism.  Should one put money into it?  Overall, I'd take a guarded approach of putting in just enough money to help the organizations learn, develop better cost-effectiveness measurements and transparency, and become more effective.  It shouldn't be too long before this area is studied well enough for us to have good confidence in how things are doing.

More studies should be developed that explore advertising vegetarianism in a wide variety of media in a wide variety of ways, with decent control groups.

I look forward to seeing how this develops.  Don't forget to play around with my calculator.

-

 

Footnotes

[1]: Cost effectiveness in years of suffering prevented per dollar = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Years lived / animal).

Plugging in 80K's values... Cost effectiveness = (Pamphlets / dollar) * 0.01 to 0.03 * 25 * 100 * (Years lived / animal)

Filling in the gaps with my best guesses... Cost effectiveness = 5 * 0.01 to 0.03 * 25 * 100 * 0.90 = 112.5 to 337.5 years of suffering averted per dollar
I personally think 25 veg-years per conversion on average is possible but too high; my own guesses range from 4 to 7.
[2]: I feel like there's an error in this calculation or that Kaufman might disagree with my assumptions of number of animals or days per animal, because I've been told before that these estimates with this method are supposed to be about an order of magnitude higher than other estimates.  However, I emailed Kaufman and he seemed to not find any fault with the calculation, though he does think the methodology is bad and the calculation should not be taken at face value.
[3]: I calculated the number of vegetarians by eyeballing about how many people said they no longer eat fish, which I'd guess only a vegetarian would be willing to give up.
[4]: 32 vegetarians / 104 people = 30.7%.  That population is 8.5% (7% for likes + 1.5% for the starter kit) of the overall population, leading to 2.61% (30.7% * 8.5%).
[5]: Formula is [(Number Meat Chickens)(Days Alive) + (Number Egg Chickens)(Days Alive) + (Number Beef Cows)(Days Alive) + (Number Milk Cows)(Days Alive) + (Number Fish)(Days Alive)] / (Total Number of Animals).  Plugging things in: [(28)(42) + (2)(365) + (0.125)(365) + (0.033)(1460) + (225)(365)] / 255.16 = 329.6 days.

[6]:
Cost effectiveness, in days of suffering prevented per dollar, equals:

(People Reached / Dollar + (People Reached / Dollar * Additional People Reached / Direct Reach * Response Bias * Desirability Bias)) * Years Spent Reducing * [the sum, over the eight product categories -- beef, dairy, pig, broiler chicken, egg, turkey, farmed fish, and sea fish -- of the per-product term below] * Response Bias * Desirability Bias

Per-product term: ((Percent Increasing * Increase Value) + (Percent Staying Same * Staying Same Value) + (Percent Decreasing Slightly * Decrease Slightly Value) + (Percent Decreasing Significantly * Decrease Significantly Value) + (Percent Eliminating * Elimination Value) + (Percent Never Ate * Never Ate Value)) * Normal Consumption * Elasticity * (Average Lifespan + Days of Suffering from Slaughter)

For sea fish, which are not farmed, the last factor is Days of Suffering from Slaughter alone, with no lifespan term.
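For readers who find code easier to audit than a prose formula, here is a compact sketch of the same structure.  The function and variable names, and the category weights, are illustrative placeholders rather than the calculator's actual values:

```python
# Illustrative weights for each survey response category (placeholders).
CATEGORY_VALUES = {"increase": -0.1, "same": 0.0, "slight_decrease": 0.25,
                   "big_decrease": 0.5, "eliminate": 1.0, "never_ate": 0.0}

def days_averted_per_dollar(reach_per_dollar, extra_per_direct, response_bias,
                            desirability_bias, years_reducing, products):
    """products: one dict per animal product, with keys 'shares' (fraction of
    respondents in each category), 'consumption' (animals per person-year),
    'elasticity', 'lifespan_days', and 'slaughter_days' (lifespan_days is 0
    for wild-caught sea fish)."""
    reach = reach_per_dollar * (1 + extra_per_direct * response_bias * desirability_bias)
    per_product = 0.0
    for p in products:
        weight = sum(CATEGORY_VALUES[c] * p["shares"][c] for c in CATEGORY_VALUES)
        per_product += (weight * p["consumption"] * p["elasticity"]
                        * (p["lifespan_days"] + p["slaughter_days"]))
    return reach * years_reducing * per_product * response_bias * desirability_bias
```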
[7]: Feel free to check the formula for accuracy and also check to make sure the calculator implements the formula correctly.  I worry that the added accuracy from the complex calculator is outweighed by the risk that the formula is wrong.

-

Edited 18 June to correct two typos and update footnote #2.

Also cross-posted on my blog.


Comments

Nick Cooney who says that he's been reading studies that about 25% to 50% of people who say they are vegetarian actually are, though I don't yet have the citations. Thus, if we find out that an advertisement creates two meat reducers, we'd scale that down to one reducer if we're expecting a 50% desirability bias

This doesn't follow. The intervention is increasing the desirability bias, so the portion of purported vegetarians who are actually vegetarian is likely to change, in the direction of a lower proportion of true vegetarianism. It's plausible that 90%+ of the marginal purported vegetarians are bogus. Consider ethics and philosophy professors, who are significantly more likely to profess that eating meat is wrong:

There is no statistically detectable difference between the ethicists and either group of non-ethicists. (The difference between non-ethicist philosophers and the comparison professors was significant to marginal, depending on the test.)

Conclusion? Ethicists condemn meat-eating more than the other groups, but actually eat meat at about the same rate. Perhaps also, they're more likely to misrepresent their meat-eating practices (on the meals-per-week question and

[...]
Peter Wildeford:
This is actually a really good point that makes me less confident in the effectiveness of vegetarianism advocacy.
[anonymous]:
An additional point: Cattle have a bit less than 1/3rd the brain mass of humans, chickens about 1/40th, and fish are down more than an order of magnitude (more so by cortex). If you weight expected value by neurons, which is made plausible by thinking about things like split-brain patients and local computations in nervous systems, that will drastically change the picture. My quick back-of-the-envelope (which didn't take into account the small average size of the mostly feed fish involved, and thus their reduced neural tissue) is that making this adjustment would cut the cost-effectiveness metric by a factor of at least 400 times, and plausibly 1000+ times. This reflects the fact that fish make up most of the life-days in the calculation, and also have comparatively tiny and simple nervous systems. Personally, I would pay more to ensure a painless death for a cow than for a small feed fish with orders of magnitude less neural capacity.
Qiaochu_Yuan:
Ah, but now I can turn myself into a utility monster by artificially enlarging my brain! Game over.
Paul Crowley:
We're trying to work out how to make progress on moral questions today, not trying to lay down a rule for all eternity that future agents can't game.
Qiaochu_Yuan:
It was a joke.
Paul Crowley:
Oops, sorry!
CarlShulman:
Or by having kids. Or copying your uploaded self. Or re-engineering your nervous system in other ways...
CarlShulman:
The bit about desirability bias, or the fact that the optimistic estimates involve claiming that vegetarian ads are vastly more effective than other kinds of moralized behavior-change ads with more accurate measurements of effect?
Peter Wildeford:
Both points. The question "why should vegetarianism advocacy be so much more effective than get-out-the-vote advocacy?" is a good one. Since the study quality for get-out-the-vote advocacy is so much higher, we should expect vegetarianism advocacy to end up about the same. On the other hand, I do think vegetarianism advocacy is a lot more psychologically salient (pictures of suffering) than any case that can be made for voting. I've personally distributed some pro-voting pamphlets, and they're not very compelling at all.
Brian_Tomasik:
Good points, Carl! Jonah Sinick actually made the GOTV argument to me on a prior occasion, citing your essay on the topic. One additional consideration is that nearly everyone knows about voting, but many people don't know about the cruelty of factory farms. This goes along with the low-hanging-fruit point. I would not be surprised if, after tempering the figures by this outside-view prior, it takes a few hundred dollars to create a new veg year. Even if so, that's at most 1-2 orders of magnitude different from the naive conservative estimate.
Peter Wildeford:
This is something I've considered a lot, though chickens also dominate the calculations along with fish. I'm not currently sure if I value welfare in proportion to neuron count, though I might. I'd have to sort that out first. A question at this point I might ask is how good does the final estimate have to be? If AMF can add about 30 years of healthy human life for $2000 by averting malaria and a human is worth 40x that of a chicken, then we'd need to pay less than $1.67 to avert a year of suffering for a chicken (assuming averting a year of suffering is the same as adding a year of healthy life, which is a messy assumption).

I think some weighting for the sophistication of a brain is appropriate, but I think the weighting should be sub-linear w.r.t. the number of neurones; I expect that in simpler organisms, a larger share of the brain will be dedicated to processing sensory data and generating experiences. I would love someone to look into this to check if I'm right.

CarlShulman:
I agree on that effect; I left out various complications. A flip side to that would be the number of cortex neurons (and equivalents), which decrease rapidly in simpler nervous systems. We don't object nearly as much to our own pains that we are not conscious of and don't notice or know about, so weighting by consciousness of pain, rather than pain/nociception itself, is a possibility (I think that Brian Tomasik is into this).

A question at this point I might ask is how good does the final estimate have to be?

First, there are multiple applications of accurate estimates.

The unreasonably low estimates would suggest things like "I'm net reducing factory-farming suffering if I eat meat and donate a few bucks, so I should eat meat if it makes me happier or healthier sufficiently to earn and donate an extra indulgence of $5."

There are some people going around making the claim, based on the extreme low-ball cost estimates, that these veg ads would save human lives more cheaply than AMF by reducing food prices. With saner estimates, not so, I think.

Second, there's the question of flow-through effects, which presumably dominate in a total utilitarian calculation anyway, if that's what you're into. The animal experiences probably don't have much effect there, but people being vegetarian might have some, as could effects on human health, pollution, food prices, social movements, etc.

To address the total utilitarian question would require a different sort of evidence, at least in the realistic ranges.

Louie:
Correct. I make this claim. If vegetarianism is that cheap, it's reasonable to bin it with other wastefully low-value virtues like recycling paper, taking shorter showers, turning off lights, voting, "staying informed", volunteering at food banks, and commenting on less wrong.
KatieHartman:
This might be a minor point, but I don't think it's necessarily a given that one year of healthy, average-quality life offsets one year of factory farm-style confinement. If we were only discussing humans, I don't think anyone would consider a year under those conditions to be offset by a healthy year.

You could also reduce meat consumption by advertising good vegetarian meal recipes.

(Generally, the idea is that you can reduce eating meat even without explicitly promoting not eating meat.)

Peter Wildeford:
Are you suggesting that one simply advertise the existence of good vegetarian recipes without mentioning surrounding reasons for reducing meat? This is already a strong component in existing advocacy, though none of it mentions recipes alone. Leading pamphlets like "Compassionate Choices" and "Even if You Like Meat" have recipe sections at the end of the book. Peter Singer's book Animal Liberation has recipes. Vegan Outreach has a starter guide section with lots of recipes. As far as I know, the videos used on the internet don't directly mention recipes, but do point to ChooseVeg.com which has tons of recipes and essentially advertises vegetarianism via a recipe-based argument. Another recent campaign, The Seven Day Vegan Challenge also advertises based on a lot of recipes.

Are you suggesting that one simply advertise the existence of good vegetarian recipes without mentioning surrounding reasons for reducing meat?

I agree with Viliam_Bur that this may be effective, and here's why.

I bake as a hobby (desserts — cakes, pies, etc.). I am not a vegetarian; I find moral arguments for vegetarianism utterly unconvincing and am not interested in reducing the suffering of animals and so forth.

However, I often like to try new recipes, to expand my repertoire, hone my baking skills, try new things, etc. Sometimes I try out vegan dessert recipes, for the novelty and the challenge of making something that is delicious without containing eggs or dairy or white sugar or any of the usual things that go into making desserts taste good.[1]

More, and more readily available, high-quality vegan dessert recipes would mean that I substitute more vegan dessert dishes for non-vegan ones. This effect would be quite negated if the recipes came bundled with admonitions to become vegan, pro-vegan propaganda, comments about how many animals this recipe saves, etc.; I don't want to be preached to, which I think is a common attitude.

[1] My other (less salient) motivation for learning to make vegan baked goods is to be prepared if I ever have vegan/vegetarian friends who can't eat my usual stuff (hasn't ever been the case so far, but it could happen).

Viliam_Bur:
Thanks, this is what I tried to say. Reducing suffering is far; eating well is near.

Also, if a book or a website comes with vegetarian/vegan propaganda, I would assume those people are likely to lie or exaggerate. No propaganda -- no suspicion.

This may be just about the vegetarians around me, but often people who are into vegetarianism are also into other forms of food limitations, so I often find their food unappealing. They act like an anti-advertisement for vegetarian food. (Perhaps there is an unconscious status motive here: the fewer people join them, the more noble they are. Which is not how an effective altruist should think.)

On the other hand, when I go to an Indian or similar ethnic restaurant, I love the food. It tastes good, it has different components and good spices. I mean, what's wrong with using spice? If your goal is to reduce animal suffering, nothing. But if your goal is to have the weirdest diet possible (no meat, no cooking, no taste, everything compatible with the latest popular book or your horoscope), spice is usually on the list of forbidden components. In short, vegetarianism is often not about not eating animals.

So if you focus on the "good meal (without meat)" part and ignore the vegetarianism, you may win over people like me. Even if I don't promise to give up meat completely, I can reduce my consumption simply because tasty meals without meat outcompete tasty meals with meat on my table.
amcknight:
I think I've noticed this a bit since switching to a vegan(ish) diet 4 months ago. My guess is that once a person starts making diet restrictions, it becomes much easier to make diet restrictions, and once a person starts learning where their food comes from, it becomes easier to find reasons to make diet restrictions (even dumb reasons).
GordonAitchJay:
What were the moral arguments for vegetarianism that you found utterly unconvincing? Where did you hear or read these? Are you interested in reducing the suffering of humans? If so, why?
Said Achmiz:
The ones that say we should care about what happens to animals and what animals experience, including arguments from suffering. I've heard them in lots of places; the OP has himself posted an example — his own essay "Why Eat Less Meat?"

Yeah. I think if you unpacked this aspect of my values, you'd find something like "sapient / self-aware beings matter" or "conscious minds that are able to think and reason matter". That's more or less how I think about it, though converting that into something rigorous is nontrivial. "Matter" here is used in a broad sense; I care about sapient beings, think that their suffering is wrong, and also consider such beings the appropriate reference class for "veil of ignorance" type arguments, which I find relevant and at least partly convincing.

My caring about reducing human suffering has limits (in more than one dimension). It is not necessarily my highest value, and it interacts with my other values in various ways, although I mostly use consequentialism in my moral reasoning, so those interactions are reasonably straightforward for the most part.
freeze:
Do you think that animals can suffer? Or, what evolutionary difference do you think accounts for a difference in the capacity for consciousness between humans and other animals with largely similar central nervous systems/brains?
Swimmer963 (Miranda Dixon-Luinenburg):
White sugar has animal products in it?
Said Achmiz:
Not as such, no, but animal products are used in its manufacture: bone char is used in the sugar refining process (by some manufacturers, though not all), making it not ok for vegans.
Swimmer963 (Miranda Dixon-Luinenburg):
Wow. I learned something that I did not know before :)
A1987dM:
I had heard that plenty of times, but I had never bothered to check whether or not that was just an urban legend.
Douglas_Knight:
Have you experimented with baking with lard?
Said Achmiz:
I have not. Christopher Kimball, in The Dessert Bible, comments that unless you can get leaf lard (the highest grade of lard, which comes from the fat around the pig's kidneys), using lard in dessert recipes is undesirable (results in the dough having a bacon-y taste). I don't think I can get leaf lard here in NYC, and even if I could it would probably be very expensive.
Douglas_Knight:
NYC? Of course you can. Or mail-order. But I would start with regular lard in the right recipes. On a different note, I usually substitute brown sugar for white for the taste.
Said Achmiz:
Oh? Do you know any good places to get it in NYC? (Preferably Brooklyn, Manhattan also fine.) Yes, brown for white sugar is a good substitution sometimes. However it can partially mute the taste of other ingredients, like fresh fruit, so it's not always ideal. Also, brown sugar is definitely more expensive.
novalis:
I would be shocked if Ottomanelli's on Bleecker didn't have leaf lard.
Said Achmiz:
The internet tells me they don't carry it, but can special-order it. Mail-order, by the way, looks to come out to at least $10/lb., if you can get it; very few places seem to carry it.
novalis:
You might have to call them; they will special-order just about anything. The only thing I have failed to find there was rabbit ears (without buying the whole rabbit).
NoSignalNoNoise:
Many non-vegetarians are suspicious of organizations that try to convince them to be vegetarian. It might be more effective to promote vegetarian recipes separately from "don't eat meat" efforts. Incidentally, I would love to know of more (not too difficult) ways to cook tofu.

I like to take the firmest tofu I can find (this is usually vacuum-packed, not water-packed) and cut it into slices or little cubes, and then pan-fry it in olive oil with a splash of lemon juice added halfway through till it's golden-brown and chewy. Then I put it in pasta (cubes) or on sandwiches (slices) - the sandwich kind is especially nice with spinach sauteed with cheese and hummus.

Raemon:
I think that simply promoting good vegetarian meals would potentially reduce meat consumption among certain groups of people that would be less receptive to accompanying pro-vegetarian arguments. I think it should be part of a vegan-advocacy arsenal (i.e. you do a bunch of different sorts of flyers/ads/promotions, some of which are just recipe spreading without any further context).

However, if one of your goals is to increase human compassion for nonhumans, then recipe spreading is dramatically less useful in the long term. One of the biggest arguments (among LW folk anyway) for animal advocacy is that not only are factory farms (and the wilderness) pretty awful, but that concern for animals will hopefully translate into more humanely managed ecosystems, once we go off terraforming or creating virtual worlds.

(It may turn out to be effective to get people to try out vegan recipes [without accompanying pro-vegan context] and then later on promote actual vegan ideals to the same people, after they've already taken small steps that indirectly bias themselves towards identifying with veganism.)
freeze:
Perhaps, but consider the radical flank effect: https://en.wikipedia.org/wiki/Radical_flank_effect Encouraging the desired end goal, the total cessation of meat consumption, may -- by moving the middle -- be more effective than just encouraging reduction, even in the short to moderate run (and certainly in the long run).

I'm really curious why all of the major animal welfare/rights organizations seem to be putting more emphasis on vegan outreach than on in-vitro meat/genetic modification research. I have a hard time imagining a scenario where any arbitrary (but large) contribution toward vegan outreach leads to greater suffering reduction than the same amount put toward hastening a more efficient and cruelty-free system for producing meat.

There seems to be, based just on my non-rigorous observations, significant overlap between the Vegan/Vegetarian communities and the "Genetically Modified Foods and big Pharma will turn your babies into money-forging cancer" theorists. Obviously not all Vegans are "chemicals=bad because nature" conspiracy theorists, and not all such conspiracy theorists are vegan, but the overlap seems significant. That vocal overlap group strikes me as likely to oppose lab-grown meat because it's unnatural, and then the conspiracy theories will begin. And the animal rights groups probably don't want to divide up their base any further.

(This comment felt harsh to me as I was writing it, even after I cut out other bits. The feeling I'm getting is very similar to political indignation. If this looks mind-killed to anyone else, please correct me.)

KatieHartman:
That seems plausible, though PETA already has a million-dollar prize for anyone who can mass-market an in-vitro meat product. Given their annual revenues (~$30 million) and the cost associated with that kind of project, it seems like they're going about it the wrong way. From a utilitarian perspective, wireheading livestock might be an even better option - though that probably would be perceived by most animal activists (and people in general) as vaguely dystopian.
[anonymous]:
Does the technology to reliably and cheaply wirehead farmed animals now exist at all? Without claiming expertise, I find that unlikely.
johnlawrenceaspden:
Opium in the feed? Cut their nerves? Some sort of computerised gamma-ray brain surgery? I'm certain that if there were a tiny financial incentive for agribusiness to do it then a way would swiftly be found. It's not so hard to turn humans into living vegetables. Some sorts of head trauma seem to do it. How hard can it be to make that reliable (or at least reasonably reliable) for cows? Least convenient world and all that: If we could prevent animal suffering by skilfully whacking calves over the head with a claw hammer, would that be a goal to which the rational vegan would aspire? It would be just as good as killing them, plus pleasure for the meat eaters. Also it would probably be possible to find people who'd enjoy doing it, so that's another plus.
Nornagest:
Probably not that hard. Doing it without ruining the meat or at least reducing yields sounds harder to me, though -- muscles atrophy if they don't get used, and they don't get used if nothing's giving them commands. I'd also expect force-feeding a braindead animal to be more expensive and probably more conducive to health problems than letting it feed itself.
gwern:
To continue the 'living vegetables' approach, one could point out that keeping a human in a coma alive and (somewhat) well will cost you somewhere from $500-$3k+. Per day. Even assuming that animals are much cheaper by taking the bottom of the range and then cutting it by an entire order of magnitude, the 1.5-3 year aging of standard cattle being butchered means $50 * 1.5 * 365 ≈ $27.4k in extra expenses. That's some expensive meat.
Jabberslythe:
So just kill all the farm animals painlessly now? Sure, that sounds good. But if farm animals are still being raised, then it seems there still is a problem. Or if you are just talking about ways of making slaughter painless while continuing to factory farm, that sounds better than nothing.
ialdabaoth:
I find this interesting, because it seems to imply that people have an intuitive sense that eudaimonia applies to animals. I'll have to think about the consequences of this.
freeze:
Do you know of any sources for this? In my also non-rigorous experience this is a totally unfounded misperception of veg*nism that people seem to have, founded on nothing but a few quack websites/anti-science blogs. Consider for instance /r/vegan over at reddit, which is in fact overwhelmingly pro-GMO and ethics rather than health focused. Of course, it is certainly true that the demographics of reddit or that subreddit are much different from that of veg*ns as a whole (or people as a whole). Lesswrong is an even more extreme case of such a limited demographic.
Peter Wildeford:
A lot of animal welfare/rights organizations provide funding for in-vitro meat / fake meat, though they don't do much to advertise it. The idea is that these meat substitutes won't take off unless they create some demand for them. Vegan Outreach is one of the biggest funders of Beyond Meat and New Harvest.

I like Beyond Meat, but I think the praise for it has been overblown. For example, the Effective Animal Activism link you've provided says:

[Beyond Meat] mimics chicken to such a degree that renowned New York Times food journalist and author Mark Bittman claimed that it "fooled me badly in a blind tasting".

But reading Bittman's piece, the reader will quickly realize that the quote above is taken out of context:

It doesn’t taste much like chicken, but since most white meat chicken doesn’t taste like much anyway, that’s hardly a problem; both are about texture, chew and the ingredients you put on them or combine with them. When you take Brown’s product, cut it up and combine it with, say, chopped tomato and lettuce and mayonnaise with some seasoning in it, and wrap it in a burrito, you won’t know the difference between that and chicken.

I like soy meat alternatives just fine, but vegans and vegetarians are the market. People who enjoy the taste of meat and don't see the ethical problems with it don't want a relatively expensive alternative with a flavor they have to mask. There's demand for in-vitro meat because there's demand for meat. If you can make a product that t... [...]

wedrifid:
It seems overwhelmingly unlikely that the optimal method of meat production is to have it walking around eating plant matter and going 'Moo!'.

Especially for sheep. The training costs would be prohibitive.

A1987dM:
I dunno -- look at all the brouhaha about genetically modified food.
TheOtherDave:
That there's a population brouhahaing over GM food doesn't preclude the existence of a population eager to buy cheap tasty-enough meat. Indeed, I expect the populations overlap significantly.
Osiris:
I predict a big drop in price soon after vat meat becomes sufficiently popular, due to money saved by not dealing with useless organs and suffering, as well as a great big leap in profit for any farm that sells "natural cow meat." One is inherently efficient because it simplifies farming; the other is pretty, however ugly it is for the animals. I do worry about the numbers New Harvest gives, but in the long run, there is hope for this regardless of the initial price -- the potential for success in feeding humanity cheaply and well is just too great, in my opinion. Seems like I will be pushing "meat in a bucket" whenever possible, and I am not even that into making animals happy.
Jabberslythe:
Well, if vegan/vegetarian outreach is particularly effective, then it may do more to develop lab meat than donating to lab meat causes directly (because there would be more people interested in this and similar technologies). Additionally, making people vegan/vegetarian may have a stronger effect in promoting anti-speciesism in general, which seems like it would be of larger overall benefit than just ending factory farming. This seems plausible because thoughts follow actions.
hylleddin:
I've wondered about this as well. We can try to estimate New Harvest's effectiveness using the same methodology attempted for SENS research in the comment by David Barry here.

I can't find New Harvest's 990 revenue reports, but its donations are routed through the Network for Good, which has a total annual revenue of 150 million dollars, providing an upper bound. An annual revenue of less than 1000 dollars is very unlikely, so we can use the geometric mean of $400,000 per year as an estimated annual revenue. There are about 500,000 minutes in a year, so right now $1 brings development just over a minute closer.*

There are currently 24 billion chickens, 1 billion cattle, and 1 billion pigs. Assuming the current factory farm suffering rates as an estimate for suffering rates when artificial/substitute meat becomes available, and assuming (as the OP does) that animals suffer roughly equally, then bringing faux meat one minute closer prevents about (25 billion animals)/(500,000 minutes per year) = 50,000 animal-years of suffering. If we assume that New Harvest has a 10% chance of success, $1 there prevents an expected 5,000 animal-years of suffering, or, expressed as in the OP, preventing 1 expected animal-year of suffering costs about 0.02 cents. So, these (very rough) estimates suggest this could be at least as effective as the outreach numbers above.

*Assuming some set amount of money is necessary and is the bottleneck, and you aren't donating enough to run into diminishing marginal returns.
freeze:
There are already meat alternatives (seitan, tempeh, tofu, soy, etc.) which provide a meat-like flavor and texture. It's not immediately obvious that in-vitro meat is necessarily more effective than just promoting or refining existing alternatives. I suppose for long-run impact this kind of research may be orders of magnitude more useful though.

Something we should take into account that helps the case for this outreach rather than hurts it is the idea that conversions aren't binary -- someone can be pushed by the ad to be more likely to reduce their meat intake as opposed to fully converted.

Eh, don't forget that humans often hate other humans. Exposing an anti-vegetarian to vegetarian advertisements might induce them to increase their meat intake, and an annoying advocate may move someone from neutral to anti-vegetarian. This effect is very unlikely to be captured by surveys -- and so while it's reasonable to expect the net effect to be positive, it seems reasonable to lower estimates by a bit.

(Most 'political' moves have polarizing effects; you should expect supporters to like you more, and detractors to like you less, afterwards, which seems like a better model than everyone slowly moving towards vegetarianism.)

Eh, don't forget that humans often hate other humans. Exposing an anti-vegetarian to vegetarian advertisements might induce them to increase their meat intake, and an annoying advocate may move someone from neutral to anti-vegetarian.

If you take a non-vegetarian and make them more non-vegetarian, I don't think much is lost, because you never would have captured them anyway. I suppose they might eat more meat or try to persuade other people to become anti-vegetarian, but my intuition is that this effect would be really small.

But you're right that it would need to be considered.

I agree. In addition, I think people who claim that they will eat more meat after seeing a pamphlet or some other promotion for vegetarianism just feel some anger in the moment, but they'll likely forget about it within an hour or so. I can't see someone several weeks later saying to eirself, "I'd better eat extra meat today because of that pamphlet I read three weeks ago."

A1987dM:
BTW, how come certain omnivores dislike vegetarians so much? All other things being equal, one fewer person eating meat will reduce its price, which a meat-eater should be glad about. (Similarly, why do certain straight men dislike gay men that much?)

If someone says that they are vegetarian for moral reasons, then it's an implicit (often explicit) claim that non-vegetarians are less moral, and therefore a status grab. If an omnivore wants neither to become vegetarian nor to lose status, they need to aggressively deny the claim that vegetarianism is more moral.

Vaniver:
Vegetarianism generally includes moral claims as well as preference claims, and responding negatively to conflicting morals is fairly common. Even responding negatively to conflicting preference claims is common. This seems to happen for both tribal reasons (different tastes in music) and possibly practical reasons (drinkers disliking non-drinkers at a party, possibly because of the asymmetric lowering of boundaries). Simple tribalism is one explanation. It also seems likely to me that homophobia is a fitness advantage for men in the presence of bisexual / homosexual men. There's also some evidence that, among men who claim to be straight, increased stated distaste for homosexuals is associated with increased sexual arousal by men, which fits neatly with the previous statement -- someone at higher risk of pursuing infertile / socially costly relationships should be expected to spend more effort avoiding them.
A1987dM:
(Indeed, I was going to mention religion, but I forgot to. OTOH, I think I've met at least one otherwise quite contrarian person who was homophobic.) How so? By encouraging other men to pursue heterosexual relationships, I would increase the demand for straight women and the supply of straight men, which (so long as I'm a straight man myself and the supply of straight women isn't much larger than that of straight men) doesn't sound (from a selfish point of view) like a good thing. [The first time I wrote this paragraph it pattern-matched sexism because it talked about women as a commodity, so I've edited it so that it talks about both women and men as commodities; if anything it now pattern-matches extreme cynicism, and I'm OK with that.] I've heard that cliché, but I had assumed that it was (at least in part) something someone made up to take the piss out of homophobes. Any links?
Vaniver · 2 points · 11y
I mean in the "revulsion to same sex attraction" sense, not the "opposed to gay rights" sense. If a man is receptive to the sexual interest of other men, that makes him less likely to have a relationship with a woman, and thus less likely to have children, and thus is a fitness penalty, and so a revulsion that protects against that seems like a fitness advantage. Here's one.
A1987dM · 0 points · 11y
I was thinking about straight men who dislike gay men whether or not they have been hit on by them. Thanks for the link. (Anyway... Is someone downvoting this entire subthread?)
TheOtherDave · 2 points · 11y
Are you asking more broadly why people in unmarked cases dislike being treated as though they were a marked case? Or have I overgeneralized, here?
A1987dM · 0 points · 11y
I'm asking more broadly why people dislike it when market demand for something they like decreases. (After reading the other replies, I guess that's at least partly because liking stuff with low market demand is considered low-status.)
elharo · 4 points · 11y
In at least some cases, network effects come into play. For example, if I prefer a non-mainstream operating system or computer hardware, there will be less support for my platform of choice. For instance, I may like Windows Phone but I can't get the apps for it that I can for the iPhone or Android. Furthermore, my employer may give me a choice of iPhone or Android but not Windows. Thus someone who prefers Windows Phone would want demand for Windows Phone to increase. Furthermore, supply is not always fixed. For products for which manufacturers can increase output to match demand, increasing demand may increase availability because more retailers will make them available. If economies of scale come into play, increasing demand may also decrease price.
A1987dM · 0 points · 11y
Good point, though in this particular example, I guess meat eaters aren't anywhere near few enough for these effects to be relevant.
TheOtherDave · 2 points · 11y
OK. I observe that both of the examples you provide (vegetarians and homosexuals) have a moral subtext in my culture that many other market-demand scenarios (say, a fondness for peanuts) lack. That might be relevant.
A1987dM · 0 points · 11y
(None of the vegetarians I've met seemed to be particularly bothered when other people ate meat, but as far as I can remember none of them was from the US¹, and from reading other comments in this thread I'm assuming it's different for certain American vegetarians.)

1. Though I did meet a few from an English-speaking country (namely Australia), and there are a few Canadians I met for whom I can't remember off the top of my head whether they ate meat.
TheOtherDave · 2 points · 11y
Fair enough. If there isn't a moral subtext to vegetarianism in your culture, but omnivores there still dislike vegetarians, that's evidence against my suggestion.
A1987dM · 2 points · 11y
I have seen plenty of ‘jokes’ insulting vegetarians in Italian on Facebook; but then again, I've seen at least one about the metric system too, so maybe there are people who translate stuff from English no matter how little sense it makes in the target cultural context.
Eugine_Nier · 1 point · 11y
What army1987 said is not the same thing. Most of the vegetarians I know also don't seem particularly bothered when other people eat meat, but will nonetheless give moral reasons if asked why they don't eat meat.
TheOtherDave · 0 points · 11y
In isolation, I completely agree. In context, though... well, I said that vegetarians have a moral subtext in my culture, and army1987 replied that vegetarians they've met weren't bothered by others eating meat. I interpreted that as a counterexample... that is, as suggesting vegetarians don't have a moral subtext. If I misinterpreted, I of course apologize, but I can't come up with another interpretation that doesn't turn their comment into a complete non sequitur, which seems an uncharitable assumption. If you have a third option in mind for what they might have meant, I'd appreciate you elaborating it.
Eugine_Nier · -5 points · 11y
Eugine_Nier · -1 point · 11y
See also economies of scale.
Eugine_Nier · 1 point · 11y
This has to do with the way gay sex interacts with status.

Since all of my work output goes to effective altruism, I can't afford any optimization of my meals that isn't about health x productivity. This does sometimes make me feel worried about what happens if the ethical hidden variables turn out unfavorably. Assuming I go on eating one meat meal per day, how much vegetarian advocacy would I have to buy in order to offset all of my annual meat consumption? If it's on the order of $20, I'd pay $30 just to be able to say I'm 50% more ethical than an actual vegetarian.
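To put rough numbers on this offset question: below is a minimal sketch in Python, assuming the essay's own figures of 255 animals eaten per omnivore per year, a weighted average of 329.6 days lived per animal, and $0.02 to $65.92 per year of suffering averted. (The one-meal-per-day detail is ignored; this just prices a typical year of meat consumption.)

```python
# Sketch of the offset arithmetic above; all figures are the essay's
# assumptions, not independently verified.
ANIMALS_PER_OMNIVORE_YEAR = 255          # animals eaten per person per year
DAYS_LIVED_PER_ANIMAL = 329.6            # weighted average lifespan on the farm
COST_PER_SUFFERING_YEAR = (0.02, 65.92)  # $ per year of suffering averted

# Years of factory-farm suffering embodied in one person-year of meat eating:
suffering_years = ANIMALS_PER_OMNIVORE_YEAR * DAYS_LIVED_PER_ANIMAL / 365.0
print(f"Suffering-years per omnivore-year: {suffering_years:.0f}")  # ~230

for cost in COST_PER_SUFFERING_YEAR:
    print(f"At ${cost:.2f}/suffering-year, a full offset costs "
          f"${suffering_years * cost:,.0f}/year")
```

On these assumptions the answer hinges entirely on which end of the cost-effectiveness range is right: a few dollars a year at the optimistic end, but roughly $15,000 a year at the pessimistic end, so "$20" is not a safe order of magnitude.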

Eliezer, is that the right way to do the maths? If a high-status opinion-former publicly signals that he's quitting meat because it's ethically indefensible, then others are more likely to follow suit - and the chain-reaction continues. For sure, studies purportedly showing longer lifespans, higher IQs etc of vegetarians aren't very impressive because there are too many possible confounding variables. But what such studies surely do illustrate is that any health-benefits of meat-eating vs vegetarianism, if they exist, must be exceedingly subtle. Either way, practising friendliness towards cognitively humble lifeforms might not strike AI researchers as an urgent challenge now. But isn't the task of ensuring that precisely such an outcome ensues from a hypothetical Intelligence Explosion right at the heart of MIRI's mission - as I understand it at any rate?

I think David is right. It is important that people who may have a big influence on the values of the future lead the way by publicly declaring and demonstrating that suffering (and pleasure) are important wherever they occur, whether in humans or mice.

Said Achmiz · -2 points · 11y
I have to disagree on two points:

1. I don't think that we should take this thesis ("suffering (and pleasure) are important wherever they occur, whether in humans or mice") to be well-established and uncontroversial, even among the transhumanist/singularitarian/lesswrongian crowd.

2. More importantly, I don't think Eliezer or people like him have any obligation to "lead the way", set examples, or be a role model, except insofar as it's necessary for him to display certain positive character traits in order for people to e.g. donate to MIRI, work for MIRI, etc. (For the record, I think Eliezer already does this; he seems, as near as I can tell, to be a pretty decent and honest guy.)

It's really not necessary for him to make any public declarations or demonstrations; let's not encourage signaling for signaling's sake.

Needless to say, I think 1 is settled. As for the second point - Eliezer and his colleagues hope to exercise a lot of control over the future. If he is inadvertently promoting bad values to those around him (e.g. it's OK to harm the weak), he is increasing the chance that any influence they have will be directed towards bad outcomes.

Said Achmiz · -2 points · 11y
That has very little to do with whether Eliezer should make public declarations of things. Are you of the opinion that Eliezer does not share your view on this matter? (I don't know whether he does, personally.) If so, you should be attempting to convince him, I guess. If you think that he already agrees with you, your work is done. Public declarations would only be signaling, having little to do with maximizing good outcomes. As for the other thing — I should think the fact that we're having some disagreement in the comments on this very post, about whether animal suffering is important, would be evidence that it's not quite as uncontroversial as you imply. I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one. Perhaps you should write one? I'd be interested in reading it.

I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one.

I think we should be wary of reasoning that takes the form: "There is no good argument for x on Less Wrong, therefore there are likely no good arguments for x."

Said Achmiz · 1 point · 11y
Certainly we should, but that was not my reasoning. What I said was: I object to treating an issue as settled and uncontroversial when it's not. And the implication was that if this issue is not settled here, then it's likely to be even less settled elsewhere; after all, we do have a greater proportion of vegetarians here at Less Wrong than in the general population. "I will act as if this is a settled issue" in such a case is an attempt to take an epistemic shortcut. You're skipping the whole part where you actually, you know, argue for your viewpoint, present reasoning and evidence to support it, etc. I would like to think that we don't resort to such tricks here. If caring about animal suffering is such a straightforward thing, then please, write a post or two outlining the reasons why. Posters on Less Wrong have convinced us of far weirder things; it's not as if this isn't a receptive audience. (Or, if there are such posts and I've just missed them, link please. Or! If you think there are very good, LW-quality arguments elsewhere, why not write a Main post with a few links, with maybe brief summaries of each?)
davidpearce · 5 points · 11y
SaidAchmiz, you're right. The issue isn't settled: I wish it were so. The Transhumanist Declaration (1998, 2009) of the World Transhumanist Association / Humanity Plus does express a non-anthropocentric commitment to the well-being of all sentience. ["We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise" : http://humanityplus.org/philosophy/transhumanist-declaration/] But I wonder what percentage of lesswrongers would support such a far-reaching statement?
Said Achmiz · -3 points · 11y
I certainly wouldn't, and here's why. Mentioning "non-human animals" in the same sentence and context along with humans and AIs, and "other intelligences" (implying that non-human animals may be usefully referred to as "intelligences", i.e. that they are similar to humans along the relevant dimensions here, such as intelligence, reasoning capability, etc.) reads like an attempt to smuggle in a claim by means of that implication. Now, I don't impute ignoble intent to the writers of that declaration; they may well consider the question settled, and so do not consider themselves to be making any unsupported claims. But there's clearly a claim hidden in that statement, and I'd like to see it made quite explicit, at least, even if you think it's not worth arguing for. That is, of course, apart from my belief that animals do not have intrinsic moral value. (To be truthful, I often find myself more annoyed with bad arguments than wrong beliefs or bad deeds.)
Pablo · 2 points · 11y
Those who have thought most about this issue, namely professional moral philosophers, generally agree (1) that suffering is bad for creatures of any species and (2) that it's wrong for people to consume meat and perhaps other animal products (the two claims that seem to be the primary subjects of dispute in this thread). As an anecdote, Jeff McMahan--a leading ethicist and political philosopher--mentioned at a recent conference that the moral case for vegetarianism was one of the easiest cases to make in all philosophy (a discipline where peer disagreement is pervasive). I mention this, not as evidence that the issue is completely settled, but as a reply to your speculation that there is even more disagreement in the relevant community outside Less Wrong. Frankly, I'm baffled by your insistence that the relevant arguments must be found in the Less Wrong archives. There's plenty of good material out there which I'm happy to recommend if you are interested in reading what others who have thought about these issues much more than either of us have written on the subject.
Said Achmiz · 1 point · 11y
Citation needed. :)

It's interesting that you use Jeff McMahan as an example. In his essay The Meat Eaters, McMahan makes some excellent arguments; his replies to the "playing God" and "against Nature" objections, for instance, are excellent examples of clear reasoning and argument, as is his commentary on the sacredness of species. (As an aside, when McMahan started talking about the hypothetical modification or extinction of carnivorous species, I immediately thought of Stanislaw Lem's Return From the Stars, where the human civilization of a century hence has chemically modified all carnivores, including humans, to be nonviolent, evidently having found some way to solve the ecological issues.)

But one thing he doesn't do is make any argument for why we should care about the suffering of animals. The moral case, as such, goes entirely unmade; McMahan only alludes to its obviousness once or twice. If he thinks it's an easy case to make — perhaps he should go ahead and make it! (Maybe he does elsewhere? If so, a quick googling does not turn it up. Links, as always, would be appreciated.) He just takes "animal suffering is bad" as an axiom. Well, fair enough, but if I don't share that axiom, you wouldn't expect me to be convinced by his arguments, yes?

I don't think the relevant community outside Less Wrong is professional moral philosophers. I meant something more like... "intellectuals/educated people/technophiles/etc. in general", and then even more broadly than that, "people in general". However, this is a peripheral issue, so I'm ok with dropping it.

In case it wasn't clear (sorry!), yes, I am interested in reading good material elsewhere (preferably in the form of blog posts or articles rather than entire books or long papers, at least as summaries); if you have some to recommend, I'd appreciate it. I just think that if such very convincing material exists, you (or someone) should post it (links or even better, a topic summary/survey) on Less Wrong, such that...
Pablo · 5 points · 11y
(FWIW, I'm not the one downvoting your comments, and I think it's a shame that the debate has become so "politicized".) Here are a couple of relevant survey articles:

* Jeff McMahan, "Animals", in The Blackwell Companion to Applied Ethics, Oxford: Blackwell, 2002, pp. 525-536.
* Stuart Rachels, "Vegetarianism", in The Oxford Handbook of Animal Ethics, Oxford: Oxford University Press, 2012, pp. 877–905.

On the seriousness of suffering, see perhaps:

* Thomas Nagel, "Pleasure and Pain", in The View from Nowhere, Oxford: Oxford University Press, 1986, pp. 156-162.

Here are some quotes about pain from contemporary moral philosophers which I believe are fairly representative. (I don't have any empirical studies to back this up, other than my impression from interacting with this community for several years, and my inability to find even a single quote that supports the contrary position.)

* Guy Kahane, The Sovereignty of Suffering: Reflections on Pain's Badness, 2004, p. 2.
* Jamie Mayerfeld, Suffering and Moral Responsibility, Oxford, 2002, p. 111.
* John Broome, "More Pain or Less?", Analysis, vol. 56, no. 2 (April 1996), p. 117.
* Michael Huemer, Ethical Intuitionism, Basingstoke, Hampshire, 2005, p. 250.
* James Rachels, "Animals and Ethics", in Edward Craig (ed.), Routledge Encyclopedia of Philosophy, London, 1998, sect. 3.
Said Achmiz · 2 points · 11y
Thank you! This is an impressive array of references, and I will read at least some of them as soon as I have time. I very much appreciate you taking the time to collect and post them. Thank you. The downvotes don't worry me too much, at least partly because I continue to be unsure about what down/upvotes even mean on this site. (It seems to be an emotivist sort of yay/boo thing? Not that there's necessarily anything terribly wrong with that, it just doesn't translate to very useful data, especially in small quantities.) To anyone who is downvoting my comments: I'd be curious to hear your reasons, if you're willing to explain them publicly. Though I do understand if you want to remain anonymous.
Said Achmiz · 0 points · 11y
So, I've just finished reading this one. To say that I found it unconvincing would be quite the understatement.

For one, Rachels seems entirely unwilling to even take seriously any objections to his moral premises or argument (he, again, takes the idea that we should care about animal suffering as given). He dismisses the strongest and most interesting objections outright; he selects the weakest objections to rebut, and condescendingly adds that "Resistance to [such] arguments usually stems from emotion, not reason. ... Moreover, they [opponents of his argument] want to justify their next hamburger."

Rachels then launches into a laundry list of other arguments against eating factory farmed animals, not based on a moral concern for animals. It seems that factory farming is bad in literally every way! It's bad for animals, it's bad for people, it causes diseases, eating meat is bad for our health, and more, and more. (I'm always wary of such claims. When someone tells you thing A has bad effect X, you listen with concern; when they add that oh yeah, it also has bad effect Y! And Z! And W! ... and then you discover that their political/ideological alignment is "opponent of thing A"... suspicion creeps in. Can eating meat really just be universally bad, bad in every way, irredeemably bad so as to be completely unmotivated? Well, there's no law of nature that says that can't be the case (e.g. eating uranium probably has no upside), but I'm inclined to treat such claims with skepticism, and, in any case, I'd prefer each aspect of meat-eating to be argued against separately, such that I can evaluate them individually, not be faced with a shotgun barrage of everything at once.)

Incidentally, I find the "factory farming is detrimental to local human populations" argument much more convincing than any of the others, certainly far more so than the animal-suffering argument. If the provided facts are accurate, then that's the most salient case for stopping the practice ...
wedrifid · 5 points · 11y
"Partially hydrogenated vegetable oils prevent heart disease and improve lipid profile". To the extent that it is true that it is trivial to find someone claiming the opposite of every nutritional claim it is trivial to find people who are clearly just plain wrong. (The position you are taking is far too strong to be tenable.)
Said Achmiz · 0 points · 11y
The opposite claim of "Food X causes problem Y" is not necessarily "Food X reduces problem Y". "It is not the case that (or "there is no evidence that") Food X causes problem Y" also counts as "opposite". That's how I meant it: every time someone says "X causes Y", there's some other study that concludes that eh, actually, it's not clear that X causes Y, and in fact probably doesn't.
davidpearce · 4 points · 11y
SaidAchmiz, one difference between factory farming and the Holocaust is that the Nazis believed in the existence of an international conspiracy of the Jews to destroy the Aryan people. Humanity's only justification for exploiting and killing nonhuman animals is that we enjoy the taste of their flesh. No one believes that factory-farmed nonhuman animals have done "us" any harm.

Perhaps the parallel with the (human) Holocaust fails for another reason. Pigs, for example, are at least as intelligent as prelinguistic toddlers; but are they less sentient? The same genes, neural processes, anatomical pathways and behavioural responses to noxious stimuli are found in pigs and toddlers alike. So I think the burden of proof here lies on meat-eating critics who deny any equivalence.

A third possible reason for denying the parallel with the Holocaust is the issue of potential. Pigs (etc) lack the variant of the FOXP2 gene implicated in generative syntax. In consequence, pigs will never match the cognitive capacities of many but not all adult humans. The problem with this argument is that we don't regard, say, humans with infantile Tay-Sachs who lack the potential to become cognitively mature adults as any less worthy of love, care and respect than healthy toddlers. Indeed the Nazi treatment of congenitally handicapped humans (the "euthanasia" program) is often confused with the Holocaust, for which it provided many of the technical personnel.

A fourth reason to deny the parallel with the human Holocaust is that it's offensive to Jewish people. Yet this uncomfortable parallel has been drawn by some Jewish writers; the comparison to "an eternal Treblinka", for example, was made by Isaac Bashevis Singer, the Jewish-American Nobel laureate.

Apt comparison or otherwise, creating nonhuman-animal-friendly intelligence is going to be an immense challenge.
Said Achmiz · 1 point · 11y
It seems to me like a far more relevant justification for exploiting and killing nonhuman animals is "and why shouldn't we do this...?", which is the same justification we use for exploiting and killing ore-bearing rocks. Which is to say, there's no moral problem with doing this, so it needs no "justification". I make it clear in this post that I don't deny the equivalence, and don't think that very young children have the moral worth of cognitively developed humans. (The optimal legality of Doing Bad Things to them is a slightly more complicated matter.) Well, I certainly do. Eh...? Expand on this, please; I'm quite unsure what you mean here.
davidpearce · 3 points · 11y
SaidAchmiz, to treat exploiting and killing nonhuman animals as ethically no different from "exploiting and killing ore-bearing rocks" does not suggest a cognitively ambitious level of empathetic understanding of other subjects of experience. Isn't there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever? Insofar as we want a benign outcome for humans, I'd have thought that the computational equivalent of Godlike capacity for perspective-taking is precisely what we should be aiming for.
Watercressed · 8 points · 11y
No. Someone who cares about human-level beings but not animals will care about the plight of humans in the face of an AI, but there's no reason they must care about the plight of animals in the face of humans, because they didn't care about animals to begin with. It may be that the best construction for a friendly AI is some kind of complex perspective taking that lends itself to caring about animals, but this is a fact about the world; it falls on the is side of the is-ought divide.
Said Achmiz · 3 points · 11y
What the heck does this mean? (And why should I be interested in having it?) Wikipedia says: ... If that's how you're using "sentience", then: 1) It's not clear to me that (most) nonhuman animals have this quality; 2) This quality doesn't seem central to moral worth. So I see no irony. If you use "sentience" to mean something else, then by all means clarify. There are some other problems with your formulation, such as: 1) I don't "belong to" MIRI (which is the organization you refer to, yes?). I have donated to them, which I suppose counts? 2) Your description of their mission, specifically the implied comparison of an FAI with humans, is inaccurate. You use a lot of terms ("cognitively ambitious", "cognitively humble", "empathetic understanding", "Godlike capacity for perspective-taking" (and "the computational equivalent" thereof)) that I'm not sure how to respond to, because it seems like either these phrases are exceedingly odd ways of referring to familiar concepts, or else they are incoherent and have no referents. I'm not sure which interpretation is dictated by the principle of charity here; I don't want to just assume that I know what you're talking about. So, if you please, do clarify what you mean by... any of what you just said.
A1987dM · -1 point · 11y
Huh, no, you don't normally go out of your way to do stuff unless there's something in it for you or someone else.
Said Achmiz · 3 points · 11y
Well, first of all, this is just false. People do things for the barest, most trivial of reasons all the time. You're walking along the street and you kick a bottle that happens to turn up in your path. What's in it for you? In the most trivial sense you could say that "I felt like it" is what's in it for you, but then the concept rather loses its meaning.

In any case, that's a tangent, because you mistook my meaning: I wasn't talking about the motivation for doing something. I (and davidpearce, as I read him) was talking about the moral justification for eating meat. His comment, under my interpretation, was something like: "Exploiting and killing nonhuman animals carries great negative moral value. What moral justification do we have for doing this? (i.e. what positive moral value counterbalances it?) None but that we enjoy the taste of their flesh." (Implied corollary: and that is inadequate moral justification!)

To which my response was, essentially, that morally neutral acts do not require such justification. (And by implication, I was contradicting davidpearce by claiming that killing and eating animals is a morally neutral act.) If I smash a rock, I don't need to justify that (unless the rock was someone's property, I suppose, which is not the issue we're discussing). I might have any number of motivations for performing a morally neutral act, but they're none of anyone's business, and certainly not an issue for moral philosophers.

(Did you really not get all of this intended meaning from my comment...? If that's how you interpreted what I said, shouldn't you be objecting that smashing ore-bearing rocks is not, in fact, unmotivated, as I would seem to be implying, under your interpretation?)
RobertWiblin · 4 points · 11y
"Public declarations would only be signaling, having little to do with maximizing good outcomes." On the contrary, trying to influence other people in the AI community to share Eliezer's (apparent) concern for the suffering of animals is very important, for the reason given by David. "I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one." a) Less Wrong doesn't contain the best content on this topic. b) Most of the posts disputing whether animal suffering matter are written by un-empathetic non-realists, so we would have to discuss meta-ethics and how to deal with meta-ethical uncertainty to convince them. c) The reason has been given by Pablo Stafforini - when I directly experience the badness of suffering, I don't only perceive that suffering is bad for me (or bad for someone with blonde hair, etc), but that suffering would be bad regardless of who experienced it (so long as they did actually have the subjective experience of suffering). d) Even if there is some uncertainty about whether animal suffering is important, that would still require that it be taken quite seriously; even if there were only a 50% chance that other humans mattered, it would be bad to lock them up in horrible conditions, or signal through my actions to potentially influential people that doing so is OK.
[anonymous] · 0 points · 11y
This is an interesting argument, but it seems a bit truncated. Could you go into more detail?
Said Achmiz · 0 points · 11y
Where is the best content on this topic, in your opinion? Eh? Unpack this, please.

If it's on the order of $20, I'd pay $30 just to be able to say I'm 50% more ethical than an actual vegetarian.

That's not exactly true, since advocating vegetarianism has more effects than simply reducing the consumption of meat. For one thing, it alters how people think about and live their lives. If that $30 of spending produces a certain amount of human suffering (say, from self-induced guilt over eating meat), then your ethicalness isn't as high as calculated.

Peter Wildeford · 9 points · 11y
Allegedly, vegetarian diets are supposed to be healthier, but I don't know if that's true. I also don't know how much of a productivity drain, if any, a vegetarian diet would be. I've personally noticed no difference.

It depends on what the cost-effectiveness ends up looking like, but $30 sounds fine to me. Additionally or alternatively, you could eat larger animals instead of smaller animals (i.e. more beef and less chicken) so as to do less harm with each meal.
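The beef-versus-chicken suggestion can be made concrete with a quick sketch; the lifespans and meat yields below are ballpark assumptions for illustration, not figures from the essay or the comment.

```python
# Days of confinement embodied in each kg of meat, for a few animals.
# All lifespan and yield figures are rough ballpark assumptions.
animals = {
    # name: (days alive before slaughter, kg of edible meat per animal)
    "broiler chicken": (45, 2),
    "pig": (180, 55),
    "beef cow": (550, 220),
}

for name, (days_alive, kg_meat) in animals.items():
    print(f"{name}: ~{days_alive / kg_meat:.1f} days of farmed life per kg")
```

On these assumptions chicken comes out around 22 days of farmed life per kg versus roughly 2.5 for beef, i.e. an order of magnitude fewer animal-days per meal when substituting beef for chicken, which is the intuition behind the comment.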
Mestroyer · 3 points · 11y
If the ethical hidden variables turn out unfavorably, you have more to make up for than that. HPJEV thinking animals are not sentient has probably lost the world more than one vegetarian-lifetime.
Eliezer Yudkowsky · 1 point · 11y
This seems unlikely to be a significant fraction of my impact upon the summum bonum, for good or ill.
Raemon · 4 points · 11y
I'm actually fairly concerned about the possibility of you influencing the beliefs of AI researchers, in particular. I'm not sure if it ends up mattering for FAI, if executed as currently outlined. My understanding is that the point is that it'll be able to predict the collective moral values of humanity-over-time (or safely fail to do so), and your particular guesses about ethical-hidden-variables shouldn't matter. But I can imagine plausible scenarios where various ethical-blind-spots on the part of the FAI team, or people influenced by it, end up mattering a great deal in a pretty terrifying way. (Maybe people in that cluster decide they have a better plan, and leave and do their own thing, where ethical-blind-spots/hidden-variables matter more). This concern extends beyond vegetarianism and doesn't have a particular recommended course of action beyond "please be careful about your moral reasoning and public discussion thereof", which presumably you're doing already, or trying to.
Eliezer Yudkowsky · 9 points · 11y
FAI builders do not need to be saints. No sane strategy would be set up that way. They need to endorse principles of non-jerkness enough to endorse indirect normativity (e.g. CEV). And that's it. Morality is not sneezed into AIs by contact with the builders.
Mestroyer · 8 points · 11y
Haven't you considered extrapolating the volition of a single person if CEV for many people looks like it won't work out, or will take significantly longer? Three out of three non-vegetarian LessWrongers (my best model for MIRI employees, present and future, aside from you) I have discussed it with say they care about something besides sentience, like sapience. Because they have believed that that's what they care about for a while, I think it has become their true value, and CEV based on them alone would not act on concern for sentience without sapience. These are people who take MWI and cryonics seriously, probably because you and Robin Hanson do and have argued in favor of them. And you could probably change the opinion of these people, or at least people on the road to becoming like them, with a few blog posts. Because in HPMOR you used the word "sentience," which is typically used in sci fi to mean sapience (instead of using something like "having consciousness"), I am worried you are sending people down that path by letting them think HPJEV draws the moral-importance line at sapience, besides my concern that you are showing others that a professional rationalist thinks animals aren't sentient.
Raemon · 2 points · 11y
I did finally read the 2004 CEV paper recently, and it was fairly reassuring in a number of ways. (The "Jews vs Palestinians cancel each other but Martin Luther King and Gandhi add together" thing sounded... plausible but a little too cutely elegant for me to trust at first glance.) I guess the question I have is (this is less relevant to the current discussion but I'm pretty curious) - in the event where CEV fails to produce a useful outcome (i.e. values diverge too much), is there a backup plan, that doesn't hinge on someone's judgment? (Is there a backup plan, period?)
[anonymous] · 0 points · 11y
Indirect Normativity is more a matter of basic sanity than non-jerky altruism. I could be a total jerk and still realize that I wanted the AI to do moral philosophy for me. Of course, even if I did this, the world would turn out better than anyone could imagine, for everyone. So yeah, I think it really has more to do with being A) sane enough to choose Indirect Normativity, and B) mostly human. Also, I would regard it as a straight-up mistake for a jerk to extrapolate anything but their own values. (Or a non-jerk for that matter). If they are truly altruistic, the extrapolation should reflect this. If they are not, building altruism or egalitarianism in at a basic level is just dumb (for them, nice for me). (Of course then there are arguments for being honest and building in altruism at a basic level like your supporters wanted you to. Which then suggests the strategy of building in altruism towards only your supporters, which seems highly prudent if there is any doubt about who we should be extrapolating. And then there is the meta-uncertain argument that you shouldn't do too much clever reasoning outside of adult supervision. And then of course there is the argument that these details have low VOI compared to making the damn thing work at all. At which point I will shut up.)
Decius · 2 points · 11y
Wouldn't that $30 come from your work output that is currently going to effective altruism?
Eliezer Yudkowsky · 3 points · 11y
Arguably worth it for $30 of reduced guilt, bragging rights and twisted, warped enjoyment of ethical weirdness.
Decius · -2 points · 11y
Using the worst estimate, that would mean that it's arguable that a 1 in 50 chance of killing a child under 5 is worth that much reduced guilt, bragging rights, and twisted, warped enjoyment of ethical weirdness. I'd call you a monster, but I'd totally take actions which fail to prevent the death of an entire kid I'd never meet anyway if I could do so without suffering any risk of being blamed and could get a warped enjoyment of ethical weirdness. We monsters.
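The "1 in 50" figure appears to assume a cost of roughly $1,500 to save a child's life through top global health charities; that per-life figure is my assumption about what "the worst estimate" refers to, not something stated in the thread, though it does reproduce the arithmetic exactly:

```python
# Unpacking the "1 in 50" figure under an assumed $1,500-per-life estimate.
indulgence = 30        # dollars diverted from effective giving
cost_per_life = 1500   # assumed cost to save one child's life
print(f"Expected lives forgone: 1 in {round(cost_per_life / indulgence)}")  # 1 in 50
```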

Several people have been attempting to reductio my pro-human point of view, so I'll do the same back to the pro-animal people here: how simple is the simplest animal you're willing to assign moral worth to? Are you taking into account meta-uncertainty about the moral worth of even very simple animals? (What about living organisms outside of the animal kingdom, like bacteria? Viruses?) If you don't care about organisms simple enough that they don't suffer, does it seem "arbitrary" to you to single out a particular mental behavior as being the mental behavior that signifies moral worth? Does it seem "mindist" to you to single out having a particular kind of mind as being the thing that signifies moral worth?

If you calculated that assigning even very small moral worth to a simple but sufficiently numerous organism leads to the conclusion that the moral worth of non-human organisms on Earth strongly outweighs, in aggregate, the moral worth of humans, would you act on it (e.g. by making the world a substantially better place for some bacterium by infecting many other animals, such as humans, with it)?

If you were the only human left on Earth and you couldn't find enough non-meat to survive on, would you kill yourself to avoid having to hunt to survive?

How do you resolve conflicts among organisms (e.g. predatorial or parasitic relationships)?

how simple is the simplest animal you're willing to assign moral worth to?

I don't value animals per se; it is their suffering I care about and want to prevent. If it turns out that even the tiniest animals can suffer, I will take this into consideration. I'm already taking insects or nematodes into consideration probabilistically; I think it is highly unlikely that they are sentient, and I think that even if they are sentient, their suffering might not be as intense as that of mammals, but since their numbers are so huge, the well-being of all those small creatures makes up a non-negligible term in my utility function.

If you don't care about organisms simple enough that they don't suffer, does it seem "arbitrary" to you to single out a particular mental behavior as being the mental behavior that signifies moral worth?

No, it seems completely non-arbitrary to me. Only sentient beings have a first-person point of view, only for them can states of the world be good or bad. A stone cannot be harmed in the same way a sentient being can be harmed. Introspectively, my suffering is bad because it is suffering, there is no other reason.

If you calculated that assigning even very small moral worth to a simple but sufficiently numerous organism leads to the conclusion that the moral worth of non-human organisms on Earth strongly outweighs, in aggregate, the moral worth of humans, would you act on it?

...

I'm already taking insects or nematodes into consideration probabilistically; I think it is highly unlikely that they are sentient, and I think that even if they are sentient, their suffering might not be as intense as that of mammals, but since their numbers are so huge, the well-being of all those small creatures makes up a non-negligible term in my utility function.

A priori, it seems that the moral weight of insects would either be dominated by their massive numbers or by their tiny capacities. It's a narrow space where the two balance and you get a non-negligible but still-not-overwhelming weight for insects in a utility function. How did you decide that this was right?
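The narrowness of that balancing zone can be illustrated with a toy expected-value calculation; every input below is an illustrative assumption, not anyone's considered estimate.

```python
# Toy expected-value model: aggregate weight = population size
# x P(sentient) x assumed intensity of experience relative to a human.
# All inputs are illustrative assumptions.
populations = {
    # name: (count, P(sentient), relative intensity)
    "humans":   (7e9,  1.0,  1.0),
    "chickens": (2e10, 0.9,  0.3),
    "insects":  (1e19, 0.05, 0.001),
}

for name, (count, p_sentient, intensity) in populations.items():
    print(f"{name}: aggregate weight ~ {count * p_sentient * intensity:.1e}")
```

With these inputs insects dominate by about five orders of magnitude; shave a few more orders of magnitude off P(sentient) or the intensity term and they become negligible. The "non-negligible but not overwhelming" zone in between is indeed narrow.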

Jabberslythe · 4 points · 11y
I think there are good arguments for suffering not being weighted by number of neurons, and if you assign even a 10% chance to that being the case, you end up with insects (and maybe nematodes and zooplankton) dominating the utility function because of their overwhelming numbers. Having said that, ways of increasing the well-being of these may be quite a bit different from increasing it for larger animals. In particular, because so many of them die within the first few days of life, their averaged life quality seems like it would be terrible. So reducing the populations looks like the current best option. There may be good instrumental reasons for focusing on less controversial animals and hoping that this promotes the kind of antispeciesism that spills over to concern about insects and does work for improving similar situations in the future.
Pablo · 9 points · 11y
For what it's worth, here are the results of a survey that Vallinder and I circulated recently. 85% of expert respondents, and 89% of LessWrong respondents, believe that there is at least a 1% chance that insects are sentient, and 77% of experts and 69% of LessWrongers believe there is at least a 20% chance that they are sentient.
Jabberslythe · 4 points · 11y
Very interesting. What were they experts in? And how many people responded?
Pablo · 5 points · 11y
They were experts in pain perception and related fields. We sent the survey to about 25 people, of whom 13 responded. Added (6 November, 2015): If there is interest, I can reconstruct the list of experts we contacted. Just let me know.
Lukas_Gloor · 3 points · 11y
Yes, my current estimate for that is less than 1%, but this is definitely something I should look into more closely. This has been on my to-do list for quite a while already. Another thing to consider is that insects are a diverse bunch. I'm virtually certain that some of them aren't conscious, see for instance this type of behavior. OTOH, cockroaches or bees seem to be much more likely to be sentient.
Jabberslythe · 1 point · 11y
Yes. Bees and cockroaches both have about a million neurons, compared with maybe 100,000 for most insects.
TheOtherDave · 1 point · 11y
Can you summarize the properties you look for when making these kinds of estimates of whether an insect is conscious/sentient/etc.? Or do you make these judgments based on more implicit/instinctive inspection?
Jabberslythe · 1 point · 11y
I mostly do it by thinking about what I would accept as evidence of pain in more complex animals and seeing if it is present in insects. Complex pain behavior and evolutionary and functional homology relating to pain are things to look for. There is quite a bit of research on complex pain behavior in crabs by Robert Elwood. I'd link his site but it doesn't seem to be up right now. You should be able to find the articles, though. Crabs have 100,000 neurons, which is around what many insects have. Here is a PDF of a paper finding that a bunch of common human mind-altering drugs affect crawfish and fruit flies.
TheOtherDave · 0 points · 11y
Thanks.
Lukas_Gloor · 0 points · 11y
It is quite implicit/instinctive. The problem is that without having solved the problem of consciousness, there is also uncertainty about what you're even looking for. Nociception seems to be a necessary criterion, but it's not sufficient. In addition, I suspect that consciousness' adaptive role has to do with the weighting of different "possible" behaviors, so there has to be some learning behavior or variety in behavioral subroutines. I actually give some credence to extreme views like Dennett's (and also Eliezer's if I'm informed correctly), which state that sentience implies self-awareness, but my confidence for that is not higher than 20%. I read a couple of papers on invertebrate sentience and I adjusted the expert estimates downwards somewhat because I have a strong intuition that many biologists are too eager to attribute sentience to whatever they are studying (also, it is a bit confusing because opinions are all over the place). Brian Tomasik lists some interesting quotes and material here. And regarding the number of neurons thing, there I'm basically just going by intuition, which is unfortunate so I should think about this some more.
davidpearce · 4 points · 11y
Ice9, perhaps consider uncontrollable panic. Some of the most intense forms of sentience that humans undergo seem to be associated with a breakdown of meta-cognitive capacity. So let's hope that what it's like to be an asphyxiating fish, for example, doesn't remotely resemble what it feels like to be a waterboarded human. I worry that our intuitive dimmer-switch model of consciousness, i.e. more intelligent = more sentient, may turn out to be mistaken.
TheOtherDave · 0 points · 11y
OK, thanks for clarifying.
Lukas_Gloor · 0 points · 11y
Good point, there is reason to expect that I'm just assigning numbers in a way that makes the result come out convenient. Last time I did a very rough estimate, the expected suffering of insects and nematodes (given my subjective probabilities) came out around half the expected suffering of all decapodes/amphibians-and-larger wild animals. And then wild animals outnumber farm animals by around 2-3 orders of magnitude in terms of expected suffering, and farm animals outnumber humans by a large margin too. So if I just cared about current suffering, or suffering on earth only, then "non-negligible" would indeed be an understatement for insect suffering.

However, what worries me most is not the suffering that is happening on earth. If space colonization goes wrong or even non-optimally, the current amount of suffering could be multiplied by orders of magnitude. And this might happen even if our values improve. Consider the case of farmed animals: humans probably never cared as much for the welfare of animals as they do now, but at the same time, we have never caused as much direct suffering to animals as we do now. If you primarily care about reducing the absolute amount of suffering, then whatever lets the amount of sentience skyrocket is a priori very dangerous.
Qiaochu_Yuan · 3 points · 11y
Is the blue-minimizing robot suffering if it sees a lot of blue? Would you want to help alleviate that suffering by recoloring blue things so that they are no longer blue?

I don't see the relevance of this question, but judging by the upvotes it received, it seems that I'm missing something.

I think suffering is suffering, no matter the substrate it is based on. Whether such a robot would be sentient is an empirical question (in my view anyway, it has recently come to my attention that some people disagree with this). Once we solve the problem of consciousness, it will turn out that such a robot is either conscious or that it isn't. If it is conscious, I will try to reduce its suffering. If the only way to do that would involve doing "weird" things, I would do weird things.

Qiaochu_Yuan · 2 points · 11y
The relevance is that my moral intuitions suggest that the blue-minimizing robot is morally irrelevant. But if you're willing to bite the bullet here, then at least you're being consistent (although I'm no longer sure that consistency is such a great property of a moral system for humans).

1) I am okay with humanely raised farm meat (I found a local butcher shop that sources from farms I consider ethical)

2) If I didn't have access to civilization, I would probably end up hunting to survive, although I'd try to do so as rarely and humanely as was possible given my circumstances. (I'm only like 5% altruist, I just try to direct that altruism as effectively as possible and if push comes to shove I'm a primal animal that needs to eat. I'm skeptical of people who claim otherwise)

3) I'm currently okay with eating insects, mussels, and similar simplish animals, whose lack of sentience I can make pretty good guesses about. (If insects do turn out to have sentience, that's a pretty inconvenient world to have to live in, morally.)

4) I'm approximately average-preference-utilitarian. I value there being more creatures with more complex and interesting capacities for preference satisfaction (this is arbitrary and I'm fine with that). If I had to choose between humans and animals, I'd choose humans. But that's not the choice offered to humans RE vegetarianism - what's at stake is not humanity and complex relationships/art/intellectual-endeavors - it's pretty straightforward...

Swimmer963 (Miranda Dixon-Luinenburg) · 5 points · 11y
This is pretty much the case for me. I was vegetarian for a while in high school–oddly enough, less for reducing-suffering ethical reasons than for "it costs fewer resources to produce enough plants to feed the world population than to produce enough meat, as animals have to be fed plants and are a low-efficiency conversion of plant calories, so in order to better use the planet's resources, everyone should eat more plants and less meat." I consistently ended up with low iron and B12. It's possible to get enough iron, B12, and protein as a vegetarian, but you do have to plan your meals a bit more carefully (i.e. always have beans with rice so you get complete protein) and possibly eat foods that you don't like as much. Right now I cook about one dish with meat in it per week, and I haven't had any iron or B12 deficiency problems since graduating high school 4 years ago. In general, I optimize food for low cost as well as health value and ethics, but if in-vitro meat became available, I think this is valuable enough in the long run that I would be willing to "subsidize" its production and commercialization by paying higher prices.
maia · -1 point · 11y
Oddly, this sentence is more or less exactly true for me as well. Only on LessWrong...
wedrifid · 4 points · 11y
That reasoning does not seem to be either unique to or particularly prevalent on lesswrong.
maia · 0 points · 11y
Fair enough. I've never encountered it elsewhere, myself.
wedrifid · 2 points · 11y
(Typically it is expressed as an additional excuse/justification for the political and personal position being taken for unrelated reasons.)
Said Achmiz · 2 points · 11y
Could you (very briefly) expand on this, or even just give a link with a reasonably accessible explanation? I am curious.
MTGandP · 3 points · 11y
From the American Dietetic Association: http://www.ncbi.nlm.nih.gov/pubmed/19562864
Said Achmiz · 0 points · 11y
Interesting, thank you.
MugaSofer · 2 points · 11y
Well, considering the existence of healthy vegetarians, it seems clear that we evolved to be at least capable of surviving in a low-meat environment. I don't have any sources or anything, and I'm pretty lazy, but I've been vegetarian since childhood, and never had any health problems as a result AFAICT.
Said Achmiz · 5 points · 11y
I am entirely willing to take your word on this, but you know what they say about "anecdote" and declensions thereof. In this case specifically, one of the few things that seem to be reliably true about nutrition is that "people are different, and what works for some may fail or be outright disastrous for others". In any case, Raemon seemed to be making a weaker claim than "vegetarianism has no serious health downsides". "Healthy portions of meat amount to far less than the 32 oz steak a day implied by some anti-vegetarian doomsayers" is something I'm completely willing to grant.
MugaSofer · 2 points · 11y
Fair enough.
elharo · 2 points · 11y
Considering the existence of healthy vegetarians, it seems clear that we evolved to be at least capable of surviving in a low-meat environment supported by modern agriculture that produces large quantities of concentrated non-meat protein in the form of tofu, eggs, whey protein, beans, and the like. This may be a happy accident. Are there any vegetarian hunter-gatherer societies?
TheOtherDave · 5 points · 11y
Wouldn't these be "gatherer societies" pretty much definitionally?
wedrifid · 2 points · 11y
(Unless there are Triffids!)
TheOtherDave · 1 point · 11y
Obligatory Far Side reference
Nornagest · 0 points · 11y
I've been having a hell of a time finding trustworthy cites on this, possibly because there are so many groups with identity stakes in the matter -- obesity researchers and advocates, vegetarians, and paleo diet adherents all have somewhat conflicting interests in ancestral nutrition. That said, this survey paper describes relatively modern hunter-gatherer diets ranging from 1% vegetable (the Nunamiut of Alaska) to 74% vegetable (the Gwi of Africa), with a mean somewhere around one third; no entirely vegetarian hunter-gatherers are described. This one describes societies subsisting on up to 90% gathered food (I don't know whether or not this is synonymous with "vegetable"), but once again no exclusively vegetarian cultures and a mean around 30%. I should mention by way of disclaimer that modern forager cultures tend to live in marginal environments and these numbers might not reflect the true ancestral proportions. And, of course, that this has no bearing either way on the ethical dimensions of the subject.
Raemon · 2 points · 11y
I'm having trouble finding... any kind of dietary information that isn't obviously politicized (in any direction) right now. But basically, when people think of a "serving" of meat, they imagine a large hunk of steak, when in fact a serving is more like the size of a deck of cards. A healthy diet has enough things going on in it besides meat that removing meat shouldn't feel like it's gutting out your entire source of pleasure from food.
Said Achmiz · 1 point · 11y
Ah. Yeah, I don't eat meat in huge chunks or anything. But meat sure is delicious, and comes in a bunch of different formats. Obviously removing meat would not totally turn my diet into a bleak, gray desert of bland gruel; I don't think anyone would claim that. But it would make it meaningfully less enjoyable, on the whole.
Qiaochu_Yuan · 2 points · 11y
This all seems pretty reasonable (except that I don't think the validity of a human preference has much to do with how difficult it is for non-humans to have the same preference).
MugaSofer · -3 points · 11y
This fact seems to outweigh the rest of your comment.
Vaniver · 7 points · 11y
Bugs, both true and not, are most definitely part of the animal kingdom.
Qiaochu_Yuan · 0 points · 11y
Whoops. Edited.
Xodarap · 4 points · 11y
It doesn't seem like you're really criticizing "pro-animal people" - you're just critiquing utilitarianism. (e.g. "Is it arbitrary to state that suffering is bad?" "What if you could help others only at great expense to yourself?") Supposing one does accept utilitarian principles, is there any reason why we shouldn't care about the suffering of non-humans?
Qiaochu_Yuan · -1 point · 11y
This is half a criticism and half a reflection of arguments that have been used against my position that I think are problematic. To the extent that you think these arguments are problematic, I probably agree. Resources spent on alleviating the suffering of non-humans are resources that aren't spent on alleviating the suffering of humans, which I value a lot more.
elharo · 1 point · 11y
That's a false dichotomy. Resources that stop being spent on alleviating the suffering of non-humans do not automatically translate into resources that are spent on alleviating the suffering of humans. Nor is it the case that there are insufficient resources in the world today to eliminate most human suffering. The issue there is purely one of distribution of wealth, not gross wealth.
Qiaochu_Yuan · 0 points · 11y
Yes, but they're less available. Maybe I triggered the wrong intuition with the word "resources." I had in mind resources like the time and energy of intelligent people, not resources like money. I think it's plausible to guess that time and energy spent on one altruistic cause really does funge directly against time and energy spent on others, e.g. because of good-deed-for-the-day effects.
Xodarap · 1 point · 11y
Why? (Keeping in mind that we have agreed the basic tenets of utilitarianism are correct: pain is bad etc.)
Qiaochu_Yuan · 2 points · 11y
Oh. No. Human pain is bad. The pain of sufficiently intelligent animals might also be bad. Fish pain and under is irrelevant.
Pablo · 8 points · 11y
There is nothing inconsistent about valuing the pain of some animals, but not of others. That said, I find the view hard to believe. When I reflect on why I think pain is bad, it seems clear that my belief is grounded in the phenomenology of pain itself, rather than in any biological or cognitive property of the organism undergoing the painful experience. Pain is bad because it feels bad. That's why I think pain should be alleviated irrespective of the species in which it occurs.
Qiaochu_Yuan · 0 points · 11y
I don't share these intuitions. Pain is bad if it happens to something I care about. I don't care about fish.
Pablo · 4 points · 11y
I don't care about fish either. I care about pain. It just so happens that fish can experience pain.
Nornagest · -1 point · 11y
Truthfully, I'm not even sure I believe pain is bad in the relevant sense. It's certainly something I'd prefer to avoid under most circumstances, but when I think about it in detail there always ends up being a "because" in there: because it monopolizes attention, because in sufficient quantity it can thoroughly screw up your motivational and emotional machinery, because it's often attached to particular actions in a way that limits my ability to do things. It doesn't feel like a root-level aversion to my reasoning self: when I've torn a ligament and can't flex my foot in a certain way without intense stabbing agony, I'm much more annoyed by the things it prevents me from doing than by the pain it gives me, and indeed I remember the former much better than the latter. I haven't thought this through rigorously, but if I had to take a stab at it right now I'd say that pain is bad in roughly the same way that pleasure is good: in other words, it works reasonably well as a rough experiential pointer to the things I actually want to avoid, and it does place certain constraints on the kind of life I'd want to live, but I'd expect trying to ground an entire moral system in it to give me some pretty insane results once I started looking at corner cases.
Xodarap · -3 points · 11y
You probably don't want to draw the line at fish.
Qiaochu_Yuan · 0 points · 11y
What point are you trying to make with that link?
Swimmer963 (Miranda Dixon-Luinenburg) · 2 points · 11y
Probably that fish don't seem to be hugely different from amphibians/reptiles, birds, and mammals in terms of the six substitute-indicators-for-feeling-pain, and so it's hard to say whether their pain experience is different. I would agree that fish pain is less relevant than human pain (they have a central nervous system, yes, but less of one, and a huge part of what makes human pain bad is the psychological suffering associated with it).
Qiaochu_Yuan · 2 points · 11y
My claim was that I don't care about fish pain, not that fish pain is too different from human pain to matter. Rather, fish are too different from humans to matter.
MugaSofer · 1 point · 11y
Could you expand on this idea?
Swimmer963 (Miranda Dixon-Luinenburg) · 0 points · 11y
Fair enough. I think "too X to matter" is a complex concept, though.
Xodarap · -4 points · 11y
How is the statement "fish and humans feel pain approximately equally" different from the statement "we should care about fish and human pain approximately equally?"
shminux · 1 point · 11y
You and I feel pain approximately equally, but I care about mine a lot more than about yours.
MugaSofer · 1 point · 11y
Do you consider this part of morality? I mean, I personally experience selfish emotions, but I usually, y'know, try to override them?
Nornagest · 6 points · 11y
Most people probably wouldn't consider that moral as such (though they'd likely be okay with it on pragmatic grounds), but the more general idea of treating some people's pain as more significant than others' is certainly consistent with a lot of moral systems. Common privileged categories: friends, relatives, children, the weak or helpless, people not considered evil.
shminux · 2 points · 11y
It's perfectly moral for me to be selfish to some degree, yes. I cannot care about others if I don't care about myself. You might work differently, but utter unselfishness seems like an anomaly.
wedrifid · 2 points · 11y
It also seems like a lie (to the self or to others).
0Xodarap11y
Fair enough. To restate but with different emphasis: "we should care about fish and human pain approximately equally?"
1Qiaochu_Yuan11y
"I care about X's pain" is mostly a statement about X, not a statement about pain. I don't care about fish and I care about humans. You may not share this moral preference, but are you claiming that you don't even understand it?
-2Xodarap11y
No, I have a lot of biases like this: the halo effect makes me think that humans' ability to do math makes our suffering more important, "what you see is all there is" allows me to believe that slaughterhouses which operate far away must be morally acceptable, and so forth. Anyway, fish suffering isn't a make-or-break decision. People very frequently have the opportunity to choose a bean burrito over a chicken one (or even a beef burrito over a chicken one), and from what Peter has presented here it seems like this is an extremely effective way to reduce suffering.
2Xodarap11y
I may be misunderstanding you, but I thought you were suggesting that there is a non-arbitrary set of physiological features that vertebrates share but fish don't. I was pointing out that this doesn't seem to be the case.
0Qiaochu_Yuan11y
No, I'm suggesting that I don't care about fish.
1MugaSofer11y
Can't speak for all vegetarians/pro-animal-rights types, but I personally discount based on complexity (or intelligence or whatever). That's not the same as discounting simpler creatures altogether - at least not when we're discussing, say, pigs. (At what point do you draw the line to start valuing creatures, by the way? Chimpanzees? Children? Superintelligent gods? Just curious, this isn't a reductio.)
4Qiaochu_Yuan11y
Right, but what's the discount rate? What does your discount rate imply is the net moral worth of all mosquitoes on the planet? All bacteria? I'm not sure where my line is either. It's hovering around pigs and dolphins at the moment.
0MugaSofer11y
I'm not sure what the discount rate is, which is largely why I asked if you were sure about where the line was. I mostly go off intuition for determining how much various species are worth, so if you throw scope insensitivity into the mix...
-1Eugine_Nier11y
Would you apply said discount rate intraspecies in addition to interspecies? By the way, one question I always wanted to ask a pro-animal-rights type: would you support a program for the extinction/reduction of the population of predatory animals on the grounds that they cause large amounts of unnecessary suffering to their prey?
6Lukas_Gloor11y
Yes. Assuming that prey populations are kept from skyrocketing (e.g. through the use of immunocontraception) since that too would result in large amounts of unnecessary suffering.
6davidpearce11y
Eugine, in answer to your question: yes. If we are committed to the well-being of all sentience in our forward light-cone, then we can't simultaneously conserve predators in their existing guise. (cf. http://www.abolitionist.com/reprogramming/index.html) Humans are not obligate carnivores; and the in vitro meat revolution may shortly make this debate redundant; but it's questionable whether posthuman superintelligence committed to the well-being of all sentience could conserve humans in their existing guise either.
2elharo11y
This is, sadly, not a hypothetical question. This is an issue wildlife managers face regularly. For example, do you control the population of Brown-headed Cowbirds in order to maintain or increase the population of Bell's Vireo or Kirtland's Warbler? The answer is not especially controversial. The only questions are which methods of predator control are most effective, and what unintended side effects might occur. However these are practical, instrumental questions, not moral ones.

Where this comes into play in the public is in the conflict between house cats and birds. In particular, the establishment of feral cat colonies causes conflicts between people who favor non-native, vicious but furry and cute predators and people who favor native, avian, non-pet species. Indeed, this is one of the problems I have with many animal rights groups such as the Humane Society. They're not pro-animal rights, just pro-pet species rights.

A true concern for animals needs to treat animals as animals, not as furry baby human substitutes. We need to value the species as a whole, not just the individual members; and we need to value their inherent nature as predators and prey. A Capuchin Monkey living in a zoo safe from the threat of Harpy Eagles leads a life as limited and restricted as a human living in Robert Nozick's Experience Machine. While zoos have their place, we should not seek to move all wild creatures into safe, sterile environments with no predators, pain, or danger any more than we would move all humans into isolated, AI-created virtual environments with no true interaction with reality.
3davidpearce11y
Elharo, I take your point, but surely we do want humans to enjoy healthy lives free from hunger and disease and safe from parasites and predators? Utopian technology promises similar blessings to nonhuman sentients too. Human and nonhuman animals alike typically flourish best when free-living but not "wild".
0elharo11y
I'm not quite sure what you're saying here. Could you elaborate or rephrase?
2KatieHartman11y
Why? Assuming that these environments are (or would be) on the whole substantially better on the measures that matter to the individual living in them, why shouldn't we?
1elharo11y
We're treading close to terminal values here. I will express some aesthetic preference for nature qua nature. However I also recognize a libertarian attitude that we should allow other individuals to live the lives they choose in the environments they find themselves to the extent reasonably possible, and I see no justification for anthropocentric limits on such a preference. Absent strong reasons otherwise, "do no harm" and "careful, limited action" should be the default position.

The best we can do for animals that don't have several millennia of adaptation to human companionship (i.e. not dogs, cats, and horses) is to leave them alone and not destroy their natural habitat. Where we have destroyed it, attempt to restore it as best we can, or protect what remains. Focus on the species, not the individual. We have neither the knowledge nor the will to protect individual, non-pet animals.

When you ask, "Assuming that these environments are (or would be) on the whole substantially better on the measures that matter to the individual living in them, why shouldn't we?" it's not clear to me whether you're referring to why we shouldn't move humans into virtual boxes or why we shouldn't move animals into virtual boxes, or both. If you're talking about humans, the answer is because we don't get to make that choice for other humans. I for one have no desire to live my life in a Nozick box, and will oppose anyone who tries to put me in one while I'm still capable of living a normal life. If you're referring to animals, the argument is similar though more indirect. Ultimately humans should not take it upon themselves to decide how another species lives. The burden of proof rests on those who wish to tamper with nature, not those who wish to leave it alone.

> We're treading close to terminal values here. I will express some aesthetic preference for nature qua nature.

That strikes me as inconsistent, assuming that preventing suffering/minimizing disutility is also a terminal value. In those terms, nature is bad. Really, really bad.

> I also recognize a libertarian attitude that we should allow other individuals to live the lives they choose in the environments they find themselves to the extent reasonably possible.

It seems arbitrary to exclude the environment from the cluster of factors that go into living "the lives they choose." I choose not to live in a hostile environment where things much larger than me are trying to flay me alive, and I don't think it's too much of a stretch to assume that most other conscious beings would choose the same if they knew they had the option.

> Absent strong reasons otherwise, "do no harm" and "careful, limited action" should be the default position. The best we can do for animals that don't have several millennia of adaptation to human companionship (i.e. not dogs, cats, and horses) is to leave them alone and not destroy their natural habitat.

Taken with this...

> That strikes me as inconsistent, assuming that preventing suffering/minimizing disutility is also a terminal value.

Two values being in conflict isn't necessarily inconsistent; it just means that you have to make trade-offs.

2elharo11y
An example of the importance of predators I happened across recently: "Safer Waters", Alisa Opar, Audubon, July-August 2013, p. 52. This is just one example of the importance of top-level predators for everything in the ecosystem. Nature is complex and interconnected. If you eliminate some species because you think they're mean, you're going to damage a lot more.
4nshepperd11y
This is an excellent example of how it's a bad idea to mess with ecosystems without really knowing what you're doing. Ideally, any intervention should be tested on some trustworthy (i.e. more-or-less complete and experimentally verified) ecological simulations to make sure it won't have any catastrophic effects down the chain. But of course it would be a mistake to conclude from this that keeping things as they are is inherently good.
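To make the "test it in simulation first" idea concrete, here is a minimal sketch (my own illustration under assumed parameters, not anything proposed in the thread) of the classic Lotka-Volterra predator-prey model. Removing the predators entirely sends the prey population growing without check, which is exactly the kind of downstream effect such a simulation is meant to catch:

```python
# Minimal Lotka-Volterra predator-prey sketch (illustrative only; the
# parameter values below are arbitrary assumptions, not fitted to any
# real ecosystem).

def simulate(prey0, pred0, steps=2000, dt=0.01,
             alpha=1.0, beta=0.1, delta=0.075, gamma=1.5):
    """Euler-integrate d(prey)/dt = alpha*prey - beta*prey*pred,
    d(pred)/dt = delta*prey*pred - gamma*pred."""
    prey, pred = float(prey0), float(pred0)
    for _ in range(steps):
        d_prey = (alpha * prey - beta * prey * pred) * dt   # births minus predation
        d_pred = (delta * prey * pred - gamma * pred) * dt  # food minus starvation
        prey, pred = max(prey + d_prey, 0.0), max(pred + d_pred, 0.0)
    return prey, pred

print(simulate(10, 5))  # predators present: populations keep cycling
print(simulate(10, 0))  # predators removed: prey grows exponentially, unchecked
```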
4KatieHartman11y
I'd just like to point out that (a) "mean" is a very poor descriptor of predation (neither its severity nor its connotations re: motivation do justice to reality), and (b) this use of "damage" relies on the use of "healthy" to describe a population of beings routinely devoured alive well before the end of their natural lifespans. If we "damaged" a previously "healthy" system wherein the same sorts of things were happening to humans, we would almost certainly consider it a good thing.
1Richard_Kennaway11y
If "natural lifespans" means what they would have if they weren't eaten, it's a tautology. If not, what does it mean? The shark's "natural" lifespan requires that it eats other creatures. Their "natural" lifespan requires that it does not.
0KatieHartman11y
Yes, I'm using "natural lifespan" here as a placeholder for "the typical lifespan assuming nothing is actively trying to kill you." It's not great language, but I don't think it's obviously tautological.

> The shark's "natural" lifespan requires that it eats other creatures. Their "natural" lifespan requires that it does not.

Yes. My question is whether that's a system that works for us.
2Richard_Kennaway11y
We can say, "Evil sharks!" but I don't feel any need to either exterminate all predators from the world, nor to modify them to graze on kelp. Yes, there's a monumental amount of animal suffering in the ordinary course of things, even apart from humans. Maybe there wouldn't be in a system designed by far future humans from scratch. But radically changing the one we live in when we hardly know how it all works -- witness the quoted results of overfishing shark -- strikes me as quixotic folly.
0KatieHartman11y
It strikes me as folly, too. But "Let's go kill the sharks, then!" does not necessarily follow from "Predation is not anywhere close to optimal." Nowhere have I (or anyone else here, unless I'm mistaken) argued that we should play with massive ecosystems now. I'm very curious why you don't feel any need to exterminate or modify predators, assuming it's likely to be something we can do in the future with some degree of caution and precision.
2Richard_Kennaway11y
That sort of intervention is too far in the future for me to consider it worth thinking about. People of the future can take care of it then. That applies even if I'm one of those people of the far future (not that I expect to be). Future-me can deal with it, present-me doesn't care or need to care what future-me decides. In contrast, smallpox, tuberculosis, cholera, and the like are worth exterminating now, because (a) unlike the beautiful big fierce animals, they're no loss in themselves, (b) it doesn't appear that their loss will disrupt any ecosystems we want to keep, and (c) we actually can do it here and now.
0Said Achmiz11y
There's something about this sort of philosophy that I've wondered about for a while. Do you think that deriving utility from the suffering of others (or, less directly, from activities that necessarily involve the suffering of others) is a valid value? Or is it intrinsically invalid? That is, if we were in a position to reshape all of reality according to our whim, and decided to satisfy the values of all morally relevant beings, would we also want to satisfy the values of beings that derive pleasure/utility from the suffering of others, assuming we could do so without actually inflicting disutility/pain on any other beings? And more concretely: in a "we are now omnipotent gods" scenario where we could, if we wanted to, create for sharks an environment where they could eat fish to their hearts' content (and these would of course be artificial fish without any actual capacity for suffering, unbeknownst to the sharks) — would we do so? Or would we judge the sharks' pleasure from eating fish to be an invalid value, and simply modify them to not be predators? The shark question is perhaps a bit esoteric; but if we substitute "psychopaths" or "serial killers" for "sharks", it might well become relevant at some future date.
2KatieHartman11y
I'm not sure what you mean by "valid" here - could you clarify? I will say that I think a world where beings are deriving utility from the perception of causing suffering without actually causing suffering isn't inferior to a world where beings are deriving the same amount of utility from some other activity that doesn't affect other beings, all else held equal. However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don't do anything that actually causes suffering.
0Said Achmiz11y
Sure. By "valid" I mean something like "worth preserving", or "to be endorsed as a part of the complex set of values that make up human-values-in-general". In other words, in the scenario where we're effectively omnipotent (for this purpose, at least), and have decided that we're going to go ahead and satisfy the values of all morally relevant beings -- are we going to exclude some values? Or exclude some beings on the basis of their values? For example: should we, in such a scenario, say: "we'll satisfy the values of all the humans, except the psychopaths/sharks/whoever; we don't find their values to be worth satisfying, so they're going to be excluded from this"? I would guess, for instance, that few people here would say: yeah, along with satisfying the values of all humans, let's also satisfy the values of all the paperclip maximizers. We don't find paperclip maximization to be a valid value, in that sense. So my question to you is where you stand on all of that. Are there invalid values? Would you, in fact, try to satisfy Clippy's values as well as those of humans? If not, how about sharks? Psychopaths? Etc.?

> I will say that I think a world where beings are deriving utility from the perception of causing suffering without actually causing suffering isn't inferior to a world where beings are deriving the same amount of utility from some other activity that doesn't affect other beings, all else held equal.

Ok. Actually, I could take that as an answer to at least some of my above questions, but if you want to expand a bit on what I ask in this post, that would be cool.

> However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don't do anything that actually causes suffering.

Well, sure. But let's keep this in the least convenient possible world, where such non-fundamental issues are somehow dealt with.
1elharo11y
There's a lot here, and I will try to address some specific points later. For now, I will say that personally I do not espouse utilitarianism for several reasons, so if you find me inconsistent with utilitarianism, no surprise there. Nor do I accept the complete elimination of all suffering and maximization of pleasure as a terminal value. I do not want to live, and don't think most other people want to live, in a matrix world where we're all drugged to our gills with maximal levels of dopamine and fed through tubes.

Eliminating torture, starvation, deprivation, deadly disease, and extreme poverty is good; but that's not the same thing as saying we should never stub our toe, feel some hunger pangs before lunch, play a rough game of hockey, or take a risk climbing a mountain. The world of pure pleasure and no pain, struggle, or effort is a dystopia, not a utopia, at least in my view.

I suspect that giving any one single principle exclusive value is likely a path to a boring world tiled in paperclips. It is precisely the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living in. There is no single principle, not even maximizing pleasure and minimizing pain, that does not lead to dystopia when it is taken to its logical extreme and all other competing principles are thrown out. We are complicated and contradictory beings, and we need to embrace that complexity; not attempt to smooth it out.
0davidpearce11y
Elharo, which is more interesting? Wireheading - or "the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living in"? Yes, I agree, the latter certainly sounds more exciting; but "from the inside", quite the reverse. Wireheading is always enthralling, whereas everyday life is often humdrum. Likewise with so-called utilitronium. To humans, utilitronium sounds unimaginably dull and monotonous, but "from the inside" it presumably feels sublime. However, we don't need to choose between aiming for a utilitronium shockwave and conserving the status quo. The point of recalibrating our hedonic treadmill is that life can be fabulously richer - in principle orders of magnitude richer - for everyone without being any less diverse, and without forcing us to give up our existing values and preference architectures. (cf. "The catechol-O-methyl transferase Val158Met polymorphism and experience of reward in the flow of daily life.": http://www.ncbi.nlm.nih.gov/pubmed/17687265) In principle, there is nothing to stop benign (super)intelligence from spreading such reward pathway enhancements across the phylogenetic tree.
2KatieHartman11y
I've heard this posed as a "gotcha" question for vegetarians/vegans. The socially acceptable answer is the one that caters to two widespread and largely unexamined assumptions: that extinction is just bad, always, and that nature is just generally good. If the questioned party responds in any other way, he or she can be written off right there. Who the hell thinks nature is a bad thing and genocide is a good thing? But once you get past the idea that nature is somehow inherently good and that ending any particular species is inherently bad, there's not really any way to justify allowing the natural world to exist the way it does if you can do something about it.
0Jiro11y
It's a "gotcha" question for vegetarians because vegetarians in the real world are seldom vegetarians in a vacuum; their vegetarianism is typically associated with, and based on, a cloud of other ideas that include respect for nature. In other words, it's not a "gotcha" because you would write off the vegetarian who believes it; it's a "gotcha" because believing it would undermine his own core (but illogical and unstated) motives.
0A1987dM11y
The former effect would generally be a heckuva lot smaller than the latter.
1shminux11y
I'm parsing this as follows: I don't have a good intuition on whose suffering matters, and unbounded utilitarianism is vulnerable to the Repugnant Conclusion, so I will pick an obvious threshold (humans) and decide not to care about other animals until and unless a reason to care arises. EDIT: the Schelling point for the caring threshold seems to be shifting toward progressively less intelligent (but still cute and harmless) species as time passes.
5Qiaochu_Yuan11y
Have you read The Narrowing Circle?
5shminux11y
I tried. But it's written in extreme Gwernian: well researched, but long, rambling and without a decent summary upfront. I skipped to the (also poorly written) conclusion, missing most of the arguments, and decided that it's not worth my time. The essay would be right at home as a chapter in some dissertation, though. Leaving aside the dynamics of the Schelling point, did the rest of my reply miss the mark?
3Qiaochu_Yuan11y
What I mostly got out of it is that there are two big ways in which the circle of things with moral worth has shrunk rather than grown throughout history: it shrunk to exclude gods, and it shrunk to exclude dead people. I'm not sure what your comment was intended to be, but if it was intended to be a summary of the point I was implicitly trying to make, then it's close enough.
1MugaSofer11y
... are you including chimpanzees there, by any chance?
0TheOtherDave11y
"Cute" I'll give you. "Harmless" I'm not sure about. That is, it's not in the least bit clear to me that I can reliably predict, from species S being harmful and cute, that the Schelling point you describe won't/hasn't shifted so as to include S on the cared-about side. For clarity: I make no moral claims here about any of this, and am uninterested in the associated moral claims, I'm just disagreeing with the bare empirical claim.
-2Eugine_Nier11y
I think it's simply a case of more animals moving into the harmless category as our technology improves.
0elharo11y
The value of a species is not merely the sum of the values of the individual members of the species. I feel a moral obligation to protect and not excessively harm the environment without necessarily feeling a moral obligation to prevent each gazelle from being eaten by a lion. There is value in nature that includes the predator-prey cycle.

The moral obligation to animals comes from their worth as animals, not from a utilitarian calculation to maximize pleasure and minimize pain. Animals living as animals in the wild (which is very different than animals living in a farm or as pets) will experience pleasure and pain; but even the ones too low on the complexity scale to feel pleasure and pain have value and should have a place to exist. I don't know if an Orange Roughy feels pain or pleasure or not; but either way it doesn't change my belief that we should stop eating them to avoid the extinction of the species.

The non-hypothetical, practical issue at hand is not do we make the world a better place for some particular species, but do we stop making it a worse one? Is it worth extinguishing a species so a few people can have a marginally tastier or more high status dinner? (whales, sharks, Patagonian Toothfish, etc.) Is it worth destroying a few dozen acres of forest containing the last habitat of a microscopic species we've never noticed so a few humans can play golf a little more frequently? I answer No, it isn't. It is possible for the costs of an action to non-human species to outweigh the benefits gained by humans of taking that action.
2Qiaochu_Yuan11y
Why? What worth? Where does this belief come from?
-2[anonymous]11y

I asked this before but don't remember if I got any good answers: I am still not convinced that I should care about animal suffering. Human suffering seems orders of magnitude more important. Also, meat is delicious and contains protein. What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian? Alternatively, how much would you be willing to pay me to stop eating meat?

> What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian?

Huh. I'm drawing a similar blank as if someone asked me to provide an argument for why the suffering of red-haired people should count equally to the suffering of black-haired people. Why would the suffering of one species be more important than the suffering of another? Yes, it is plausible that once your nervous system becomes simple enough, you no longer experience anything that we would classify as suffering, but then you said "human suffering is more important", not "there are some classes of animals that suffer less". I'm not sure I can offer a good argument against "human suffering is more important", because it strikes me as so completely arbitrary and unjustified that I'm not sure what the arguments for it would be.

4Qiaochu_Yuan11y
Because one of those species is mine? Historically, most humans have viewed a much smaller set of (living, mortal) organisms as being the set of (living, mortal) organisms whose suffering matters, e.g. human members of their own tribe. How would you classify these humans? Would you say that their morality is arbitrary and unjustified? If so, I wonder why they're so similar. If I were to imagine a collection of arbitrary moralities, I'd expect it to look much more diverse than this. Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now? If so, have you read gwern's The Narrowing Circle (which is the reason for the living and mortal qualifiers above)? There is something in human nature that cares about things similar to itself. Even if we're currently infected with memes suggesting that this something should be rejected insofar as it distinguishes between different humans (and I think we should be honest with ourselves about the extent to which this is a contingent fact about current moral fashions rather than a deep moral truth), trying to reject it as much as we can is forgetting that we're rebelling within nature. I care about humans because I think that in principle I'm capable of having a meaningful interaction with any human: in principle, I could talk to them, laugh with them, cry with them, sing with them, dance with them... I can't do any of these things with, say, a fish. When I ask my brain in what category it places fish, it responds "natural resources." And natural resources should be conserved, of course (for the sake of future humans), but I don't assign them moral value.

> Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now?

Yes! We know stuff that our ancestors didn't know; we have capabilities that they didn't have. If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere. Yes, given bounded resources, I'm going to protect me and my friends and other humans before worrying about other creatures, but that's not because nonhumans don't matter, but because in this horribly, monstrously unfair universe, we are forced to make tradeoffs. We do what we must, but that doesn't make it okay.

3Qiaochu_Yuan11y
I'm more than willing to agree that our ancestors were factually confused, but I think it's important to distinguish between moral and factual confusion. Consider the following quote from C.S. Lewis:

> I have met people who exaggerate the differences, because they have not distinguished between differences of morality and differences of belief about facts. For example, a man said to me, "Three hundred years ago people in England were putting witches to death. Was that what you call the Rule of Human Nature or Right Conduct?" But surely the reason we do not execute witches is that we do not believe there are such things. If we did -- if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbours or drive them mad or bring bad weather -- surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did? There is no difference of moral principle here: the difference is simply about matter of fact.

I think our ancestors were primarily factually, rather than morally, confused. I don't see strong reasons to believe that humans over time have made moral, as opposed to factual, progress, and I think attempts to convince me and people like me that we should care about animals should rest primarily on factual, rather than moral, arguments (e.g. claims that smarter animals like pigs are more psychologically similar to humans than I think they are).

> If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere.

If I write a computer program with a variable called isSuffering that I set to true, is it suffering?

> We do what we must, but that doesn't make it okay.

Cool. Then we're in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead), which is fine with me.
8Zack_M_Davis11y
(I have no idea how consciousness works, so in general, I can't answer these sorts of questions, but) in this case I feel extremely confident saying No, because the variable names in the source code of present-day computer programs can't affect what the program is actually doing.

> Cool. Then we're in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead), which is fine with me.

That doesn't follow if it turns out that preventing animal suffering is sufficiently cheap.
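The point about variable names is easy to see in code. Here is a minimal sketch (a hypothetical toy program of my own, not anyone's actual proposal): the name is a label for human readers only, and renaming it changes nothing about what the machine does.

```python
# Toy program (hypothetical): a variable's name carries meaning only for
# human readers; the program's behavior is identical either way.

is_suffering = True  # a semantically loaded name...
flag_42 = True       # ...and an arbitrary one

# Nothing in the machine's state or behavior depends on which label we chose.
print(is_suffering == flag_42)  # True
```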
3Rob Bensinger11y
I'm not sure moral intuitions divide as cleanly into factual and nonfactual components as this suggests. Learning new facts can change our motivations in ways that are in no way logically or empirically required of us, because our motivational and doxastic mechanisms aren't wholly independent. (For instance, knowing a certain fact may involve visualizing certain circumstances more concretely, and vivid visualizations can certainly change one's affective state.) If this motivational component isn't what you had in mind as the 'moral', nonfactual component of our judgments, then I don't know what you do have in mind.

> If I write a computer program with a variable called isSuffering that I set to true, is it suffering?

I don't think this is specifically relevant. I upvoted your 'blue robot' comment because this is an important issue to worry about, but 'that's a black box' can't be used as a universal bludgeon. (Particularly given that it defeats appeals to 'isHuman' even more thoroughly than it defeats appeals to 'isSuffering'.)

> Cool. Then we're in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead), which is fine with me.

I assume you're being tongue-in-cheek here, but be careful not to mislead spectators. 'Human life isn't perfect, ergo we are under no moral obligation to eschew torturing non-humans' obviously isn't sufficient here, so you need to provide more details showing that the threats to humanity warrant (provisionally?) ignoring non-humans' welfare. White slave-owners had plenty of white-person-specific problems to deal with, but that didn't exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.
1Qiaochu_Yuan11y
I think of moral confusion as a failure to understand your actual current or extrapolated moral preferences (introspection being unreliable and so forth).

> White slave-owners had plenty of white-person-specific problems to deal with, but that didn't exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.

Nope. I don't think this analogy holds water. White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can't do.
2Rob Bensinger11y
Sure. And humans are aware that fish are capable of all sorts of things that rocks and sea hydras can't do. I don't see a relevant disanalogy. (Other than the question-begging one 'fish aren't human'.)
4Qiaochu_Yuan11y
I guess that should've ended "...that fish can't do and that are important parts of how they interact with other white people." Black people are capable of participating in human society in a way that fish aren't. A "reversed stupidity is not intelligence" warning also seems appropriate here: I don't think the correct response to disagreeing with racism and sexism is to stop discriminating altogether in the sense of not trying to make distinctions between things.
4Rob Bensinger11y
I don't think we should stop making distinctions altogether either; I'm just trying not to repeat the mistakes of the past, or analogous mistakes. The straw-man version of this historical focus is to take 'the expanding circle' as a universal or inevitable historical progression; the more interesting version is to try to spot a pattern in our past intellectual and moral advances and use it to hack the system, taking a shortcut to a moral code that's improved far beyond contemporary society's hodgepodge of standards.

I think the main lesson from 'expanding circle' events is that we should be relatively cautious about assuming that something isn't a moral patient, unless we can come up with an extremely principled and clear example of a necessary condition for moral consideration that it lacks. 'Black people don't have moral standing because they're less intelligent than us' fails that criterion, because white children can be unintelligent and yet deserve to be treated well. Likewise, 'fish can't participate in human society' fails, because extremely pathologically antisocial or socially inept people (of the sort that can't function in society at all) still shouldn't be tortured. (Plus many fish can participate in their own societies. If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them? Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn't give either civilization the right to oppress the other.)

On the other hand, 'rocks aren't conscious' does seem to draw on a good and principled necessary condition -- anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum. So excluding completely unconscious things has the shape of a good policy. (Sure, it's a bit of an e…
-1Eugine_Nier11y
What about unconscious people? So what's your position on abortion?
1Rob Bensinger11y
I don't know why you got a down-vote; these are good questions.

> What about unconscious people?

I'm not sure there are unconscious people. By 'unconscious' I meant 'not having any experiences'. There's also another sense of 'unconscious' in which people are obviously sometimes unconscious -- whether they're awake, aware of their surroundings, etc. Being conscious in that sense may be sufficient for 'bare consciousness', but it's not necessary, since people can experience dreams while 'unconscious'. Supposing people do sometimes become truly and fully unconscious, I think this is morally equivalent to dying. So it might be that in a loose sense you die every night, as your consciousness truly 'switches off' -- or, equivalently, we could say that certain forms of death (like death accompanying high-fidelity cryonic preservation) are in a loose sense a kind of sleep. You say /pəˈteɪtəʊ/, I say /pəˈteɪtoʊ/. The moral rights of dead or otherwise unconscious people would then depend on questions like 'Do we have a responsibility to make conscious beings come into existence?' and 'Do we have a responsibility to fulfill people's wishes after they die?'. I'd lean toward 'yes' on the former, 'no but it's generally useful to act as though we do' on the latter.

> So what's your position on abortion?

Complicated. At some stages the embryo is obviously unconscious, for the same reason some species are obviously unconscious. It's conceivable that there's no true consciousness at all until after birth -- analogously, it's possible all non-humans are zombies -- but at this point I find it unlikely. So I think mid-to-late-stage fetuses do have some moral standing -- perhaps not enough for painlessly killing them to be bad, but at least enough for causing them intense pain to be bad. (My view of chickens is similar; suffering is the main worry rather than death.) The two cases are also analogous in that some people have important health reasons for aborting or for eating meat.
-1Qiaochu_Yuan11y
The original statement of my heuristic for deciding moral worth contained the phrase "in principle" which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they'd still be capable of participating in human society. But even in possible worlds fairly different from this one, fish still aren't so capable.

I also think the reasoning in this example is bad for general reasons, namely moral heuristics don't behave like scientific theories: falsifying a moral hypothesis doesn't mean it's not worth considering. Heuristics that sometimes fail can still be useful, and in general I am skeptical of people who claim to have useful moral heuristics that don't fail on weird edge cases (sufficiently powerful such heuristics should constitute a solution to friendly AI).

> Plus many fish can participate in their own societies.

I'm skeptical of the claim that any fish have societies in a meaningful sense. Citation?

> If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them?

If they're intelligent enough we can still trade with them, and that's fine.

> Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn't give either civilization the right to oppress the other.

I don't think this is analogous to the above case. The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.

Yes: not capturing complexity of value. Again, morality doesn't behave like science. Looking for general laws is not obviously a good methodology, and in fact I'm pretty sure it's a bad methodology.
3Rob Bensinger11y
'Your theory isn't complex enough' isn't a reasonable objection, in itself, to a moral theory. Rather, 'value is complex' is a universal reason to be less confident about all theories. (No theory, no matter how complex, is immune to this problem, because value might always turn out to be even more complex than the theory suggests.) To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it's more complicated is obviously wrong, because knowing that value is complex tells us nothing about how it is complex. In fact, even though we know that value is complex, a complicated theory that accounts for the evidence will almost always get more wrong than a simple theory that accounts for the same evidence -- a more detailed map can be wrong about the territory in more ways. Interestingly, in all the above respects human morality does behave like any other empirical phenomenon. The reasons to think morality is complex, and the best methods for figuring out exactly how it is complex, are the same as for any complex natural entity. "Looking for general laws" is a good idea here for the same reason it's a good idea in any scientific endeavor; we start by ruling out the simplest explanations, then move toward increasing complexity as the data demands. That way we know we're not complicating our theory in arbitrary or unnecessary ways. Knowing at the outset that storms are complex doesn't mean that we shouldn't try to construct very simple predictive and descriptive models of weather systems, and see how close our simulation comes to getting it right. Once we have a basically right model, we can then work on incrementally increasing its precision. As for storms, so for norms. The analogy is particularly appropriate because in both cases we seek an approximation not only as a first step in a truth-seeking research program, but also as a behavior-guiding heuristic for making real-life decisions under uncertainty.
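The "a more detailed map can be wrong about the territory in more ways" point can be illustrated with a toy model-fitting exercise (my own sketch; the data and polynomial degrees here are invented for illustration, not drawn from the thread): a complex model fit to data from a simple process tends to chase the noise and do worse on held-out points.

```python
import numpy as np

# Toy sketch: a "simple theory" (line) vs. a "complex theory" (degree-7
# polynomial) fit to the same noisy data from a simple underlying process.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 20)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, x.size)  # the "territory" is linear plus noise

x_train, y_train = x[::2], y[::2]   # fit each theory on half the points...
x_test, y_test = x[1::2], y[1::2]   # ...and check it against the held-out half
for degree in (1, 7):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: held-out error {mse:.3f}")  # the simple fit typically wins
```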
2wedrifid11y
If I am sure that value is complex and I am given two theories, one of which is complex and the other simple, then I can be sure that the simple one is wrong. The other one is merely probably wrong (as most such theories are). "Too simple" is a valid objection if the premise "Not simple" is implied.
0Rob Bensinger11y
That's assuming the two theories are being treated as perfected Grand Unified Theories Of The Phenomenon. If that's the case, then yes, you can simply dismiss a purported Finished Product that is too simple, without even bothering to check on how accurate it is first. But we're talking about preliminary hypotheses and approximate models here. If your first guess adds arbitrary complications just to try to look more like you think the Final Theory will someday appear, you won't learn as much from the areas where your map fails. 'Value is complex' is compatible with the utility of starting with simple models, particularly since we don't yet know in what respects it is complex.
0Qiaochu_Yuan11y
Obviously that's not what I'm suggesting. What I'm suggesting is that it's both more complicated and that this complication is justified from my perspective because it captures my moral intuitions better. What data?
2A1987dM11y
Then again, the same applies to scientific theories, so long as the old now-falsified theory is a good approximation to the new currently accepted theory within certain ranges of conditions (e.g. classical Newtonian physics if you're much bigger than an atom and much slower than light).
1Rob Bensinger11y
Isn't a quasi-Aristotelian notion of the accidental/essential or contingent/necessary properties of different species a rather metaphysically fragile foundation for you to base your entire ethical system on? We don't know whether the unconscious / conscious distinction will end up being problematized by future research, but we do already know that the distinctions between taxonomical groupings can be very fuzzy -- and are likely to become far fuzzier as we take more control of our genetic future. We also know that what's normal for a certain species can vary wildly over historical time.

'In principle' we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans, but because our prototype of a fish is dumb, while our prototype of a human is smart, we think of smart fish and dumb humans as aberrant deviations from the telos (proper function) of the species. It seems damningly arbitrary to me. Why should torturing sentient beings be OK in contexts where the technology for improvement is (or 'feels'?) distant, yet completely intolerable in contexts where this external technology is more 'near' on some metric, even if in both cases there is never any realistic prospect of the technology being deployed here? I don't find it implausible that we currently use prototypes as a quick-and-dirty approximation, but I do find it implausible that on reflection, our more educated and careful selves would continue to found the human enterprise on essentialism of this particular sort.

Actually, now that you bring it up, I'm surprised by how similar the two are. 'Heuristics' by their very nature are approximations; if we compare them to scientific models that likewise approximate a phenomenon, we see in both cases that an occasional error is permissible. My objection to the 'only things that can intelligently socialize with humans matter' heuristic isn't that it gets things wrong occasionally; it's that it almost always yields the in…
5Qiaochu_Yuan11y
I don't think most fish have complicated enough minds for this to be true. (By contrast, I think dolphins might, and this might be a reason to care about dolphins.) You're still using a methodology that I think is suspect here. I don't think there's good reasons to expect "everything that feels pain has moral value, period" to be a better moral heuristic than "some complicated set of conditions singles out the things that have moral value" if, upon reflection, those conditions seem to be in agreement with what my System 1 is telling me I actually care about (namely, as far as I can tell, my System 1 cares about humans in comas but not fish). My System 2 can try to explain what my System 1 cares about, but if those explanations are bad because your System 2 can find implications they have which are bad, then oh well: at the end of the day, as far as I can tell, System 1 is where my moral intuitions come from, not System 2. Your intuition, not mine. System 1 doesn't know what a biological human is. I'm not using "human" to mean "biological human." I'm using "human" to mean "potential friend." Posthumans and sufficiently intelligent AI could also fall in this category, but I'm still pretty sure that fish don't. I actually only care about the second principle. While getting what I regard to be the wrong answers with respect to most animals. A huge difference between morality and science is that the results of properly done scientific experiments can be relatively clear: it can be clear to all observers that the experiment provides evidence for or against some theory. Morality lacks an analogous notion of moral experiment. (We wouldn't be having this conversation if there were such a thing as a moral experiment; I'd be happy to defer to the evidence in that case, the same as I would in any scientific field where I'm not a domain expert.)
6Rob Bensinger11y
Thanks for fleshing out your view more! It's likely that previously I was being a bit too finicky with how you were formulating your view; I wanted to hear you come out and express the intuition more generally so I could see exactly where you thought the discontinuity lay, and I think you've done a good job of that now. Any more precision would probably be misleading, since the intuition itself is a bit amorphous: A lot of people think of their pets as friends and companions in various ways, and it's likely that no simple well-defined list of traits would provide a crisp criterion for what 'friendship' or 'potential friendship' means to you. It's just a vague sense that morality is contingent on membership in a class of (rough) social equals, partners, etc. There is no room in morality for a hierarchy of interests -- everything either deserves (roughly) all the rights, or none of them at all.

The reliance on especially poorly-defined and essentializing categories bothers me, but I'll mostly set that aside. I think the deeper issue here is that our intuitions do allow for hierarchies, and for a more fine-grained distribution of rights based on the different faculties of organisms. It's not all-or-nothing. Allowing that it's not all-or-nothing lets us escape most of your view's problems with essentialism and ad-hoc groupings -- we can allow that there is a continuum of different moral statuses across individual humans for the same reasons, and in the same ways, that there is a continuum across species.

For instance, if it were an essential fact that our species divided into castes, one of which just couldn't be a 'friend' or socialize with the other -- a caste with permanent infant-like minds, for instance -- we wouldn't be forced into saying that this caste either has 100% of our moral standing, or 0%. Thinking in terms of a graded scale of moral responsibility gives us the flexibility needed to adapt to an unpredictable environment that frequently lacks sharp lines be…
2Qiaochu_Yuan11y
This is a good point. I'll have to think about this.
0[anonymous]11y
This is quite a good post, thanks for taking the time to write it. You've said before that you think vegetarianism is the morally superior option. While you've done a good job here of defending the coherence or possibility of the moral significance of animal suffering, would you be willing to go so far as to defend such moral significance simpliciter? I ask in part because I don't think the claim that we ought to err on the side of disjunctivity as I think you construe it (where this involves something like a proportional distribution of moral worth on the basis of a variety of different merits and relationships) is morally safer than operating as if there were a hard and flat moral floor. Operating on your basis we might be less likely to exclude from moral consideration those that ought to be included, but we will be more likely to distribute moral value unevenly where it should be evenly distributed. We've historically had both problems, and I don't know that one or the other is necessarily the more disastrous. Exclusion has led to some real moral abominations (the holocaust, I guess), but uneven distribution where even distribution is called for has led to some long-standing and terribly unjust political traditions (feudalism, say). EDIT: I should add, and not at all by way of criticism, that for all the pejorative aimed at Aristotelian thinking in this last exchange, your conclusion (excluding the safety bit) is strikingly Aristotelian.
1Rob Bensinger11y
Thanks, hen! My primary argument is indeed that if animals suffer, that is morally significant — not that this thesis is coherent or possible, but that it's true. My claim is that although humans are capable both of suffering and of socializing, and both of these have ethical import, the import of suffering is not completely dependent on the import of socializing, but has some valence in its own right. This allows us to generalize the undesirability of suffering both to sapient nonsocial sentient beings and to nonsapient nonsocial sentient beings, independent of whether they would be easy, hard, or impossible to modify into a social being. It's hard to talk about this in the abstract, so maybe you should say more about what you're worried about, and (ideally) about some alternative that avoids the problem. It sounds like you're suggesting that if we assert that humans have a richer set of rights than non-humans — if we allow value to admit of many degrees and multiple kinds — then we may end up saying that some groups of humans intrinsically deserve more rights than others, in a non-meritocratic way. Is that your worry?
0[anonymous]11y
Thanks for filling that out. Could I ask you to continue with a defense of this premise in particular? (You may have done this already, and I may have missed it. If so, I'd be happy to be pointed in the right direction).

My worry is with both meritocratic and non-meritocratic unevenness. You said earlier that Qiaochu's motivation for excluding animals from moral consideration was based on a desire for simplicity. I think this is right, but could use a more positive formulation: I think on the whole people want this simplicity because they want to defend the extremely potent modern intuition that moral hierarchy is unqualifiedly wrong. At least part of this idea is to leave our moral view fully determined by our understanding of humanity: we owe to every human (or relevantly human-like thing) the moral consideration we take ourselves to be owed. Most vegetarians, I would think, deploy such a flat moral floor (at sentience) for defending the rights of animals.

So one view Qiaochu was attacking (I think) by talking about the complexity of value is the view that something so basic as sentience could be the foundation for our moral floor. Your response was not to argue for sentience as such a basis, but to deny the moral floor in favor of a moral stairway, thereby eliminating the absurdity of regarding chickens as full-fledged people. The reason this might be worrying is that our understanding of what it is to be human, or what kinds of things are morally valuable, now fails to determine our ascription of moral worth. So we admit the possibility of distributing moral worth according to intelligence, strength, military power, wealth, health, beauty, etc. and thereby denying to many people who fall short in these ways the moral significance we generally think they're owed.

It was a view very much along these lines that led Aristotle to posit that some human beings, incapable of serious moral achievement for social or biological reasons, were natural slaves. He did not s…
0ialdabaoth11y
The term you are looking for here is 'person'. The debate you are currently having is about what creatures are persons. The following definitions aid clarity in this discussion:

* Animal - a particular form of life that has evolved on earth; most animals are mobile, multicellular, and respond to their environment (but this is not universally necessary or sufficient).
* Human - a member of the species Homo sapiens, a particular type of hairless ape.
* Person - A being which has recognized agency, and (in many moral systems) specific rights.

Note that separating 'person' from 'human' allows you to recognize the possibility that all humans are not necessarily persons in all moral systems (i.e.: apartheid regimes and ethnic cleansing schemas certainly treat many humans as non-persons; certain cultures treat certain genders as effectively non-persons, etc.). If this is uncomfortable for you, explore the edges of it until your morality restabilizes (example: brain-dead humans are still human, but are they persons?).
0Rob Bensinger11y
Just keep adding complexity until you get an intelligent socializer. If an AI can be built, and prosthetics can be built, then a prosthetic that confers intelligence upon another system can be built. At worst, the fish brain would just play an especially small or especially indirect causal role in the rest of the brain's functioning. You are deferring to evidence; I just haven't given you good evidence yet that you do indeed feel sympathy for non-human animals (e.g., I haven't bombarded you with videos of tormented non-humans; I can do so if you wish), nor that you're some sort of exotic fish-sociopath in this regard. If you thought evidence had no bearing on your current moral sentiments, then you wouldn't be asking me for arguments at all. However, because we're primarily trying to figure out our own psychological states, a lot of the initial evidence is introspective -- we're experimenting on our own judgments, testing out different frameworks and seeing how close they come to our actual values. (Cf. A Priori.)
0Qiaochu_Yuan11y
But in that case I would be tempted to ascribe moral value to the prosthetic, not the fish. Agreed, but this is why I think the analogy to science is inappropriate.
2Rob Bensinger11y
I doubt there will always be a fact of the matter about where an organism ends and its prosthesis begins. My original point here was that we can imagine a graded scale of increasingly human-socialization-capable organisms, and it seems unlikely that Nature will be so kind as to provide us with a sharp line between the Easy-To-Make-Social and the Hard-To-Make-Social. We can make that point by positing prosthetic enhancements of increasing complexity, or genetic modifications to fish brain development, or whatever you please. Fair enough! I don't have a settled view on how much moral evidence should be introspective v. intersubjective, as long as we agree that it's broadly empirical.
4TheOtherDave11y
With respect to this human-socialization-as-arbiter-of-moral-weight idea, are you endorsing the threshold which human socialization currently demonstrates as the important threshold, or the threshold which human socialization demonstrates at any given moment? For example, suppose species X is on the wrong side of that line (however fuzzy the line might be). If instead of altering Xes so they were better able to socialize with unaltered humans and thereby had, on this view, increased moral weight, I had the ability to increase my own ability to socialize with X, would that amount to the same thing?
0TheOtherDave11y
Thinking about this... while I sympathize with the temptation, it does seem to me that the same mindset that leads me in this direction also leads me to ascribe moral values to human societies, rather than to individual humans. I'm not yet sure what I want to do with that.
0[anonymous]11y
It might be worth distinguishing a genetic condition on X from a constituting condition on X. So human society is certainly necessary to bring about the sapience and social capacities of human beings, but if you remove the human from the society once they've been brought up in the relevant way, they're no less capable of social and sapient behavior. On the other hand, the fish-prosthetic is part of what constitutes the fish's capacity for social and sapient behavior. If the fish were removed from it, it would lose those capacities. I think you could plausibly say that the prosthetic should be considered part of the basis for the moral worth of the fish (at the expense of the fish on its own), but refuse to say this about human societies (at the expense of the individual human) in light of this distinction.
0TheOtherDave11y
Hm. Well, I agree with considering the prosthetic part of the basis of the worth of the prosthetically augmented fish, as you suggest. And while I think we underestimate the importance of a continuing social framework for humans to be what we are, even as adults, I will agree that there's some kind of meaningful threshold to be identified such that I can be removed from human society without immediately dropping below that threshold, and there's an important difference (if perhaps not strictly a qualitative one) between me and the fish in this respect. So, yeah, drawing this distinction allows me to ascribe moral value to individual adult humans (though not to very young children, I suppose), rather than entirely to their societies, even while embracing the general principle here. Fair enough.
2Said Achmiz11y
I've seen that C.S. Lewis quote before, and it seems to me quite mistaken. In it, Lewis seems to suggest that executing a witch, per se, is what we consider bad. But that's wrong. What was bad about witch hunts was:

1. People were executed without anything resembling solid evidence of their guilt -- which of course could not possibly have been obtained, seeing as how they were not guilty and the crimes they were accused of were imaginary; but my point is that the "trial" process was horrifically unjust and monstrously inhumane (torture to extract confessions, etc.). If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.

2. Punishments were terribly inhumane -- burning alive? Come now. Even if we thought witches existed today, and even if we thought the death penalty was an appropriate punishment, we'd carry it out in a more humane manner, and certainly not as a form of public entertainment (again, one would hope; at least, our moral standards today dictate thus).

So differences of factual belief are not the main issue here. The fact that, when you apply rigorous standards of evidence and fair prosecution practices to the witch issue, witchcraft disappears as a crime, is instructive (i.e. it indicates that there's no such crime in the first place), but we shouldn't therefore conclude that not believing in witches is the relevant difference between us and the Inquisition.
0MugaSofer11y
Considering that people seemed to think this was the best way to find witches, (1) still seems like a factual confusion. (2) was based on a Bible quote, I think. The state hanged witches.
0Qiaochu_Yuan11y
We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury? If you think humanity as a whole has made substantial moral progress throughout history, what's driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don't have an analogous story about moral progress. How do you distinguish the current state of affairs from "moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral"?
3A1987dM11y
Who knows what kind of things a real witch could do to an executioner, for that matter?
2Said Achmiz11y
There is a difference between "we should take precautions to make sure the witch doesn't blanket the courtroom with fireballs or charm the jury and all officers of the court; but otherwise human rights apply as usual" and "let's just burn anyone that anyone has claimed to be a witch, without making any attempt to verify those claims, confirm guilt, etc." Regardless of what you think would happen in practice (fear makes people do all sorts of things), it's clear that our current moral standards dictate behavior much closer to the former end of that spectrum. At the absolute least, we would want to be sure that we are executing the actual witches (because every accused person could be innocent and the real witches could be escaping justice), and, for that matter, that we're not imagining the whole witchcraft thing to begin with! That sort of certainty requires proper investigative and trial procedures.

That's two questions ("what drives moral progress" and "how can you distinguish moral progress from a random walk"). They're both interesting, but the former is not particularly relevant to the current discussion. (It's an interesting question, however, and Yvain makes some convincing arguments at his blog [sorry, don't have link to specific posts atm] that it's technological advancement that drives what we think of as "moral progress".)

As for how I can distinguish it from a random walk -- that's harder. However, my objection was to Lewis's assessment of what constitutes the substantive difference between our moral standards and those of medieval witch hunters, which I think is totally mistaken. I do not need even to claim that we've made moral progress per se to make my objection.
2Said Achmiz11y
No they don't. Are you saying it's not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things? In that case, I can respond that the neural correlates of human pain and human suffering might not be bad when implemented in brains that differ from human brains in certain relevant ways (Edit: and would therefore not actually qualify as pain and suffering under your new definition).
2Raemon11y
There's a difference between "it's possible to construct a mind" and "other particular minds are likely to be constructed a certain way." Our minds were built by the same forces that built the other minds we know of. We should expect there to be similarities. (I also would define it, not in terms of "pain and suffering" but "preference satisfaction and dissatisfaction". I think I might consider "suffering" as dissatisfaction, by definition, although "pain" is more specific and might be valuable for some minds.)
0A1987dM11y
Such as human masochists.
0Said Achmiz11y
I agree that expecting similarities is reasonable (although which similarities, and to what extent, is the key followup question). I was objecting to the assertion of (logical?) necessity, especially since we don't even have so much as a strong certainty. I don't know that I'm comfortable with identifying "suffering" with "preference dissatisfaction" (btw, do you mean by this "failure to satisfy preferences" or "antisatisfaction of negative preferences"? i.e. if I like playing video games and I don't get to play video games, am I suffering? Or am I only suffering if I am having experiences which I explicitly dislike, rather than simply an absence of experiences I like? Or do you claim those are the same thing?).
2TheOtherDave11y
I can't speak for Raemon, but I would certainly say that the condition described by "I like playing video games and am prohibited from playing video games" is a trivial but valid instance of the category /suffering/. Is the difficulty that there's a different word you'd prefer to use to refer to the category I'm nodding in the direction of, or that you think the category itself is meaningless, or that you don't understand what the category is (reasonably enough; I haven't provided nearly enough information to identify it if the word "suffering" doesn't reliably do so), or something else? I'm usually indifferent to semantics, so if you'd prefer a different word, I'm happy to use whatever word you like when discussing the category with you.
0Said Achmiz11y
That one. Also, what term we should use for which categories of things, and whether I know what you're talking about, depends on what claims are being made... I was objecting to Zack_M_Davis's claim, which I take to be something like:

"We humans have categories of experiences called 'pain' and 'suffering', which we consider to be bad. These things are implemented in our brains somehow. If we take that implementation and put it in another kind of brain (alternatively: if we find some other kind of brain where the same or similar implementation is present), then this brain is also necessarily having the same experiences, and we should consider them to be bad also."

or...

"We humans have categories of experiences called 'pain' and 'suffering', which we consider to be bad. These things are implemented in our brains somehow. If we can sensibly define these phenomena in an implementation-independent way, then if any other kind of brain implements these phenomena in some way that fits our defined category, we should consider them to be bad also."

I don't think either of those claims is justified. Do you think they are? If you do, I guess we'll have to work out what you're referring to when you say "suffering", and whether that category is relevant to the above issue. (For the record, I, too, am less interested in semantics than in figuring out what we're referring to.)
0TheOtherDave11y
There are a lot of ill-defined terms in those claims, and depending on how I define them I either do or don't. So let me back up a little.

Suppose I prefer that brain B1 not be in state S1. Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1. The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.

So if you mean taking the implementation of pain and suffering (S1) from our brains (B1) and putting/finding the same or similar (C is high) implementations (S2) in other brains (B2), then yes, I think that if (S1) pain and suffering are bad (I antiprefer them) for us (B1), that's strong but not overwhelming evidence that (S2) pain and suffering are bad (I antiprefer them) for others (B2).

I don't actually think understanding more clearly what we mean by pain and suffering (either S1 or S2) is particularly important here. I think the important term is C. As long as C is high -- that is, as long as we really are confident that the other brain has a "same or similar implementation", as you say, along salient dimensions (such as manifesting similar subjective experience) -- then I'm pretty comfortable saying I prefer the other brain not experience pain and suffering. And if (S2,B2) is "completely identical" to (S1,B1), I'm "certain" I prefer B2 not be in S2.

But I'm not sure that's actually what you mean when you say "same or similar implementation." You might, for example, mean that they have anatomical points of correspondence, but you aren't confident that they manifest similar experience, or something else along those lines. In which case C gets lower, and I become uncertain about my preferences with respect to (B2,S2).
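A toy rendering of the B1/S1/C framing above, purely for illustration. The linear scaling of the carried-over antipreference by the confidence C is an assumption of this sketch, not something the comment commits to:

```python
# Toy model: how strongly I prefer that brain B2 not be in state S2,
# given my antipreference toward B1 being in S1 and my confidence C
# (between 0 and 1) that S2 relevantly resembles S1.
# The linear discounting is an illustrative assumption only.

def carried_over_antipreference(base_antipreference: float, c: float) -> float:
    return base_antipreference * c

# C = 1.0: (S2, B2) judged "completely identical" to (S1, B1).
print(carried_over_antipreference(1.0, 1.0))  # 1.0 -- full carry-over
# C = 0.2: mere anatomical correspondence, unclear subjective experience.
print(carried_over_antipreference(1.0, 0.2))  # 0.2 -- heavily discounted
```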
2Said Achmiz11y
Is brain B1 your brain in this scenario? Or just... some brain? I ask because I think the relevant question is whether the person whose brain it is prefers that brain Bx be or not be in state Sx, and we need to first answer that, and only then move on to what our preferences are w.r.t. other beings' brain states.

Anyway, it seemed to me like the claim that Zack_M_Davis was making was about the case where certain neural correlates (or other sorts of implementation details) of what we experience as "pain" and "suffering" (which, for us, might usefully be operationalized as "brain states we prefer not to be in") are found in other life-forms, and we thus conclude that a) these beings are therefore also experiencing "pain" and "suffering" (i.e. are having the same subjective experiences), and b) these beings, also, have antipreferences about those brain states... Those conclusions are not entailed by the premises. We might expect them to be true for evolutionarily related life-forms, but my objection was to the claim of necessity.

Or, he could have been making the claim that we can usefully describe the category of "pain" and/or "suffering" in ways that do not depend on neural correlates or other implementation details (perhaps this would be a functional description of some sort, or a phenomenological one; I don't know), and that if we then discover phenomena matching that category in other life-forms, we should conclude that they are bad. I don't think that conclusion is justified either... or rather, I don't think it's instructive. For instance, Alien Species X might have brain states that they prefer not to be in, but their subjective experience associated with those brain states bears no resemblance in any way to anything that we humans experience as pain or suffering: not phenomenologically, not culturally, not neurally, etc. The only justification for referring to these brain states as "suffering" is by definition. And we all know that arguing "by definition" gets us nowhere.
0TheOtherDave11y
My brain is certainly an example of a brain that I prefer not be in pain, though not the only example. My confidence that brain B manifests a mind that experiences pain and suffering given certain implementation (or functional, or phenomenological, or whatever) details depends a lot on those details. As does my confidence that B's mind antiprefers the experiential correlates of those details. I agree that there's no strict entailment here, though, "merely" evidence. That said, mere evidence can get us pretty far. I am not inclined to dismiss it.
1Lukas_Gloor11y
I'd do it that way. It doesn't strike me as morally urgent to prevent people with pain asymbolia from experiencing the sensation of "pain". (Subjects report that they notice the sensation of pain, but they claim it doesn't bother them.) I'd define suffering as wanting to get out of the state you're in. If you're fine with the state you're in, it is not what I consider to be suffering.
0Said Achmiz11y
Ok, that seems workable to a first approximation. So, a question for anyone who both agrees with that formulation and thinks that "we should care about the suffering of animals" (or some similar view): Do you think that animals can "want to get out of the state they're in"?
1Raemon11y
Yes? This varies from animal to animal. There's a fair amount of research/examination into which animals appear to do so, some of which is linked elsewhere in this discussion. (At least some of it was linked in response to a statement about fish.)
6Lukas_Gloor11y
On why the suffering of one species would be more important than the suffering of another: Does that also apply to race and gender? If not, why not? Assuming a line-up of ancestors, always mother and daughter, from Homo sapiens back to the common ancestor of humans and chickens and forward in time again to modern chickens, where would you draw the line? A common definition of species in biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter differently from her mother? Is that really the criterion you want to use for making your decisions? And does it at all bother you that racists or sexists can use an analogous line of defense?
1Qiaochu_Yuan11y
I feel psychologically similar to humans of different races and genders but I don't feel psychologically similar to members of most different species. Uh, no. System 1 doesn't know what a species is; that's just a word System 2 is using to approximately communicate an underlying feeling System 1 has. But System 1 knows what a friend is. Other humans can be my friends, at least in principle. Probably various kinds of posthumans and AIs can as well. As far as I can tell, a fish can't, not really. This general argument of "the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad" strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn't have this property? Also no. I think current moral fashion is prejudiced against prejudice. Racism and sexism are not crazy or evil points of view; historically, they were points of view held by many sane humans who would have been regarded by their peers as morally upstanding. Have you read What You Can't Say?
1TheOtherDave11y
I should add to this that even if I endorse what you call "prejudice against prejudice" here -- that is, even if I agree with current moral fashion that racism and sexism are not as good as their absence -- it doesn't follow that because racists or sexists can use a particular argument A as a line of defense, there's therefore something wrong with A. There are all sorts of positions which I endorse and which racists and sexists (and Babyeaters and Nazis and Sith Lords and...) might also endorse.
0Lukas_Gloor11y
Actually, I do. I try to rely on System 1 as little as possible when it comes to figuring out my terminal value(s). One reason for that, I guess, is that at some point I started out with the premise that I don't want to be the sort of person that would have been racist or sexist in previous centuries. If you don't share that premise, there is no way for me to show that you're being inconsistent -- I acknowledge that.
-3Qiaochu_Yuan11y
Wow! So you've solved friendly AI? Eliezer will be happy to hear that.
-2MugaSofer11y
I'm pretty sure Eliezer already knew our brains contained the basis of morality.
2Kaj_Sotala11y
I should probably clarify - when I said that valuing humans over animals strikes me as arbitrary, I'm saying that it's arbitrary within the context of my personal moral framework, which contains no axioms from which such a distinction could be derived. All morality is ultimately arbitrary and unjustified, so that's not really an argument for or against any moral system. Internal inconsistencies could be arguments, if you value consistency, but your system does seem internally consistent. My original comment was meant more as an explanation of my initial reaction to your question than as anything that would be convincing on logical grounds, though I did also assign some probability to it possibly being convincing on non-logical grounds. (Our moral axioms are influenced by what other people think, and somebody expressing their disagreement with a moral position has some chance of weakening another person's belief in that position, regardless of whether that effect is "logical".)
1Qiaochu_Yuan11y
I've been meaning to write a post about how I think it's a really, really bad idea to think about morality in terms of axioms. This seems to be a surprisingly (to me) common habit among LW types, especially since I would have thought it was a habit the metaethics sequence would have stomped out. (You shouldn't regard it as a strength of your moral framework that it can't distinguish humans from non-human animals. That's evidence that it isn't capable of capturing complexity of value.)
7Kaj_Sotala11y
I agree that thinking about morality exclusively in terms of axioms in a classical logical system is likely to be a rather bad idea, since it makes one underestimate the complexity of morality and the strength of non-logical influences, and overestimate the extent to which morality resembles a system of classical logic in general. But I'm not sure it's that problematic as long as you keep in mind that "axioms" is really just shorthand for something like "moral subprograms" or "moral dynamics".

I did always read the metaethics sequence as establishing the existence of something similar-enough-to-axioms-that-we-might-as-well-use-the-term-axioms-as-shorthand-for-them, with e.g. No Universally Compelling Arguments and Created Already In Motion arguing that you cannot convince a mind about the correctness of some action unless its mind contains a dynamic which reacts to your argument in the way you wish -- in other words, unless your argument builds on things that the mind's decision-making system already cares about, and which could be described as axioms when composing a (static) summary of the mind's preferences.

I'm not really sure what you mean here. For one, I didn't say that my moral framework can't distinguish humans and non-humans -- I do e.g. take a much more negative stance on killing humans than animals, because killing humans would have a destabilizing effect on society and people's feelings of safety, which would contribute to the creation of much more suffering than killing animals would. Also, whether or not my personal moral framework can capture complexity of value seems irrelevant -- CoV is just the empirical thesis that people in general tend to care about a lot of complex things. My personal consciously-held morals are what I currently want to consciously focus on, not a description of what others want, nor something that I'd program into an AI.
1Vladimir_Nesov11y
Well, I don't think I should care what I care about. The important thing is what's right, and my emotions are only relevant to the extent that they communicate facts about what's right. What's right is too complex, both in definition and consequentialist implications, and neither my emotions nor my reasoned decisions are capable of accurately capturing it. Any consciously-held morals are only a vague map of morality, not morality itself, and so shouldn't hold too much import, on pain of moral wireheading/acceptance of a fake utility function. (Listening to moral intuitions, possibly distilled as moral principles, might give the best moral advice that's available in practice, but that doesn't mean that the advice is any good. Observing this advice might fail to give an adequate picture of the subject matter.)
3Kaj_Sotala11y
I must be misunderstanding this comment somehow? One still needs to decide what actions to take during every waking moment of their lives, and "in deciding what to do, don't pay attention to what you want" isn't very useful advice. (It also makes any kind of instrumental rationality impossible.)
2Vladimir_Nesov11y
What you want provides some information about what is right, so you do pay attention. When making decisions, you can further make use of moral principles not based on what you want at a particular moment. In both cases, making use of these signals doesn't mean that you expect them to be accurate, they are just the best you have available in practice. Estimate of the accuracy of the moral intuitions/principles translates into an estimate of value of information about morality. Overestimation of accuracy would lead to excessive exploitation, while an expectation of inaccuracy argues for valuing research about morality comparatively more than pursuit of moral-in-current-estimation actions.
3Osiris11y
I'm not a very well educated person in this field, but if I may: I see my various squishy feelings (desires and what-is-right intuitions are in this list) as loyal pets. Sometimes they must be disciplined and treated with suspicion, but for the most part they are there to please you in their own dumb way. They're no more my enemies than my preferences in food are. In my care for them, I train and reward them, not try to destroy or ignore them. Without them, I have no need to DO better among other people, because I would not be human -- that is, some things are important only because I'm a barely intelligent ape-man, and they should STAY important as long as I remain a barely intelligent ape-man. Ignoring something going on in one's mind, even when one KNOWS it is wrong, can be a source of pain, I've found -- hypocrisy and indecision are not my friends. Hope I didn't make a mess of things with this comment.
2Kaj_Sotala11y
I'm roughly in agreement, though I would caution that the exploration/exploitation model is a problematic one to use in this context, for two reasons:

1) It implies a relatively clear map/territory split: there are our real values, and our conscious model of them, and errors in our conscious model do not influence the actual values. But to some extent, our conscious models of our values do shape our unconscious values in that direction -- if someone switches to an exploitation phase "too early", then over time, their values may actually shift over to what the person thought they were.

2) Exploration/exploitation also assumes that our true values correspond to something akin to an external reward function: if our model is mistaken, then the objectively correct thing to do would be to correct it. In other words, if we realize that our conscious values don't match our unconscious ones, we should revise our conscious values. And sometimes this does happen. But on other occasions, what happens is that our conscious model has become installed as a separate and contradictory set of values, and we need to choose which of the values to endorse (in which situations). This happening is a bad thing if you tend to primarily endorse your unconscious values or a lack of internal conflict, but arguably a good thing if you tend to primarily endorse your conscious values.

The process of arriving at our ultimate values seems to be both an act of discovering them and an act of creating them, and we probably shouldn't use terminology like exploration/exploitation that implies that it would be just one of those.
2Vladimir_Nesov11y
This is value drift. At any given time, you should fix (i.e. notice, as a concept) the implicit idealized values at that time and pursue them even if your hardware later changes and starts implying different values (in the sense where your dog or your computer or an alien also should (normatively) pursue them forever; they are just (descriptively) unlikely to, but you should plot to make that more likely, all else equal).

As an analogy, if you are interested in solving different puzzles on different days, then the fact that you are no longer interested in solving yesterday's puzzle doesn't address the problem of solving yesterday's puzzle. And idealized values don't describe valuation of you, the abstract personal identity, of your actions and behavior and desires. They describe valuation of the whole world, including future you with value drift as a particular case that is not fundamentally special. The problem doesn't change, even if the tendency to be interested in a particular problem does. The problem doesn't get solved because you are no longer interested in it. Solving a new, different problem does not address the original problem.

The nature of idealized values is irrelevant to this point: whatever they are, they are that thing that they are, so that any "correction" discards the original problem statement and replaces it with a new one. What you can and should correct are intermediate conclusions. (Alternatively, we are arguing about definitions, and you read in my use of the term "values" what I would call intermediate conclusions, but then again I'm interested in you noticing the particular idea that I refer to with this term.)

I don't think "unconscious values" is a good proxy for abstract implicit valuation of the universe; consciously-inaccessible processes in the brain are at a vastly different level of abstraction compared to the idealization I'm talking about. This might be true in the sense that humans probably underdetermine the valuation of the universe.
0Kaj_Sotala11y
I think that the concept of idealized value is obviously important in an FAI context, since we need some way of formalizing "what we want" in order to have any way of ensuring that an AI will further the things we want. I do not understand why the concept would be relevant to our personal lives, however.
1Vladimir_Nesov11y
The question of what is normatively the right thing to do (given the resources available) is the same for a FAI and in our personal lives. My understanding is that "implicit idealized value" is the shape of the correct answer to it, not just a tool restricted to the context of FAI. It might be hard for a human to proceed from this concept to concrete decisions, but this is a practical difficulty, not a restriction on the scope of applicability of the idea. (And to see how much of a practical difficulty it is, it is necessary to actually attempt to resolve it.) If idealized value indicates the correct shape of normativity, the question should instead be, How are our personal lives relevant to idealized value? One way was discussed a couple of steps above in this conversation: exploitation/exploration tradeoff. In pursuit of idealized values, if in our personal lives we can't get much information about them, a salient action is to perform/support research into idealized values (or relevant subproblems, such as preventing/evading global catastrophes).
1Kaj_Sotala11y
What does this mean? It sounds like you're talking about some kind of objective morality?
3A1987dM11y
I've interacted with enough red-haired people and enough black-haired people that (assuming the anti-zombie principle) I'm somewhat confident that there's no big difference on average between the ways they suffer. I'm nowhere near as confident about fish.
8Kaj_Sotala11y
I already addressed that uncertainty in my comment. To elaborate: it's perfectly reasonable to discount the suffering of e.g. fish by some factor because one thinks that fish probably suffer less. But as I read it, someone who says "human suffering is more important" isn't saying that: they're saying that they wouldn't care about animal suffering even if it was certain that animals suffered just as much as humans, or even if it was certain that animals suffered more than humans. It's saying that no matter the intensity or nature of the suffering, only suffering that comes from humans counts.
0shminux11y
Even less so about silverfish, despite its complex mating rituals.

Human suffering might be orders of magnitude more important. (Though: what reason do you have in mind for this?) But non-human animal suffering is likely to be orders of magnitude more common. Some non-human animals are probably capable of suffering, and we care a great deal about suffering in the case of humans (as, presumably, we would in the case of intelligent aliens). So it seems arbitrary to exclude non-human animal suffering from our concerns completely. Moreover, if you're uncertain about whether animals suffer, you should err on the side of assuming that they do because this is the safer assumption. Mistakenly killing thousands of suffering moral patients over your lifetime is plausibly a much bigger worry than mistakenly sparing thousands of unconscious zombies and missing out on some mouth-pleasures.
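The "safer assumption" point is a simple expected-value asymmetry. A minimal sketch with made-up numbers, purely for illustration (the probability and both payoffs below are assumptions, not anything argued for above):

```python
# Toy expected-value version of the "err on the side of assuming they
# suffer" argument. All numbers are invented for illustration.

p_suffer = 0.3            # assumed probability that the animals can suffer
harm_if_they_do = 1000.0  # moral cost of harming real moral patients
gain_if_they_dont = 1.0   # foregone "mouth-pleasures" if they are zombies

ev_continue = p_suffer * (-harm_if_they_do) + (1 - p_suffer) * gain_if_they_dont
ev_abstain = 0.0

print(ev_continue)  # -299.3: badly negative even though suffering is uncertain
print(ev_abstain)   # 0.0
# As long as the possible harm dwarfs the possible gain, even a modest
# p_suffer makes abstaining the higher-expected-value option.
```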

I'm not a vegetarian myself, but I do think vegetarianism is a morally superior option. I also think vegetarians should adopt a general policy of not paying people to become vegetarians (except perhaps as a short-term experiment, to incentivize trying out the lifestyle).

1Qiaochu_Yuan11y
I'm a human and I care about humans. Animals only matter insofar as they affect the lives of humans. Is this really such a difficult concept? I don't mean per organism, I mean in aggregate. In aggregate, I think the totality of animal suffering is orders of magnitude less important than the totality of human suffering. I'm not disagreeing that animals suffer. I'm telling you that I don't care whether they suffer.

I'm a human and I care about humans.

You are many things: a physical object, a living being, a mammal, a member of the species Homo sapiens, an East Asian (I believe), etc. What's so special about the particular category you picked?

-1Qiaochu_Yuan11y
The psychological unity of humankind. See also this comment.

Presumably mammals also exhibit more psychological similarity to one another than to non-mammals, and the same is probably true of East Asians relative to members of other races. What makes the psychological unity of mankind special?

Moreover, it seems that insofar as you care about humans because they have certain psychological traits, you should care about any creature that has those traits. Since many animals have many of the traits that humans have, and some animals have those traits to a greater degree than some humans do, it seems you should care about at least some nonhuman animals.

3Qiaochu_Yuan11y
I'm willing to entertain this possibility. I've recently been convinced that I should consider caring about dolphins and other similarly intelligent animals, possibly including pigs (so I might be willing to give up pork). I still don't care about fish or chickens. I don't think I can have a meaningful relationship with a fish or a chicken even in principle.
1A1987dM11y
I suspect that if you plotted all living beings by psychological similarity with Qiaochu_Yuan, there would be a much bigger gap between the -- [reminds himself about small children, people with advanced-stage Alzheimer's, etc.] never mind.
2Pablo11y
:-)
1A1987dM11y
(I could steelman my yesterday self by noticing that even though small children aren't similar to QY they can easily become so in the future, and by replacing “gap” with “sparsely populated region”.)
1Nornagest11y
Doesn't follow. If we imagine a personhood metric for animals evaluated over some reasonably large number of features, it might end up separating (most) humans from all nonhuman animals even if for each particular feature there exist some nonhuman animals that beat humans on it. There's no law of ethics saying that the parameter space has to be small. It's not likely to be a clean separation, and there are almost certainly some exceptional specimens of H. sapiens that wouldn't stand up to such a metric, but -- although I can't speak for Qiaochu -- that's a bullet I'm willing to bite.
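A minimal numeric sketch of that point, with invented features and scores (assumptions purely for illustration): an aggregate metric over many features can separate humans from every nonhuman animal even though each individual feature has a nonhuman champion.

```python
# Invented scores: each feature below has some animal that beats humans,
# yet humans come out ahead on the aggregate personhood metric.

scores = {
    "human":    [3, 3, 3],  # best at nothing in particular...
    "elephant": [5, 1, 1],  # better memory
    "eagle":    [1, 5, 1],  # better vision
    "ant":      [1, 1, 5],  # more eusocial
}

for name, features in scores.items():
    print(name, sum(features))
# human 9, elephant 7, eagle 7, ant 7:
# humans separate cleanly on the sum despite losing feature-by-feature.
```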
0Said Achmiz11y
Does not follow, since an equally valid conclusion is that Qiaochu_Yuan should not-care about some humans (those that exhibit relevant traits less than some nonhuman animals). One person's modus ponens is etc.
8Rob Bensinger11y
Every human I know cares at least somewhat about animal suffering. We don't like seeing chickens endlessly and horrifically tortured -- and when we become vividly acquainted with such torture, our not-liking-it generally manifests as a desire for the torture to stop, not just as a desire to become ignorant that this is going on so it won't disturb our peace of mind. I'll need more information to see where the disanalogy is supposed to be between compassion for other species and compassion for other humans. Are you certain you don't care? Are you certain that you won't end up viewing this dispassion as a bias on your part, analogous to people in history who genuinely didn't care at all about black people (but would regret and abandon this apathy if they knew all the facts)? If you feel there's any realistic chance you might discover that you do care in the future, you should again err strongly on the side of vegetarianism. Feeling a bit silly 20 years from now because you avoided torturing beings it turns out you don't care about is a much smaller cost than learning 20 years from now you're the hitler of cows. Vegetarianism accommodates meta-uncertainty about ethical systems better than its rivals do.
3Qiaochu_Yuan11y
I don't feel psychologically similar to a chicken in the same way that I feel psychologically similar to other humans. No, or else I wouldn't be asking for arguments. This is a good point.
3Rob Bensinger11y
I don't either, but unless I can come up with a sharp and universal criterion for distinguishing all chickens from all humans, chickens' psychological alienness to me will seem a difference of degree more than of kind. It's a lot easier to argue that chicken suffering matters less than human suffering (or to argue that chickens are zombies) than to argue that chicken suffering is completely morally irrelevant. Some chickens may very well have more psychologically in common with me than I have in common with certain human infants or with certain brain-damaged humans; but I still find myself able to feel that sentient infants and disabled sentient humans oughtn't be tortured. (And not just because I don't want their cries to disturb my own peace of mind. Nor just because they could potentially become highly intelligent, through development or medical intervention. Those might enhance the moral standing of any of these organisms, but they don't appear to exhaust it.)
-2Jiro11y
That's not a good point, that's a variety of Pascal's Mugging: you're suggesting that the fact that the possible consequence is large ("I tortured beings" is a really negative thing) means that even if the chance is small, you should act on that basis.
2BerryPick611y
It's not a variant of Pascal's Mugging, because the chances aren't vanishingly small and the payoff isn't nearly infinite.
5shminux11y
I don't believe you. If you see someone torturing a cat, a dolphin or a monkey, would you feel nothing? (Suppose that they are not likely to switch to torturing humans, to avoid "gateway torture" complications.)
2TheOtherDave11y
My problem with this question is that if I see video of someone torturing a cat when I am confident there was no actual cat-torturing involved in creating those images (e.g., I am confident it was all photoshopped), what I feel is pretty much indistinguishable from what I feel if I see video of someone torturing a cat when I am confident there was actual cat-torturing. So I'm reluctant to treat what I feel in either case as expressing much of an opinion about suffering, since I feel it roughly equally when I believe suffering is present and when I don't.
0Kawoomba11y
So if you can factor-out, so to speak, the actual animal suffering: If you had to choose between "watch that video, no animal was harmed" versus "watch that video, an animal was harmed, also you get a biscuit (not the food, the 100 squid (not the animals, the pounds (not the weight unit, the monetary unit)))", which would you choose? (Your feelings would be the same, as you say, your decision probably wouldn't be. Just checking.)
5Qiaochu_Yuan11y
What?
9Eliezer Yudkowsky11y
A biscuit provides the same number of calories as 100 SQUID, which stands for Superconducting Quantum Interference Device, which weigh a pound apiece, which masses 453.6 grams, which converts to 4 * 10^16 joules, which can be converted into 1.13 * 10^10 kilowatt-hours, which are worth 12 cents per kW-hr, so around 136 billion dollars or so.
2TheOtherDave11y
...plus a constant.
-1Kawoomba11y
Reminds me of ... Note the name of the website. She doesn't look happy! "I am altering the deal. Pray I don't alter it any further." Edit: Also, 1.13 * 10^10 kilowatt-hours at 12 cents each yields 1.36 billion dollars, not 136 billion dollars! An honest mistake (cents, not dollars per kWh), or a scam? And as soon as Dmitry is less active ...
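For anyone who wants to check the conversion chain, a minimal sketch re-running the arithmetic from the joke above (the 12 cents/kWh price and the full mass-energy conversion of one pound are taken from that comment):

```python
# Re-running the thread's mass-energy arithmetic.

POUND_KG = 0.4536        # 453.6 grams
C_M_PER_S = 2.998e8      # speed of light
JOULES_PER_KWH = 3.6e6   # 1 kWh = 3.6e6 J
PRICE_PER_KWH = 0.12     # dollars

energy_j = POUND_KG * C_M_PER_S ** 2     # ~4.08e16 J ("4 * 10^16 joules")
energy_kwh = energy_j / JOULES_PER_KWH   # ~1.13e10 kWh
value_usd = energy_kwh * PRICE_PER_KWH   # ~1.36e9 dollars

print(f"{energy_j:.3g} J = {energy_kwh:.3g} kWh = ${value_usd:.3g}")
# -> 4.08e+16 J = 1.13e+10 kWh = $1.36e+09
# i.e. about 1.36 billion dollars, confirming the factor-of-100 slip.
```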
7Vaniver11y
"squid" is slang for a GBP, i.e. Pound Sterling, although I'm more used to hearing the similar "quid." One hundred of them can be referred to as a "biscuit," apparently because of casino chips, similar to how people in America will sometimes refer to a hundred dollars as a "benjamin." That is, what are TheOtherDave's preferences between watching an unsettling movie that does not correspond to reality and watching an unsettling movie that does correspond to reality, but they're paid some cash.
6Paul Crowley11y
"Quid" is slang, "squid" is a commonly used jokey soundalike. There's a joke that ends "here's that sick squid I owe you". EDIT: also, never heard "biscuit" = £100 before; that's a "ton".
0Vaniver11y
Does Cockney rhyming slang not count as slang?
0wedrifid11y
In this case it seems to. It's the first time I recall encountering it but I'm not British and my parsing of unfamiliar and 'rough' accents is such that if I happened to have heard someone say 'squid' I may have parsed it as 'quid', and discarded the 's' as noise from people saying a familiar term in a weird way rather than a different term.
0TheOtherDave11y
It amuses me that despite making neither head nor tail of the unpacking, I answered the right question. Well, to the extent that my noncommital response can be considered an answer to any question at all.
0Qiaochu_Yuan11y
Well, I figured that much out from googling, but I was more reacting to what seems like a deliberate act of obfuscation on Kawoomba's part that serves no real purpose.
5Vaniver11y
Nested parentheses are their own reward, perhaps?
-4Kawoomba11y
In an interesting twist, in many social circles (not here) your use of the word "obfuscation" would be obfuscatin' in itself. To be very clear though: "Eschew obfuscation, espouse elucidation."
0Paul Crowley11y
So to be clear - you do some Googling and find two videos, one has realistic CGI animal harm, the other real animal harm; assume the CGI etc is so good that I wouldn't be able to tell which was which if you hadn't told me. You don't pay for the animal harm video, or in any way give anyone an incentive to harm an animal in fetching it; just pick up a pre-existing one. I have a choice between watching the fake-harm video (and knowing it's fake) or watching the real-harm video and receiving £100. If the reward is £100, I'll take the £100; if it's an actual biscuit, I prefer to watch the fake-harm video.
-1TheOtherDave11y
I'm genuinely unsure, not least because of your perplexing unpacking of "biscuit". Both examples are unpleasant; I don't have a reliable intuition as to which is more so, if indeed either is. I have some vague notion that if I watch the real-harm video that might somehow be interpreted as endorsing real harm more strongly than if I watch the fake-harm video, like through ratings or download monitoring or something, which inclines me to the fake-harm video. Though whether I'm motivated by the vague belief that such differential endorsement might cause more harm to animals, or by the vague belief that it might cause more harm to my status, I'm again genuinely unsure. In the real world I usually assume that when I'm not sure it's the latter, but this is such a contrived scenario that I'm not confident of that either. If I assume the biscuit is a reward of some sort, then maybe that reward is enough to offset the differential endorsement above, and maybe it isn't.
0Qiaochu_Yuan11y
I don't want to see animals get tortured because that would be an unpleasant thing to see, but there are lots of things I think are unpleasant things to see that don't have moral valence (in another comment I gave the example of seeing corpses get raped). I might also be willing to assign dolphins and monkeys moral value (I haven't made up my mind about this), but not most animals.
0CoffeeStain11y
Do you have another example besides the assault of corpses? I can easily see real moral repugnance from the effect it has on the offenders, who are victims of their own actions. If you find it unpleasant only when you see it, would not they find it horrific when they perform it? Also in these situations, repugnance can leak due to uncertainty of other real moral outcomes, such as the (however small) likelihood of family members of the deceased learning of the activity, for whom these corpses have real moral value.
2A1987dM11y
Two Girls One Cup?
0Qiaochu_Yuan11y
Seeing humans perform certain kinds of body modifications would also be deeply unpleasant to me, but it's also not an act I assign moral valence to (I think people should be allowed to modify their bodies more or less arbitrarily).
-1Said Achmiz11y
I'll chime in to comment that QiaochuYuan's[1] views as expressed in this entire thread are quite similar to my own (with the caveat that for his "human" I would substitute something like "sapient, self-aware beings of approximately human-level intelligence and above" and possibly certain other qualifiers having to do with shared values, to account for Yoda/Spock/AIs/whatever; it seems like QiaochuYuan uses "approximately human" to mean roughly this). So, please reconsider your disbelief. [1] Sorry, the board software is doing weird things when I put in underscores...
2shminux11y
So, presumably you don't keep a pet, and if you did, you would not care for its well-being?
-1Said Achmiz11y
Indeed, I have no pets. If I did have a pet, it is possible that I would not care for it (assuming animal cruelty laws did not exist), although it is more likely that I would develop an attachment to it, and would come to care about its well-being. That is how humans work, in my experience. I don't think this necessarily has any implications w.r.t. the moral status of nonhuman animals.
1KatieHartman11y
Do you consider young children and very low-intelligence people to be morally-relevant? (If - in the case of children - you consider potential for later development to be a key factor, we can instead discuss only children who have terminal illnesses.)
2Said Achmiz11y
Good question. Short answer: no.

Long answer: When I read Peter Singer, what I took away was not, as many people here apparently did, that we should value animals; what I took away is that we should not value fetuses, newborns, and infants (to a certain age, somewhere between 0 and 2 years [1]). That is, I think the cutoff for moral relevance is somewhere above, say, cats, dogs, newborns... where exactly? I'm not sure. Humans who have a general intelligence so low that they are incapable of thinking about themselves as conscious individuals are also, in my view, not morally relevant. I don't know whether such humans exist (most people with Down syndrome don't quite seem to fit that criterion, for instance).

There are many caveats and edge cases, for instance: what if the low-intelligence condition is temporary, and will repair itself with time? Then I think we should consider the wishes of the self that the person was before the impairment, and the rights of their future, non-impaired, selves. But what if the impairment can be repaired using medical technology? Same deal. What if it can't? Then I would consider this person morally irrelevant. What if the person was of extremely low intelligence, and had always been so, but we could apply some medical intervention to raise their intelligence to at least normal human level? I would consider that act morally equivalent to creating a new sapient being (whether that's good or bad is a separate question).

So: it's complicated. But to answer practical questions: I don't consider infanticide the moral equivalent of murder (although it's reasonable to outlaw it anyway, as birth is a good Schelling point, but the penalty should surely be nowhere near as harsh as for killing an adult or older child). The rights of low-intelligence people is a harder issue, partly because there are no obvious cutoffs or metrics.

I hope that answers your question; if not, I'll be happy to elaborate further.
4Eliezer Yudkowsky11y
Ethical generalizations check: Do you care about Babyeaters? Would you eat Yoda?
4wedrifid11y
Would that allow absorbing some of his midichlorians? Black magic! Well, I might try (since he died of natural causes anyway). But Yoda dies without leaving a corpse. It would be difficult. The only viable strategy would seem to be to have Yoda anesthetize himself a minute before he ghosts ("becomes one with the Force"). Then the flesh would remain corporeal for consumption. The real ethical test would be: would I freeze Yoda's head in carbonite, acquire brain scanning technology and upload him into a robot body? Yoda may have religious objections to the practice, so I may honour his preferences while being severely disappointed. I suspect I'd choose the Dark Side of the Force myself. The Sith philosophy seems much more compatible with life extension by whatever means necessary.
5CCC11y
It should be noted that Yoda has an observable afterlife. Obi-wan had already appeared after his body had died, apparently in full possession of his memories and his reasoning abilities; Yoda proposes to follow in Obi-wan's footsteps, and has good reason to believe that he will be able to do so.
1Kawoomba11y
Sith philosophy, for reference:

Peace is a lie, there is only passion.
Through passion, I gain strength.
Through strength, I gain power.
Through power, I gain victory.
Through victory, my chains are broken.
The Force shall free me.
8Eliezer Yudkowsky11y
Actual use of Sith techniques seems to turn people evil at ridiculously accelerated rates. At least in-universe it seems that sensible people would write off this attractive-sounding philosophy as window dressing on an extremely damaging set of psychic techniques.
0nshepperd11y
If you're lucky, it might grant intrinsic telepathy, as long as the corpse is relatively fresh.
4Qiaochu_Yuan11y
Nope (can't parse them as approximately human without revulsion). Nope (approximately human).
-2Jiro11y
I wouldn't eat flies or squids either. But I know that that's a cultural construct. Let's ask another question: would I care if someone else eats Yoda? Well, I might, but only because eating Yoda is, in practice, correlated with lots of other things I might find undesirable. If I could be assured that such was not the case (for instance, if there was another culture which ate the dead to honor them, that's why he ate Yoda, and Yoda's will granted permission for this), then no, I wouldn't care if someone else eats Yoda.
2wedrifid11y
In practice? In common Yoda-eating practice? Something about down to earth 'in practice' empirical observations about things that can not possibly have ever occurred strikes me as broken. Perhaps "would be, presumably, correlated with". In Yoda's case he could even have just asked for permission from Yoda's force ghost. Jedi add a whole new level of meaning to "Living Will".
-6Jiro11y
7Peter Wildeford11y
I am a moral anti-realist, so I don't think there's any argument I could give you to persuade you to change your values. To me, it feels very inconsistent to not value animals -- it sounds to me exactly like someone asking for an argument about why they ought to care about foreigners.

Also, do you really not value animals? I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction. Though maybe you wouldn't, or you would think the reaction irrational? I don't know. However, if you really do care about humans and humans alone, the environmental argument still has weight, though certainly less.

~ One can get both protein and deliciousness from non-meat sources.

~ I'm not sure. I don't think there's a way I could make that transaction work.

Also, do you really not value animals? I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction.

Some interesting things about this example:

  1. Distance seems to have a huge impact when it comes to the bystander effect, and it's not clear that it's irrational. If you are the person who is clearly best situated to save a puppy from torture, that seems different from the fact that dogs are routinely farmed for meat in other parts of the world, by armies of people you could not hope to personally defeat or control.

  2. Someone who is willing to be sadistic to animals might be sadistic towards humans as well, and so they may be a poor choice to associate with (and possibly a good choice to anti-associate with).

  3. Many first world countries have some sort of law against bestiality. (In the US, this varies by state.) However, any justification for these laws based on the rights of the animals would also rule out related behavior in agribusiness, which is generally legal. There seems to be a difference between what people are allowed to do for fun and what they're allowed to do for profit; this makes sense in light of viewing the laws as being aimed not at actions, but at kinds of people.

5Qiaochu_Yuan11y
Well, and what would you say to someone who thought that? I don't know. It doesn't feel like I do. You could try to convince me that I do even if you're a moral anti-realist. It's plausible I just haven't spent enough time around animals. Probably. I mean, all else being equal I would prefer that an animal not be tortured, but in the case of farming and so forth all else is not equal. Also, like Vaniver said, any negative reaction I have directed at the person is based on inferences I would make about that person's character, not based on any moral weight I directly assign to what they did. I would also have some sort of negative reaction to someone raping a corpse, but it's not because I value corpses. My favorite non-meat dish is substantially less delicious than my favorite meat dish. I do currently get a decent amount of protein from non-meat sources, but asking someone who gets their protein primarily from meat to give up meat means asking them to incur a cost in finding and purchasing other sources of protein, and that cost needs to be justified somehow. Really? This can't be that hard a problem to solve. We could use a service like Fiverr, with you paying me $5 not to eat meat for some period of time.
5Peter Wildeford11y
Right now, I don't know. I feel like it would be playing a losing game. What would you say? I'm not sure how I would do that. Would you kick a puppy? If not, why not? How could I verify that you actually refrain from eating meat?
3Qiaochu_Yuan11y
I would probably say something like "you just haven't spent enough time around them. They're less different from you than you think. Get to know them, and you might come to see them as not much different from the people you're more familiar with." In other words, I would bet on the psychological unity of mankind. Some of this argument applies to my relationship with the smarter animals (e.g. maybe pigs) but not to the dumber ones (e.g. fish). Although I'm not sure how I would go about getting to know a pig. No. Again, all else being equal, I would prefer that animals not suffer, but in the context of reducing animal suffering coming from human activity like farming, all else is not equal. I wouldn't chop down a tree either, but it's not because I think trees have moral value, and I don't plan to take any action against the logging industry as a result. Oh, that's what you were concerned about. It would be beneath my dignity to lie for $5, but if that isn't convincing, then I dunno. (On further thought, this seems like a big problem for measuring the actual impact of any proposed vegetarian proselytizing. How can you verify that anyone actually refrains from eating meat?)
1DavidAgain11y
"No. Again, all else being equal, I would prefer that animals not suffer, but in the context of reducing animal suffering coming from human activity like farming, all else is not equal. I wouldn't chop down a tree either, but it's not because I think trees have moral value, and I don't plan to take any action against the logging industry as a result." All else is never precisely equal. If I offered you £100 to do one of these of your choice, would you rather a) give up meat for a month b) beat a puppy to death I suspect that the vast majority of people who eat battery chicken to save a few dollars would require much more money to directly cause the same sort of suffering to a chicken. Whereas when it came to chopping down trees it would be more a matter of if the cash was worth the effort. Of course, it could very easily be that the problem here is not with Person A (detached, callous eater of battery chicken) but with Person B (overemphathic anthrophomorphic person who doesn't like to see chickens suffering), but the contrast is quite telling.
2TheOtherDave11y
For what it's worth, I also wouldn't treat painlessly and humanely slaughtering a chicken who has lived a happy and fulfilled life with my own hands equivalently to paying someone else to do so where I don't have to watch. There's quite a contrast there, as well, but it seems to have little to do with suffering. That said, I would almost undoubtedly prefer watching a chicken be slaughtered painlessly and humanely to watching it suffer while being slaughtered. Probably also to watching it suffer while not being slaughtered. Mostly, I conclude that my preferences about what I want to do, what I want to watch, and what I want to have done on my behalf, are not well calibrated to one another.
0DavidAgain11y
Yeah, that's the only clear conclusion. The general approach of moral argument is to try to say that one of your intuitions (whether the not caring about it being killed offstage or not enjoying throttling it) is the true/valid one and the others should be overruled. Honestly not sure where I stand on this.
4Said Achmiz11y
I don't think that "not enjoying killing a chicken" should be described as an "intuition". Moral intuitions generally take the form of "it seems to me that / I strongly feel that so-and-so is the right thing to do / the wrong thing to do / bad / good / etc." What you do or do not enjoy doing is a preference, like enjoying chocolate ice cream, not enjoying ice skating, being attracted to blondes, etc. Preferences can't be "true" or "false", they're just facts about your mental makeup. (It may make sense to describe a preference as "invalid" in certain senses, however, but not obviously any senses relevant to this current discussion.) So for instance "I think killing a chicken is morally ok" (a moral intuition) and "I don't like killing chickens" (a preference) do not conflict with each other any more than "I think homosexuality is ok" and "I am heterosexual" conflict with each other, or "Being a plumber is ok (and in fact plumbers are necessary members of society)" and "I don't like looking inside my plumbing". Now, if you wanted to take this discussion to a slightly more subtle level, you might say: "This is different! Killing chickens causes in me a kind of psychic distress usually associated with witnessing or performing acts that I also consider to be immoral! Surely this is evidence that this, too, is immoral?" To that I can respond only that yes, this may be evidence in the strict Bayesian sense, but the signals your brain generates may be flawed. We should evaluate the ethical status of the act in question explicitly; yes, we should take moral intuitions into account, but my intuitions, at least, is that chicken-killing is fine, despite having no desire to do it myself. This screens off the "agh I don't want to do/watch this!" signal.
2TheOtherDave11y
The dividing lines between the kinds of cognitive states I'm inclined to call "moral intuitions" and the kinds of cognitive states I'm inclined to call "preferences" and the kinds of cognitive states I'm inclined to call "psychic distress" are not nearly as sharp, in my experience, as you seem to imply here. There's a lot of overlap, and in particular the states I enter surrounding activities like killing animals (especially cute animals with big eyes) don't fall crisply into just one category. But, sure, if we restrict the discussion to activities where those categories are crisply separated, those distinctions are very useful.
3TheOtherDave11y
Mm. If you mean to suggest that the outcome of moral reasoning is necessarily that one of my intuitions gets endorsed, then I disagree; I would expect worthwhile moral reasoning to sometimes endorse claims that my intuition didn't provide in the first place, as well as claims that my intuitions consistently reject. In particular, when my moral intuitions conflict (or,as SaidAchmiz suggests, when the various states that I have a hard time cleanly distinguishing from my moral intuitions despite not actually being any such thing conflict), I usually try to envision patterning the world in different ways that map in some fashion to some weighting of those states, ask myself what the expected end result of that patterning is, see whether I have clear preferences among those expected endpoints, work backwards from my preferred endpoint to the associated state-weighting, and endorse that state-weighting. The result of that process is sometimes distressingly counter-moral-intuitive.
0DavidAgain11y
Sorry, I was unclear: I meant moral (and political) arguments from other people - moral rhetoric if you like - often take that form.
0TheOtherDave11y
Ah, gotcha. Yeah, that's true.
4Vladimir_Nesov11y
The relevant sense of changing values is change of someone else's purposeful behavior. The philosophical classification of your views doesn't seem like useful evidence about that possibility.
3Peter Wildeford11y
I don't understand what that means for my situation, though. How am I supposed to argue him out of his current values? I mean, it's certainly possible to change someone's values through anti-realist argumentation. My values were changed in that way several times. But I don't know how to do it.
1Vladimir_Nesov11y
This is a separate question. I was objecting to the relevance of invoking anti-realism in connection with this question, not to the bottom line where that argument pointed.
0Peter Wildeford11y
If moral realism were true, there would be a very obvious path to arguing someone out of their values -- argue for the correct values. In my experience, when people want an argument to change their values, they want an argument for what the correct value is, assuming moral realism. Moral anti-realism certainly complicates things.
1A1987dM11y
That doesn't necessarily mean that I have animals being tortured as a negative terminal value: I might only dislike that because it generates negative warm fuzzies.
1MugaSofer11y
This also applies to foreigners, though.
0A1987dM11y
Well, it also applies to blood relatives, for that matter.
-1Larks11y
Unfortunately, the typical argument in favour of caring about foreigners, people of other races, etc., is that they are human too.

If distinct races were instead distinct human subspecies or closely-related species, would the moral case for treating these groups equally ipso facto collapse?

If not, then 'they're human too' must be a stand-in for some other feature that's really doing the pushing and pulling of our moral intuitions. At the very least, we need to taboo 'human' to figure out what the actual relevant concept is, since it's not the standard contemporary biological definition.

3CCC11y
In my case, I think that the relevant concept is human-level (or higher) intelligence. Of all the known species on Earth, humanity is the only one that I know to possess human-level or higher intelligence. One potentially suitable test for human-level intelligence is the Turing test; due to their voice-mimic abilities, a parrot or a mynah bird may sound human at first, but it will not in general pass a Turing test. Biological engineering on an almost-sufficiently-intelligent species (such as a dolphin) may lead to another suitably intelligent species with very little relation to a human.

That different races have effectively the same intellectual capacities is surely an important part of why we treat them as moral equals. But this doesn't seem to me to be entirely necessary — young children and the mentally handicapped may deserve most (though not all) moral rights, while having a substantially lower level of intelligence. Intelligence might also turn out not to be sufficient; if a lot of why we care about other humans is that they can experience suffering and pleasure, and if intelligent behavior is possible without affective and evaluative states like those, then we might be able to build an AI that rivaled our intelligence but did not qualify as a moral patient, or did not qualify as one to the same extent as less-intelligent-but-more-suffering-prone entities.

0MugaSofer11y
Clearly, below-human-average intelligence is still worth something ... so is there a cutoff point or what? (I think you're onto something with "intelligence", but since intelligence varies, shouldn't how much we care vary too? Shouldn't there be some sort of sliding scale?)
1CCC11y
That's a very good question. I don't know. Thinking through my mental landscape, I find that in most cases I value children (slightly) above adults. I think that this is more a matter of potential than anything else. I also put some value on an unborn human child, which could reasonably be said to have no intelligence at all (especially early on). So, given that, I think that I put some fairly significant value on potential future intelligence as well as on present intelligence. But, as you point out, below-human intelligence is still worth something. ... I don't think there's really a firm cutoff point, such that one side is "worthless" and the other side is "worthy". It's a bit like a painting. At one time, there's a blank canvas, a paintbrush, and a pile of tubes of paint. At this point, it is not a painting. At a later time, there's a painting. But there isn't one particular moment, one particular stroke of the brush, when it goes from "not-a-painting" to "painting". Similarly for intelligence; there isn't any particular moment when it switches automatically from "worthless" to "worthy". If I'm going to eat meat, I have to find the point at which I'm willing to eat it by some other means than administering I.Q. tests (especially as, when I'm in the supermarket deciding whether or not to purchase a steak, it's a bit late to administer any tests to the cow). Therefore, I have to use some sort of proxy measurement with correlation to intelligence instead. For the moment, i.e. until some other species is proven to have human-level or near-human intelligence, I'm going to continue to use 'species' as my proxy measurement.
-2Larks11y
See Arneson's What, if anything, renders all humans morally Equal? edit: can't get the syntax to work, but here's the link: www.philosophyfaculty.ucsd.edu/faculty/rarneson/singer.pdf
-3[anonymous]11y
So what do you think of 'sapient' as a taboo for 'human'? Necessary conditions on sapience will, I suppose, be things like language use and sensation. As for those mentally handicapped enough to fall below sapience, I'm willing to bite the bullet on that so long as we're willing to discuss indirect reasons for according something moral respect. Something along the lines of Kant's claim that cruelty to animals is wrong not because of the rights of the animal (who has none) but because wantonly harming a living thing damages the moral faculties of the agent.
3Rob Bensinger11y
How confident are you that beings capable of immense suffering, but who haven't learned any language, all have absolutely no moral significance? That we could (as long as it didn't damage our empathy) brutally torture an arbitrarily large number of languageless beings for their entire lifetimes and never even cause as much evil as would one momentary dust speck to a language-user (who meets the other sapience conditions as well)? I don't see any particular reason for this to be the case, and again the risks of assuming it and being wrong seem much greater than the risks of assuming its negation and being wrong.
-2[anonymous]11y
I'm not committed to this, or anything close. What I'm committed to is the ground of moral respect being sapience, and whatever story we tell about the moral respect accorded to non-sapient (but, say, sentient) beings is going to relate back to the basic moral respect we have for sapience. This is entirely compatible with regarding sentient non-language-users as worthy of protection, etc. In other words, I didn't intend my suggestion about a taboo replacement to settle the moral-vegetarian question. It would be illicit to expect a rephrasing of the problem to do that. So to answer your question: I dunno, I didn't claim that they had no moral significance. I am pretty sure that if the universe consisted only of sentient but no sapient beings, I would be at a loss as to how we should discuss moral significance.
1elharo11y
"Sapience" is not a crisp category. Humans are more sapient than chimpanzees, crows, and dogs. Chimpanzees, crows, and dogs are more sapient than house cats and fish. Some humans are more or less sapient than other humans. Suppose one day we encounter a non-human intelligent species that is to us as we are to chimpanzees. Would suggest a species be justified in considering us as non-sapient and unworthy of moral respect? I don't think sapience and/or sentience is necessarily a bad place to start. However I am very skeptical of attempts to draw hard lines that place all humans in one set, and everything else on Earth in another.
0[anonymous]11y
Well, I was suggesting a way of making it pretty crisp: it requires language use. None of those other animals can really do that. But to the extent that they might be trained to do so, I'm happy to call those animals sapient. What's clear is that, for example, dogs, cows, or chickens are not at all sapient by this standard. No, but I think the situation you describe is impossible. That intelligent species (assuming they understood us well enough to make this judgement) would recognize that we're language-users. Chimps aren't.
5elharo11y
Sorry, still not crisp. If you're using sapience as a synonym for language, language is not a crisp category either. Crows and elephants have demonstrated abilities to communicate with other members of their own species. Chimpanzees can be taught enough language to communicate bidirectionally with humans. Exactly what this means for animal cognition and intelligence is a matter of much dispute among scientists, as is whether animals can really be said to use language or not; but the fact that it is disputed should make it apparent that the answer is not obvious or self-evident. It's a matter of degree. Ultimately this just seems like a veiled way to specially privilege humans, though not all of them. Is a stroke victim with receptive aphasia nonsapient? You might equally well pick the use of tools to make other tools, or some other characteristic to draw the line where you've predetermined it will be drawn; but it would be more honest to simply state that you privilege Homo sapiens sapiens, and leave it at that.
2[anonymous]11y
Not a synonym. Language use is a necessary condition. And by 'language use' I don't mean 'ability to communicate'. I mean more strictly something able to work with things like syntax and semantics and concepts and stuff. We've trained animals to do some pretty amazing things, but I don't think any, or at least not more than a couple, are really language users. I'm happy to recognize the moral worth of any there are, and I'm happy to recognize a gradient of worth on the basis of a gradient of sapience. I don't think anything we've encountered comes close to human beings on such a gradient, but that might just be my ignorance talking. It's not veiled! I think humans are privileged, special, better, more significant, etc. And I'm not picking an arbitrary part of what it means to be human. I think this is the very part that, were we to find it in a computer or an alien or an animal, would immediately lead us to conclude that this being had moral worth.
-1MugaSofer11y
Are you seriously suggesting that the difference between someone you can understand and someone you can't matters just as much as the difference between me and a rock? Do you think your own moral worth would vanish if you were unable to communicate with me?
0[anonymous]11y
Yes, I'm suggesting both, on a certain reading of 'can' and 'unable'. If I were, in principle, incapable of communicating with anyone (in the way worms are) then my moral worth, or anyway the moral worth accorded to sapient beings on the basis of their being sapient on my view, would disappear. I might have moral worth for other reasons, though I suspect these will come back to my holding some important relationship to sapient beings (like formerly being one). If you are asking whether my moral worth would disappear if I, a language user, were by some twist of fate made unable to communicate, then my moral worth would not disappear (since I am still a language user).
1Rob Bensinger11y
The goal of defining 'human' (and/or 'sapient') here is to steel-man (or at least better understand) the claim that only human suffering matters, so we can evaluate it. If "language use and sensation" end up only being necessary or sufficient for concepts of 'human' that aren't plausible candidates for the original 'non-humans aren't moral patients' claim, then they aren't relevant. The goal here isn't to come up with the one true definition of 'human', just to find one that helps with the immediate task of cashing out anthropocentric ethical systems. Well, you'd be at a loss because you either wouldn't exist or wouldn't be able to linguistically express anything. But we can still adopt an outsider's perspective and claim that universes with sentience but no sapience are better when they have a higher ratio of joy to suffering, or of preference satisfaction to preference frustration.
0[anonymous]11y
Right, exactly. Doing so, and defending an anthropocentric ethical system, does not entail that it's perfectly okay to subject sentient non-language users to infinite torture. It does probably entail that our reasons for protecting sentient non-language users (if we discover it ethically necessary to do so as anthropocentrists) will come down to anthropocentric reasons. This argument didn't begin as an attempt to steel-man the claim that only human suffering matters; it began as an attempt to steel-man the claim that the reason human suffering matters to us (when we have no other reason to care) is that it is specifically human suffering. Another way to put this is that I'm defending, or trying to steel-man, the claim that the fact that a human's suffering is human gives us a reason all on its own to think that that suffering is ethically significant. While nothing about an animal's suffering being animal suffering gives us a reason all on its own to think that that suffering is ethically significant. We could still have other reasons to think it so, so the 'infinite torture' objection doesn't necessarily land. We can discuss that world from this one.
1Rob Bensinger11y
You seem to be using 'anthropocentric' to mean 'humans are the ultimate arbiters or sources of morality'. I'm using 'anthropocentric' instead to mean 'only human experiences matter'. Then by definition it doesn't matter whether non-humans are tortured, except insofar as this also diminishes humans' welfare. This is the definition that seems relevant to Qiaochu's statement, "I am still not convinced that I should care about animal suffering." The question isn't why we should care; it's whether we should care at all. I don't think which reasons happen to psychologically motivate us matters here. People can have bad reasons to do good things. More interesting is the question of whether our good reasons would all be human-related, but that too is independent of Qiaochu's question. No, the latter was an afterthought. The discussion begins here.
0[anonymous]11y
Ah, okay, to be clear, I'm not defending this view. I think it's a strawman. I didn't refer to psychological reasons. An example besides Kant's (which is not psychological in the relevant sense) might be this: it is unethical to torture a cow because though cows have no ethical significance in and of themselves, they do have ethical significance as domesticated animals, who are wards of our society. But that's just an example of such a reason. I took the discussion to begin from Peter's response to that comment, since that comment didn't contain an argument, while Peter's did. It would be weird for me to respond to Qiaochu's request for an argument defending the moral significance of animal suffering by defending the idea that only human suffering is fundamental. But this is getting to be a discussion about our discussion. I'm not tapping out, quite, but I would like us to move on to the actual conversation.
0Rob Bensinger11y
Not if you agreed with Qiaochu that no adequately strong reasons for caring about any non-human suffering have yet been presented. There's no rule against agreeing with an OP.
0[anonymous]11y
Fair point, though we might be reading Qiaochu differently. I took him to be saying "I know of no reasons to take animal suffering as morally significant, though this is consistent with my treating it as if it is and with its actually being so." I suppose you took him to be saying something more like "I don't think there are any reasons to take animal suffering as morally significant." I don't have good reasons to think my reading is better. I wouldn't want to try and defend Qiaochu's view if the second reading represents it.
-3Eugine_Nier11y
If that was the case there would be no one to do the discussing.
1[anonymous]11y
Well, we could discuss that world from this one.
-3Eugine_Nier11y
Yes, and we could, for example, assign that world no moral significance relative to our world.
4Vaniver11y
I found it interesting to compare "this is the price at which we could buy animals not existing" to "this is the price people are willing to pay for animals to exist so they can eat them," because it looks like the second is larger, often by orders of magnitude. (This shouldn't be that surprising for persuasion; if you can get other people to spend their own resources, your costs are much lower.) It also bothers me that so many of the animals saved are fish; they dominate the weighted mean, have very different lifespans from chickens, and to the best of my knowledge cannot be 'factory farmed' in the same way. [Edit: It appears that conditions for fish on fish farms are actually pretty bad, to the point that many species of fish cannot survive modern farming techniques. So, no comment on the relative badness.]
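To make the weighted-mean point concrete, here is a minimal Python sketch. Every count and lifespan below is a hypothetical placeholder, not a figure from the post or this thread; the numbers are chosen only to show how a numerous, longer-lived category like fish can dominate the average.

    # Hypothetical illustration: how fish can dominate a weighted mean of
    # days lived per animal. All numbers are placeholders, not real figures.
    animals_per_year = {"fish": 225, "chickens": 28, "other": 2}
    days_lived = {"fish": 365, "chickens": 42, "other": 180}

    total_animals = sum(animals_per_year.values())
    weighted_mean = sum(
        animals_per_year[s] * days_lived[s] for s in animals_per_year
    ) / total_animals
    print(round(weighted_mean, 1))  # 328.1 days with these placeholder numbers

    # Fish alone contribute 225 * 365 of the 83,661 total animal-days here,
    # about 98%, so the weighted mean is effectively set by the fish inputs.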
4Peter Wildeford11y
From what I know, fish farming doesn't sound pleasant, though perhaps it's not nearly as bad as chicken farming.
0Douglas_Knight11y
If that description makes you think that fish farming might possibly be in the same ballpark as chicken farming, then you're pretty ignorant of factory farming. Maybe you haven't seen enough propaganda? Your other link is about killing the fish. Focusing on the death rather than the life may be good for propaganda, but do you really believe that much of the suffering is there? Indeed, your post claimed to be about days of life. Added: it makes me wonder if activists are corrupted by dealing with propaganda to focus on the aspects for which propaganda is most effective. Or maybe it's just that the propaganda works on them.
4Peter Wildeford11y
I never said they were in the same ballpark. Just that fish farming is also something I don't like. ~ Yes, I do. ~ I agree that might not make much sense for fish, except in so far as farming causes more fish to be birthed than otherwise would be. ~ I think this is a bias that is present in any kind of person that cares about advocating for or against a cause.
3Peter Wildeford11y
Here's a gruesome video on the whole fish thing if you're into gruesome videos.
2Desrtopa11y
Well, they can move more, but on the other hand they tend to pollute each others' environment in a way that terrestrial farmed animals do not, meaning that not all commercially fished species can survive being farmed with modern techniques, and those which can are not necessarily safe for humans to eat in the same quantities.
3A1987dM11y
There are decent arguments (e.g. this) for eating less meat even if you don't care about non-human animals as a terminal value.
2Pablo11y
You may want to take a look at this brief list of relevant writings I compiled in response to a comment by SaidAchmiz.
2selylindi11y
YMMV, but the argument that did it for me was Mylan Engel Jr.'s argument, as summarized and nicely presented here. On the assumption that the figures given by the OP are approximately right, with my adjustments for personal values, it would be cost-effective for me to pay you $18 (via BTC) to go from habitual omnivory to 98% ovo-lacto-vegetarianism for a year, or $24 (via BTC) to go from habitual omnivory to 98% veganism for a year, both prorated by month, of course with some modicum of evidence that the change was real. Let me know if you want to take up the offer.
2CCC11y
Looking over that argument, in the second link, I notice that those same premises would appear to support the conclusion that the most morally correct action possible would be to find some way to sterilize every vertebrate (possibly through some sort of genetically engineered virus). If there is no next generation - of anything, from horses to cows to tigers to humans to chickens - then there will be no pain and suffering experienced by that next generation. The same premises would also appear to support the conclusion that, having sterilised every vertebrate on the planet, the next thing to do is to find some painless way of killing every vertebrate on the planet, lest they suffer a moment of unnecessary pain or suffering. I find both of these potential conclusions repugnant; I recognise this as a mental safety net, warning me that I will likely regret actions taken in support of these conclusions in the long term.
1Qiaochu_Yuan11y
This is an argument for vegetarianism, not for caring about animal suffering: many parts of this argument have nothing to do with animal suffering but are arguments that humans would be better off if we ate less meat, which I'm also willing to entertain (since I do care about human suffering), but I was really asking about animal suffering. $18 a year is way too low.
0selylindi11y
I'm not offering a higher price since it seems cost ineffective compared to other opportunities, but I'm curious what your price would be for a year of 98% veganism. (The 98% means that 2 non-vegan meals per month are tolerated.)
1Qiaochu_Yuan11y
In the neighborhood of $1,000.
-1Eugine_Nier11y
I'm less willing to entertain said arguments seeing as how they come from people who are likely to have their bottom lines already written.
0Said Achmiz11y
I started reading the argument (in your second link), racked up a full hand of premises I disagreed with or found to be incoherent or terribly ill-defined before getting to so much as #10, and stopped reading. Then I decided that no, I really should examine any argument that convinced an intelligent opponent, and read through the whole thing (though I only skimmed the objections, as they are laughably weak compared to the real ones). Turns out my first reaction was right: this is a silly argument. Engel lists a number of premises, most of which I disagree with, launches into a tangent about environmental impact, and then considers objections that read like the halfhearted flailings of someone who's already accepted his ironclad reasoning. As for this: It makes me want to post the "WAT" duck in response. Like, is he serious? Or is this actually a case of carefully executed trolling? I begin to suspect the latter... Edit: Oh, and as Qiaochu_Yuan says, the argument assumes that we care about animal suffering, and so does not satisfy the request in the grandparent.
0selylindi11y
Based on your description here of your reaction, I get the impression that you mistook the structure of the argument. Specifically, you note, as if it were sufficient, that you disagree with several of the premises. Engel was not attempting to build on the conjunction (p1*p2*...*p16) of the premises; he was building on their disjunction (p1+p2+...+p16). Your credence in p1 through p16 would have to be uniformly very low to keep their disjunction also low. Personally, I give high credence to p1, p9, p10, and varying lower degrees of assent to the other premises, so the disjunction is also quite high for me, and therefore the conclusion has a great deal of strength; but even if I later rejected p1, p9, and p10, the disjunction of the others would still be high. It's that robustness of the argument, drawing more on many weak points than one strong one, that convinced me. I don't understand your duck/troll response to the quote from Engel. Everything he has said in that paragraph is straightforward. It is important that beliefs be true, not merely consistent. That does mean you oughtn't simply reject whichever premises get in the way of the conclusions you value. p1-p16 are indeed entangled with many other beliefs, and propagating belief and value updates of rejecting more of them is likely, in most people, to be a more severe change than becoming vegetarian. Really, if you find yourself suspecting that a professional philosopher is trolling people in one of his most famous arguments, that's a prime example of a moment to notice the fact that you're confused. It's possible you were reading him as saying something he wasn't saying. Regarding the edit: the argument does not assume that you care about animal suffering. I brought it up precisely because it didn't make that assumption. If you want something specifically about animal suffering, presumably a Kantian argument is the way to go: You examine why you care about yourself and you find it is because you have certain
1Said Achmiz11y
That's possible, but I don't think that's the case. But let me address the argument in a bit more detail and perhaps we'll see if I am indeed misunderstanding something. First of all, this notion that the disjunction of the premises leads to accepting the conclusion is silly. No one of the premises leads to accepting the conclusion. You have to conjoin at least some of them to get anywhere. It's not like they're independent, leading by entirely separate lines of reasoning to the same outcome; some clearly depend on others to be relevant to the argument. And I'm not sure what sort of logic you're using wherein you believe p1 with low probability, p2 with low probability, p3 ... etc., and their disjunction ends up being true. (Really, that wasn't sarcasm. What kind of logic are you applying here...?) Also, some of them are actually nonsensical or incoherent, not just "probably wrong" or anything so prosaic. The quoted paragraph: You're right, I guess I have no idea what he's saying here, because this seems to me blatantly absurd on its face. If you're interested in truth, of course you're going to reject those beliefs most likely to be false. That's exactly what you're going to do. The opposite of that is what you would do if you were, in fact, interested in mere consistency rather than truth. ??? You will want to reject those and only those beliefs that are false. If you think your belief system is reasonable, then you don't think any of your beliefs are false, or else you'd reject them. If you find that some of your beliefs are false, you will want to reject them, because if you're interested in truth then you want to hold zero false beliefs. I think that accepting many of (p1) – (p16) causes incoherence, actually. In any case, Engel seems to be describing a truly bizarre approach to epistemology where you care less about holding true beliefs than about not modifying your existing belief system too much, which seems like a perfect example of caring more about
0selylindi11y
(Hi, sorry for the delayed response. I've been gone.) Just the standard stuff you'd get in high school or undergrad college. Suppose we have independent statements S1 through Sn, and you assign each a subjective probability of P(Si). Then you have the probability of the disjunction P(S1+S2+S3+...+Sn) = 1-P(~S1)*P(~S2)*P(~S3)*...*P(~Sn). So if in a specific case you have n=10 and P(Si)=0.10 for all i, then even though you're moderately disposed to reject every statement, you're weakly disposed to accept the disjunction, since P(disjunction)=0.65. This is closely related to the preface paradox. You're right, of course, that Engel's premises are not all independent. The general effect on probability of disjunctions remains always in the same direction, though, since P(A+B)≥P(A) for all A and B. OK, yes, you've expressed yourself well and it's clear that you're interpreting him as having claimed the opposite of what he meant. Let me try to restate his paragraph in more LW-ish phrasing: "As a rationalist, you are highly interested in truth, which requires consistency but also requires a useful correspondence between your beliefs and reality. Consequently, when you consider that you believe it is not worthwhile for you to value animal interests and you discover that this belief is inconsistent with other of your beliefs, you will not reject just any of those other beliefs you think most likely to be false. (You will subject the initial, motivated belief to equal, unprivileged scrutiny along with the others, and tentatively accept the mutually consistent set of beliefs with the highest probability given your current evidence.)" If you're interested in reconsidering Engel's argument given his intended interpretation of it, I'd like to hear your updated reasons for/against it.
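For readers who want to check the arithmetic in that comment, here is a minimal Python sketch of the disjunction calculation under the stated independence assumption; the n = 10, P(Si) = 0.10 case is the one cited above.

    # P(S1 or S2 or ... or Sn) = 1 - P(~S1)*P(~S2)*...*P(~Sn),
    # assuming the statements are independent.
    from math import prod

    def disjunction_probability(probs):
        """Probability that at least one independent statement is true."""
        return 1 - prod(1 - p for p in probs)

    # Ten independent statements, each believed at only P = 0.10:
    print(round(disjunction_probability([0.10] * 10), 2))  # 0.65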
0Said Achmiz11y
Welcome back. Ok. I am, actually, quite familiar with how to calculate probabilities of disjunctions; I did not express my objection/question well, sorry. What I was having a hard time taking at face value was the notion of reasoning about moral propositions using this sort of probabilistic logic. That is to say: what, exactly, does it mean to say that you believe "We ought to take steps to make the world a better place" with P = 0.3? Like, maybe we should and maybe we shouldn't? Probabilities are often said to be understandable as bets; what would you be betting on, in this case? How would you settle such a bet? In short, for a lot of these propositions, it seems nonsensical to talk about levels of credence, and so what makes sense for reasoning about them is just propositional logic. In which case, you have to assert that if ANY of these things are true, then the entire disjunction is true (and from that, we conclude... something. What, exactly? It's not clear). And yet, I can't help but notice that Engel takes an approach that's not exactly either of the above. He says: I don't know how to interpret that. It seems strange. Logical arguments do not generally work this way, wherein you just have an unordered heap of undifferentiated, independent propositions, which you add up in any old order, and build up some conclusion from them like assembling a big lump of clay from smaller lumps of clay. I don't rightly know what it would mean for an argument to work like that. (In other words, my response to the Engel quote above is: "Uh, really? Why...?") As for your restatement of Engel's argument... First of all, I've reread that quote from Engel at the end of the PDF, and it just does not seem to me like he is saying what you claim he's saying. It seems to me that he is suggesting (in the last sentence of the quote) we reason backwards from which beliefs would force less belief revision to which beliefs we should accept as true. But, ok. Taking your formulation fo
0selylindi11y
I'd be betting on whether or not the proposition would follow from the relevant moral theory if I were in possession of all the relevant facts. The bet would be settled by collecting additional facts and updating. I incline toward consequentialist moral theories in which practicality requires that I can never possess all the relevant facts. So it is reasonable for me to evaluate situational moral rules and claims in probabilistic terms based on how confident I am that they will actually serve my overarching moral goals. As far as I'm aware, that's exactly how logical arguments work, formally. See the second paragraph here. Meat tastes good and is a great source of calories and nutrients. That's powerful motivation for bodies like us. But you can strike that word if you prefer. We aren't. We're requiring only and exactly that it not be singled out for immunity to consistency-checking. That's it! That's exactly the structure of Engel's argument, and what he was trying to get people to do. :)
4Said Achmiz11y
That is well and good, except that "making the world a better place" seems to be an overarching moral goal. At some point, we hit terminal values or axioms of some sort. "Whether a proposition would follow from a moral theory" is conceivably something you could bet on, but what do you do when the proposition in question is part of the relevant moral theory? Certainly not. Engel does not offer any deductive system for getting from the premises to the conclusion. In the derivation of an argument (as alluded to by the linked SEP article), premises and intermediate conclusions have to be ordered (at least partially ordered). Engel seems to be treating his premises as undifferentiated lumps, which you can take in any order, without applying any kind of deduction to them; you just take each ounce of premise and pour it into the big bucket-'o-premise, and see how much premise you end up with; if it's a lot of premise, the conclusion magically appears. The claim that it doesn't even matter which premises you hold to be true, only the quantity of them, seems to explicitly reject logical deduction. Alright then. To the object level! Let's see... Depends on how "pain" and "suffering" are defined. If you define "suffering" to include only mental states of sapient beings, of sufficient (i.e. at least roughly human-level) intelligence to be self-aware, and "pain" likewise, then sure. If you include pain experienced by sub-human animals, and include their mental states in "suffering", then first of all, I disagree with your use of the word "suffering" to refer to such phenomena, and second of all, I do not hold (p1) under such a formulation. See (p1). If by "cruelty" you mean ... etc. etc., basically the same response as (p1). Humans? Agreed. Animals? Nope. Depends on the steps. If by this you mean "any steps", then no. If by this you mean "this is a worthy goal, and we should find appropriate steps to achieve and take said steps", then sure. We'll count this one as a "yes"
-1shminux11y
My usual reply to a claim that a philosophical statement is "proven formally" is to ask for a computer program calculating the conclusion from the premises, in the claimant's language of choice, be it C or Coq.
0Said Achmiz11y
Oh, really? ;)

    string calculate_the_conclusion(string the_premises[]) {
        return "The conclusion. Q.E.D.";
    }

This function takes the premises as a parameter, and returns the conclusion. Criterion satisfied?
0shminux11y
Yes, it explicates the lack of logic, which is the whole point.
2Said Achmiz11y
I confess to being confused about your intended point. I thought you were more or less agreeing with me, but now I am not so sure?
2shminux11y
Yes, I was. My point was that if one writes a program that purports to prove that, then the code can be examined and the hidden assumptions and inferences explicated. In the trivial example you wrote, the conclusion is assumed, so the argument that it is proven from the propositions (by this program) is falsified.
2Said Achmiz11y
Ah. Yeah, agreed. Of course, enough philosophers disdain computer science entirely that the "arguments" most in need of such treatment would be highly unlikely to receive it. "Argument by handwaving" or "argument by intimidation" is all too common among professional philosophers. The worst part is how awkward it feels to challenge such faux-arguments. "Uh... this... what does this... say? This... doesn't say anything. This... this is actually just a bunch of nonsense. And the parts that aren't nonsense are just... just false. Is this... is this really supposed to be the argument?"
0shminux11y
Hence my insistence on writing it up in a way a computer would understand.
0fractalman11y
That doesn't even pass a quick inspection test for "can do something different when handed different parameters". The original post looks at least as good as:

    int calculate_the_conclusion(const std::vector<std::string>& premises_accepted_by_reader) {
        int result = 0;
        // Count the premises the reader accepts, ignoring what any of them say.
        for (const auto& premise : premises_accepted_by_reader) { result++; }
        return result;
    }

-note the "at least".
-6selylindi11y
1Raemon11y
I don't think there's a subthread about posthumans here yet, which surprises me. Most of the other points I'd think to make have been made by others. Several times you specify that you care about humanity, because you are able to have relationships with humans. A few questions: 1) SaidAchmiz, whose views seem similar to yours, specified they hadn't owned pets. Have you owned pets? While this may vary from person to person, it seems clear to me that people are able to form relationships with dogs, cats, rats, and several other types of mammals (this is consistent with the notion that more-similar animals are able to form relationships with each other, on a sliding scale). I've also recently made a friend with two pet turtles. One of the turtles seems pretty bland and unresponsive, but the other seems incredibly interested in interaction. I expect that some amount of the perceived relationship between my friend and their turtle is human projection, but I've still updated quite a bit on the relative potential-sentience of turtles. (Though my friend's veterinarian did say the turtle is an outlier in terms of how much personality a turtle expresses) 2) You've noted that you don't care about babyeaters. Do you care about potential posthumans who share all values you currently have, but have new values you don't care about one way or another, are vastly more intelligent/empathetic/able-to-form-complex-relationships that you can't understand? Do you expect those humans to care about you? I'm not sure how good an argument it is that "we should care about things dumber than us because we'd want smarter things to care about us", in the context of aliens who might not share our values at all. But it seems at least a little relevant, when specifically concerning the possibility of trans-or-posthumans. 3) To the extent that you are not able to form relationships with other humans (because they are stupider than you, because they are less empathetic, or just because t
2Qiaochu_Yuan11y
I had fish once, but no complicated pets. People are also able to form relationships of this kind with, say, ELIZA or virtual pets in video games or waifus. This is an argument in favor of morally valuing animals, but I think it's a weak one without more detail about the nature of these relationships and how closely they approximate full human relationships. Depends. If they can understand me well enough to have a relationship with me analogous to the relationship an adult human might have with a small child, then sure. I hid a lot of complexity in "in principle." This objection also applies to humans who are in comas, for example, but a person being in a coma or not sharing my interests is a contingent fact, and I don't think contingent facts should affect what beings have moral worth. I can imagine possible worlds reasonably close to the actual one in which a person isn't in a coma or does share my interests, but I can't imagine possible worlds reasonably close to the actual one in which a fish is complicated enough for me to have a meaningful relationship with.

An important question is whether there is a net loss or gain of sentient life by avoiding eating meat. Or, if there is a substitution between different sentient life-forms, is there a net gain to quality of life?

  1. Do we know where the biomass that currently goes into farmed animals would end up if we stopped using farmed animals? Would it go into humans, or into vehicles (biofuels) or into wildlife via land taken out of agricultural production?

  2. Should we assume that farmed animals have a negative quality of life (so that in utilitarian terms, the world would be better off if they never existed)?

8Raemon11y
I object to this as the general metric for "should a life be brought into existence?" (I'm something approximating an average utilitarian. To the extent that I'm a total utilitarian, I think Eliezer's post about Lives Worth Celebrating is relevant.) Also, less controversially, I'd like to note that factory-farmed animals really don't have much opportunity to end their own lives even if they wanted to.

For that matter, even if they did have the opportunity, livestock species may not have the abstract reasoning abilities to recognize that suicide is even a possible thing.

Pigs might have the intelligence for that, but for cows and chickens, I doubt it. It's not like suicide is an evolutionarily favorable adaptation, it's a product of abstract reasoning about death that most animals are not likely to be capable of.

3Lukas_Gloor11y
Good points, but I suspect they are dominated by another part of the calculation: In the future, with advanced technology, we might be able to seed life on other planets or even simulate ecosystems. By getting people now to care about suffering in nonhumans, we make it more likely that future generations care for them as well. And antispeciesism also seems closely related to anti-substratism (e.g. caring about the simulation of humans, even though they're not carbon-based). If you are the sort of person that cares about all sorts of suffering, raising antispeciesist awareness might be very positive for far future-related reasons, regardless of whether the direct (short-term) impact is actually positive, neutral, or even slightly negative.
3drnickbone11y
The other long-term consideration is that whatever we do to animals, AIs may well do to us. We don't want future AIs raising us in cramped cages, purely for their own amusement, on the grounds that their utility is much more important than ours. But we also don't want them to exterminate us on "compassionate" grounds. (Those poor humans, why let them suffer so? Let's replace them by a few more happy, wire-heading AIs like us!)
1Lukas_Gloor11y
Don't many/most people here want there to be posthumans, which may well cross the species-barrier? I don't think there is an "essence of humanity" that carries over from humans to posthumans by virtue of descendance, so that case seems somewhat analogous to the wireheading AIs case already. And whether the AI would do wireheading or keep intact a preference architecture depends on what we/it values. If we do value complex preferences, and if we want to have many beings in the world that have them mostly fulfilled, I'd assume there would be more awesome or more effective ways of design than current humans. However, if this view implies that killing is bad because it violates preferences, then replacement would, to some extent, be a bad thing and the AI might not do it.
0Jiro11y
That argument would seem to apply to plants or even to non-intelligent machines as well as to animals, unless you include a missing premise stating that AI/human interaction is similar to human/animal interaction in a way that 1) human/plant or human/washing machine interaction is not, and 2) is relevant. Any such missing premise would basically be an entire argument for vegetarianism already--the "in comparison to AIs" part of the argument is an insubstantial gloss on it. Furthermore, why would you expect what we do to constrain what AIs do anyway? I'd sooner expect that AIs would do things to us based on their own reasons regardless of what we do to other targets.
0freeze9y
Perhaps this is true if the AI is supremely intelligent, but if the AI is only an order of magnitude more intelligent than us, or better by some other metric, the way we treat animals could be significant. More relevantly, if an AI is learning anything at all about morality from us or from the people programming it, I think it is extremely wise that the relevant individuals involved be vegan for these reasons (better safe than sorry). Essentially I argue that there is a very significant chance the way we treat other animals could be relevant to how an AI treats us (better treatment corresponding to better later outcomes for us).
0Jiro9y
"Other animals" is a gerrymandered reference class. Why would the AI specifically care about how we treat "other animals", as opposed to "other biological entities", "other multicellular beings", or "other beings who can do mathematics"?
0freeze9y
Because other animals are also sentient beings capable of feeling pain. Other multicellular beings aren't in general.
0Jiro9y
That's the kind of thing I was objecting to. "'Other animals' are capable of feeling pain" is an independent argument for vegetarianism. Adding the AI to the argument doesn't really get you anything, since the AI shouldn't care about it unless it was useful as an argument for vegetarianism without the AI. It's also still a gerrymandered reference class. "The AI cares about how we treat other beings that feel pain" is just as arbitrary as "the AI cares about how we treat 'other animals'"--by explaining the latter in terms of the former, you're just explaining one arbitrary category by pointing out that it fits into another arbitrary category. Why doesn't the AI care about how we treat all beings who can do mathematics (or are capable of being taught mathematics), or how we treat all beings at least as smart as ourselves, or how we treat all beings that are at least 1/3 the intelligence of ourselves, or even how we treat all mammals or all machines or all lesser AIs?
0Lumifer9y
Heh. Have you been nice to your smartphone today? Treat your laptop with sufficient respect? DID YOU EVER LET YOUR TAMAGOTCHI DIE?
-1freeze9y
Perhaps it should. Being vegan covers all these bases except machines/AIs, which arguably (including by me) also ought to hold some non-negligible moral weight.
0Jiro9y
The question is really "why does the AI have that exact limit". Phrased in terms of classes, it's "why does the AI have that specific class"; having another class that includes it doesn't count, since it doesn't have the same limit.
0freeze9y
After significant reflection, what I'm trying to say is that I think it is obvious that non-human animals experience suffering and that this suffering carries moral weight (we would call most modern conditions torture and other related words if the methods were applied to humans). Furthermore, there are a lot of edge cases of humanity where people can't learn mathematics or otherwise are substantially less smart than non-human animals (the young, if future potential doesn't matter that much; or the very old, mentally disabled, people in comas, etc.). I would prefer to live in a world where an AI thinks beings that do suffer but aren't necessarily sufficiently smart matter in general. I would also rather the people designing said AIs agree with this.
0Jiro9y
But the original argument is that we shouldn't eat animals because AIs would treat us like we treat animals. That argument implies an AI whose ethical system can't be specified or controlled in detail, so we have to worry how the AI would treat us. If you have enough control over the ethics used by the AI that you can design the AI to care about suffering, then this argument doesn't show a real problem--if you could program the AI to care about suffering, surely you could just program it to directly care about humans. Then we could eat as many animals as we want and the AI still wouldn't use that as a basis to mistreat us.
0freeze8y
Yes, I guess I was operating under the assumption that we would not be able to constrain the ethics of a sufficiently advanced AI at all by simple programming methods. Though I've spent an extraordinarily large amount of time lurking on this and similar sites, upon reflection I'm probably not the best poised person to carry out a debate about the hypothetical values of an AI as depending on ours. And indeed this would not be my primary justification for avoiding nonhuman suffering. I still think its avoidance is an incredibly important and effective meme to propagate culturally.
0Lumifer9y
Go start recruiting Jains as AI researchers... X-/
0freeze9y
I don't see why. Jainism is far from the only philosophy associated with veganism.
0Lumifer9y
Jainism has a remarkably wide concept of creatures not to be harmed (e.g. specifically including insects). I don't see why are you so focused on the diet.
0freeze9y
Vegans as a general category don't unnecessarily harm and certainly don't eat insects either. I'm not just focused on the diet actually. Come to think of it, what are we even arguing about at this point? I didn't understand your emoticon there and got thrown off by it.
0Lumifer9y
I'm yet to meet a first-world vegan who would look benevolently at a mosquito sucking blood out of her. I don't think we're arguing at all. That, of course, doesn't mean that we agree. The emoticon hinted that I wasn't entirely serious.
0MugaSofer11y
This rather assumes we're striving for as many lives as possible, does it not? I mean, that's a defensible position, but I don't think it should be assumed.
-1seanwelsh7711y
A difficulty of utilitarianism is the question of felicific exchange rates. If you cast morality as a utility function then you are obliged to come up with answers to bizarre hypothetical questions like how many ice-creams the life of your firstborn is worth, because you have defined the right in terms of maximized utility. If you cast morality as a dispute avoidance mechanism between social agents possessed with power and desire then you are less likely to end up in this kind of dead end, but the price of this casting is the recognition that different agents will have different values and that objectivity of morals is not always possible.
0drnickbone11y
Agreed, but the OP was talking about "effective altruism", rather than about "effective morality" in general. It's difficult to talk about altruism at all except within some sort of consequentialist framework. And while there is no simple way of comparing goods, consideration of "effective" altruism (how much good can I do for a relatively small amount of money?) does force us to look at and make very difficult tradeoffs between different goods. Incidentally, I generally subscribe to rule consequentialism though without any simple utility function, and for much the same reasons you discuss. Avoiding vicious disputes between social agents with different values is, as I understand it, one of the "good things" that a system of moral rules needs to achieve.
-5seanwelsh7711y

Hang on, aren't you valuing the non-existence of an animal as 0 and the existence of a farm animal as some negative number per unit time?

Doesn't that imply that someone who kills farm animals, or prevents their existence in the first place is an altruist?

And what about wild animals, which presumably suffer more than farm animals? Should an altruist try to destroy them too?

Is your ideal final society just humans, plants and pets? I'd be quite unhappy in such a world, I imagine, so do I get it in the neck too?

-1Peter Wildeford11y
Yes. ~ Only if they kill the farm animals painlessly and only if there aren't any other problems. For example, I don't think strategies like bombing factory farms or sneaking in and killing all their livestock would be net positive. However, if a factory farm owner were to shut down the farm and order a painless slaughter of all the animals, that would be good. ~ Yes. I suspect vegetarians make an impact by doing that. ~ At this moment, it seems unclear. Wild animals are definitely a problem. I don't think they suffer more than farm animals, but they might. I'm not sure what the best intervention strategy is, but it's clear that some kind of strategy is needed, both in the short-run and long-run. ~ Not necessarily. ~ Of course not.

At this moment, it seems unclear. Wild animals are definitely a problem. I don't think they suffer more than farm animals, but they might. I'm not sure what the best intervention strategy is, but it's clear that some kind of strategy is needed, both in the short-run and long-run.

I've heard a considerable number of people on this site echo the position that wild animals suffer so much their existence must be a net negative. This strikes me as awfully unlikely; they live in the situations they're adapted to, and have the hedonic treadmill principle going for them as well. You can observe at a zoo how many animals can become neurotic when they're removed from the sorts of circumstances they're accustomed to in the wild, but all their physical needs are accounted for.

Animals are adapted to be reproductively successful in their environments, not to be maximally happy, but considering the effects constant stress can have on the fitness of animals as well as humans, it would be quite maladaptive for them to be unhappy nearly all the time.

3Jabberslythe11y
For animals that are r-selected or, in other words, have many offspring in the hopes that some will survive, the vast majority of the offspring die very quickly. Most species of fish, amphibians, and many less complex animals do this. 99.9% of them dying before reaching adulthood might be a good approximation for some species. A painful death doesn't seem worth a brief life as a wild animal. It's true that most people wouldn't be functioning optimally if they were not somewhat happy, and extrapolating this to other animals who seem to be similar to us in basic emotion, I would agree that an adult wild animal seems like it would live an alright life.
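As a toy illustration of that trade-off, here is a sketch of the expected utility of being born into such a species. Only the 99.9% juvenile mortality figure comes from the comment above; the utility weights are entirely hypothetical placeholders.

    # Toy expected-utility sketch for an r-selected species. The survival
    # rate follows the comment above; the utility weights are hypothetical.
    p_adult = 0.001               # ~99.9% die before reaching adulthood
    u_adult_life = 100.0          # hypothetical: an alright adult life
    u_juvenile_death = -1.0       # hypothetical: a brief life, painful death

    expected_u = p_adult * u_adult_life + (1 - p_adult) * u_juvenile_death
    print(round(expected_u, 3))   # -0.899: net negative under these weights

Under these placeholder weights, the average life comes out negative even though adult life is assumed to be strongly positive, which is the shape of the argument being made.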
4Desrtopa11y
Juveniles of r-selected species tend to have so little neurological development that I think their capacity for experience is probably pretty minimal in any case.
-1Peter Wildeford11y
I tend to agree. But there's also an awful lot of predation, disease, and starvation in wild habitats. I recommend reading Brian Tomasik's "The Importance of Wild-Animal Suffering". Whether the sum of all of this adds up to net negative lives is something I'm unsure about.
3johnlawrenceaspden11y
Crikey, full marks for honesty! I've never seen the position put quite so starkly before. It sounds a bit like 'the crime is life, the sentence is death'. I don't see why you wouldn't want me dead, since I'd loathe a world without the wild, and would probably be unhappy. Certainly I would die to prevent it if I could see a way to. In fact I think I'd sacrifice my own life to save a single (likeable) mammal species if I could. But that's probably too much an emotional response to discuss rationally. And what about the vegan argument that you could feed four times as many people if we were all vegans? Would you consider a world of 28 billion people living on rice an improvement? When you say 'Not necessarily', should I take that to mean 'just humans and plants, actually', or 'just humans and yeast', or have I taken that the wrong way? If we could wirehead the farm animals, would you become an enthusiastic meat-eater?
1Peter Wildeford11y
That's a very misleading way of putting it. The situation is one of dire, unending, inescapable torture for all of life. How would death, or better yet nonexistence, not be preferable? ~ I'd speculate you wouldn't actually be suicidal in a world without the wild. Furthermore, I certainly wouldn't want you killed just because you're unhappy, because that's reversible. And even if it weren't, I think a policy of killing people for being unhappy would have tremendously bad short-run and long-run consequences. Also, I don't think elimination of the wild is the only option. Mass welfare plans are potentially feasible. We could eliminate the wild and replicate it with holograms or robots that don't feel pain. Forcing animals to suffer just so you can have a beautiful wild doesn't sound moral to me. And it's possible that a number of species actually live net positive lives already. Lastly, none of my outside-the-mainstream positions on wildlife need distract from the very real problem of factory farming. I think that case should be dealt with first. ~ Why? If you care about their existence, why don't you also care about their welfare? ~ I'm unsure (no position one way or the other yet) on the accuracy of that argument. ~ It depends on a lot of other factors. More people living good lives seems like an improvement to me, all else being equal. I think it would be worth giving up richness and variety in food in order to facilitate this, though obviously that one aspect would be regrettable. Why do you ask? What are you getting at? ~ You've taken it the wrong way. You asked if my "ideal final society" includes "just humans, plants and pets". I think there's a strong possibility it can include more than that (i.e. wild animals, robots, etc.). My ideal final society would be some sort of transhumanist utopia, I think. ~ I'm currently unsure because I don't understand accurately the nature of wireheading. But if one could hypothetically remove all suffering from
7johnlawrenceaspden11y
Are you sure about this? The lives of our medieval ancestors seem unendurably horrifying to me, and yet many of those people exhibited strong desires to live. All wild animals exhibit strong desires to live. Why not take them at their word?
4 · johnlawrenceaspden · 11y
I think I care about both, but don't ask me where my desires come from. Some weird evolution-thing combined with all the experiences of my life and some randomness, most prob'ly.
3 · johnlawrenceaspden · 11y
I could not agree more! But it does sound like we have very different ideas about what 'dealing with it' means. I'd like all farms to be like the farm I grew up next to. I was much more of an animal lover as a child than I am now, but even then I thought that the animals next door seemed happy. Ironically I used to worry about the morality of killing them for food, but it never occurred to me that their lives were so bad that they should be killed and then not eaten.
0 · Peter Wildeford · 11y

I mean, I'd be fine with that. Rather, my point is not just that they shouldn't be killed for food; it's that they shouldn't be tortured for food either.
2 · CCC · 11y

If a non-human animal is unhappy, you would prefer it to be painlessly killed. If a human is unhappy, you would prefer it not to be painlessly killed. Am I mis-stating something here? If not, could you please explain the difference?

As I understand the concept, it involves connecting a wire to the animal's brain in such a way that it always experiences euphoric pleasure (and presumably disconnecting the parts of the brain that experience suffering).
5 · Peter Wildeford · 11y
Humans (and potentially some nonhumans like dolphins and apes) are special in that they have forward-looking desires, including an enduring desire to not die. I don't want to trample on these desires, so I'd only want the human killed with their consent (though some exceptions might apply). Nonhuman animals without these forward-looking desires aren't harmed by death, and thus I'm fine with them being killed, provided it realizes a net benefit. (And making a meal more delicious is not a net benefit.)
2 · johnlawrenceaspden · 11y

Why not? (Blah, blah, googolplex of spectacular meals vs. death of a TB bacillus, blah.)
1 · johnlawrenceaspden · 11y

This is interesting. Even though I usually love life minute to minute, and think I am one of the happiest people I know, I don't have a strong desire to be alive in a year's time, or even tomorrow morning. And yet I constantly act to prevent my death and I fully intend to be frozen, 'just in case'. This seems completely incoherent to me, and I notice that I am confused. Wild animals go to some lengths to prolong their lives. Whether they are mistaken about the value of their lives or not, what is the difference between them and me?

P.S. I'm not winding you up here. In the context of a discussion about cryonics, ciphergoth found the above literally unbelievable and recommended I seek medical help! After that I introspected a lot. After a year or so of reflection, I'm as sure as I can be that it's true.
2 · Morendil · 11y
If you did have such a desire, how do you suppose it might manifest?
2 · johnlawrenceaspden · 11y
Very similarly to my actual behaviour of course. As I say, I notice that I am confused. But if you're saying that my behaviour implies that I feel the desire that I don't perceive feeling, then surely we can apply the same reasoning to animals. They clearly want to continue their own lives.
0 · [anonymous] · 11y
Okay, well, what would such a strong desire feel like, do you think? I take it you say you have an absence of such a desire because something is lacking where you expect it should be if you had the desire. What is that?
2 · johnlawrenceaspden · 11y

Yes, I feel I know what it is to want something. I'm very good at wanting e.g. alcohol, cigarettes, food, intellectual satisfaction, and glory on the cricket field. And I don't feel that sort of desire towards 'future existence'.

I mean, I think that if I was told tomorrow that I had terminal cancer, I'd just calmly start making preparations for a cryonics-friendly suicide, and not worry about it too much. Even though I think that the chances of cryonics actually working are minute. Whereas I'm pretty sure that if I get out for a duck in tomorrow's cricket match, I'll feel utterly wretched for at least half an hour, even though it won't matter in the slightest in the grander scheme of things. And yet, were someone to offer me the choice of 'duck or death', of course I'd take the duck.

It's really weird. I feel like I somehow fail to identify with my possible future selves over more than about a week or so. I've tried most vices and not worried about the consequences much. And yet I never did do myself serious harm, and a few years ago I stopped riding motorcycles because I got scared. It's as though someone who is not me is taking a lot of my decisions for me, and he's more cautious and more long-termist than me.
4 · Morendil · 11y

It sounds as if you use the word "desire" in two different senses - concrete, gut-level craving on the one hand, vs abstract, making-plans recognition of long-term value on the other hand. That doesn't sound so unusual - I don't, for instance, feel a burning desire to be alive tomorrow - most of the time. I'm pretty sure that if someone had a gun on me and demanded I hand over my last jar of fig jam, that desire would suddenly develop. But in general, I'm confident anyway that I'll still be here tomorrow.

Hypothesis: desire is usually abstract, in particular when the object of desire is a given, but becomes a feeling when that object is denied or about to be denied. (I'm rather doubtful that most animals experience "desires" that conform to this dynamic.)
1 · [anonymous] · 11y

Well, it makes sense to me that future time can't really be an object of desire all on its lonesome. People have spent time trying to work out what is being feared when we fear death, or what is being desired when we desire to live longer. A very common strategy is to say that what we fear is the loss of future goods, or the cancellation of present projects, and what we desire are future goods or the completion of present projects. So in a sense, I think I'm right there with you in wanting (in some kind of preference-ordering way) to live longer, but without having any real phenomenal desire to live longer.
0 · CCC · 11y
Ah, thank you. That explains it quite neatly. I imagine that, ideally, there would be some sort of behavioural test for such forward-looking desires that could be administered; otherwise, I'm not sure that they could be reasonably claimed to be absent.
1 · johnlawrenceaspden · 11y

I'm trying to see where your morality is coming from. It looks like 'assign a real value to every (multicellular) living creature according to how much fun it's having, add all the values up, and bigger is better'.

Whereas I greatly prefer 'A few people living in luxury in a beautiful vast wilderness' to 'Countless millions living on rice in a world where everything you see is a human creation'. I don't have a theory to explain why. I just do. I'm sure that that's my evolved animal nature speaking about 'where is the best place to set up home'. And probably I'm Dutch-bookable, and maybe by your lights I'm evil.

But it seems odd to try to come up with new desires according to a theory. I'd rather go with the desires I've already got.
-1 · Peter Wildeford · 11y
That sounds about right. Obviously, so long as we have different terminal values, our conclusions will be different.
-4 · johnlawrenceaspden · 11y
All suffering? Even, say, the chance of the farmer getting a torn nail? Why such high standards in this case?
0 · Peter Wildeford · 11y

The more suffering that could be removed, the better, but eventually you'll hit a point where removing more suffering is no longer feasible or worth focusing on, because there will be suffering easier to remove elsewhere. Really, what I'm looking for is the point where the net suffering to produce the food is equal to or less than the net benefit the production of the food provides.
1 · johnlawrenceaspden · 11y

Voting up, by the way. Very thought-provoking. I have clever vegan friends I must discuss this with.

As far as improving the world through behavioral changes goes, advertising e-cigarettes is probably much more cost-effective than advertising vegetarianism. You could even target it to smokers (either through statistics and social information, or just by grabbing low-income people in general and restaurant, fast food, and retail workers in particular).

-2 · Peter Wildeford · 11y
Not that I necessarily doubt you, but what makes you think that?
4 · ThrustVectoring · 11y

What hurts smokers isn't nicotine exactly, it's all the other stuff that gets into their lungs when they burn tobacco. A big part of why quitting smoking is hard is because nicotine helps form habits - specifically, the habit of getting out a cigarette, lighting it, and inhaling. E-cigarettes push the same habit buttons as tobacco cigarettes, so it's much easier for smokers to go tobacco-free and vastly improve their health and quality of life by switching over to inhaling the vapors of mixes of nicotine, propylene glycol, and flavorings.
2 · RyanCarey · 11y

Not that I doubt you either, but what makes you think it's cost-effective?
0 · ThrustVectoring · 11y

Ah, misunderstood your question. It's more on the benefit side of things - the effectiveness of the ads is within an order of magnitude, but you get human QALYs instead of preventing cruelty to chickens.
-2 · Peter Wildeford · 11y

What RyanCarey said. I understand the principle behind e-cigarettes and support them, but I'm not yet convinced that advocating for them would produce more net welfare improvement per dollar than advocating for people to eat less meat.
4 · ThrustVectoring · 11y

It depends on the relative effectiveness of ads and the conversion ratio you're willing to accept between human and animal suffering. So my statement reduces to something more like 'I don't think chicken suffering is important'. I don't think that some animals are capable of suffering, but I can't think of how to make my point without talking about animal suffering. I mean, how many rocks would you be willing to break for a QALY? That's about how many chickens I would be willing to kill.
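The tradeoff described here can be made concrete: the two ad campaigns break even exactly when the ratio of their costs per unit of benefit equals the moral weight you assign an animal-year of suffering relative to a human QALY. A minimal sketch in Python, where both dollar figures are invented placeholders rather than real estimates:

```python
# Breakeven between an e-cigarette ad campaign (buys human QALYs) and a
# vegetarian ad campaign (averts animal-years of suffering).
# All dollar figures below are hypothetical placeholders, not real estimates.

def breakeven_weight(cost_per_qaly: float, cost_per_animal_year: float) -> float:
    """Return the QALYs-per-animal-year weight at which both campaigns are
    equally cost-effective: value an animal-year above this and the
    vegetarian campaign wins; below it, the e-cigarette campaign wins."""
    return cost_per_animal_year / cost_per_qaly

ECIG_COST_PER_QALY = 1000.0       # $ per human QALY bought (placeholder)
VEG_COST_PER_ANIMAL_YEAR = 10.0   # $ per animal-year of suffering averted (placeholder)

w = breakeven_weight(ECIG_COST_PER_QALY, VEG_COST_PER_ANIMAL_YEAR)
print(f"Breakeven weight: 1 animal-year of suffering = {w:.3f} human QALYs")
```

With these placeholder numbers, anyone who weights an animal-year of suffering at more than 0.01 QALYs should prefer the vegetarian ads, and vice versa - the disagreement in this thread is over the weight, not the arithmetic.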
-2 · Raemon · 11y
I mean... that's a theoretically coherent statement, but isolating "e-cigarettes" as a thing to talk about instead of just saying "I don't value chickens" seems odd. What is it about humans you value? Do you value humans with extreme retardation, or a hypothetical inability to form relationships?
-4 · Jabberslythe · 11y

Most people believe that chickens suffer. They seem to have all the right parts of the brain and the indicative behaviors and everything. What's your theory that says that humans do but chickens don't?
2 · [anonymous] · 11y
Thrust said he didn't care about chickens suffering, not that they don't. One question that doesn't seem to get asked in these discussions is, if chickens have this certain mental machinery doing certain things when I hurt them, why should I care, given that I don't already? Is there a sequence of value comparisons showing that such a non-preference is incoherent? Or a moral argument that I am not considering? If not, I'd rather just follow my actual preferences.
0 · Jabberslythe · 11y

From what Thrust has said, I think it's ambiguous whether he thinks animals can't suffer and doesn't care about them for that reason, or whether he just doesn't care about animal suffering, as you describe. Or, more likely, he is in some middle state.

As to your second point, yes, that's the approach, and that seems to be largely what happens when it comes up in the discussion here.
0 · ThrustVectoring · 11y
It's kind of both. If a chicken is in pain, that doesn't bother me that much. Also, I don't think that chickens have the mental apparatus necessary to suffer like people can suffer.
0 · ThrustVectoring · 11y
People tend to read a lot more into behavior than is really there. I mean, ants run away when you slam your fist down on the counter next to them, and it sure looks like they're scared, but that's more a statement about your mind than the ants'. I mean, chickens are largely still functional without a head. Yes, there's something going on in a chicken's brain. There isn't anything worth celebrating going on in there, though.
6 · KatieHartman · 11y

For the record, the chicken that survived had retained most of the brainstem. He was able to walk ("clumsily") and attempted some reflexive behaviors, but he was hardly "functional" to anyone who knows enough about chickens to assume that they do more than walk and occasionally lunge at the ground.

The chicken's ability to survive with only the brainstem isn't shocking. Anencephalic babies can sometimes breathe, eat, cry, and reflexively "respond" to external stimuli. One survived for two and a half years. This was a rare case, but so was the chicken - there were other attempts to keep decapitated chickens alive, and none were successful.

This isn't to say that we don't have a tendency to anthropomorphize animals or treat reflexive behaviors as meaningful - we do. But pointing that out isn't where the conversation ends. Chickens are an easy target because common knowledge dictates that they're stupid animals, because most people haven't spent any substantial amount of time with them and assume there isn't anything particularly interesting about their behavior, and because we have a vested interest in believing that there's nothing of value going on in their brains.

It would have been better, I think, to submit an argument for veganism (or vegetarianism) for scrutiny here first. Then an argument about the best way to promote it. As it stands, the two issues are confused.

My own view is that, for me, the productivity hit and adverse health impact outweigh the benefits (a vegan diet contributed to the loss of sight in my left eye, among other things).

If we stop eating meat, these animals will not thereafter frolic gaily in the meadow. They will not exist at all. The merits of veganism make for a big enough topic on their own...

My personal reason for pursuing vegetarianism (and ultimately veganism) is simple: I want the result of me having existed, as compared to an alternative universe where I did not exist, to be less overall suffering in the world. If I eat meat for my whole life, I'll already have contributed to the creation of such a vast amount of suffering that it will be very hard to do anything that will reliably catch up with that. Each day of my life, I'll be racking up more "suffering debt" to pay off, and I'd rather not have my mere existence contribute to adding more suffering.

6 · Kawoomba · 11y
That's probably the abridged version, because if that were the actual goal, a doomsday machine would do the trick.
0 · A1987dM · 11y
If you count pleasure as negative suffering...
0 · Kaj_Sotala · 11y
Yes.
0 · Kawoomba · 11y
Do you have a fleshed-out version formulated somewhere? *tries to hide iron fireplace poker behind his back*
1 · Kaj_Sotala · 11y

No. The "fleshed-out version" is rather complex, incomplete, and constantly changing, as it's effectively the current compromise that's been forged between the negative utilitarian, positive utilitarian, deontological, and purely egoist factions within my brain. It has plenty of inconsistencies, but I resolve those on a case-by-case basis as I encounter them.

I don't have a good answer to the doomsday machine, because I currently don't expect to encounter a situation where my actions would have considerable influence on the creation of a doomsday machine, so I haven't needed to resolve that particular inconsistency. Of course, there is the question of x-risk mitigation work and the fact that e.g. my work for MIRI might reduce the risk of a doomsday machine, so I have been forced to somewhat consider the question.

My negative utilitarian faction would consider it a good thing if all life on Earth were eradicated, with the other factions strongly disagreeing. The current compromise balance is based around the suspicion that most kinds of x-risk would probably lead to massive suffering in the form of an immense death toll and then a gradual reconstruction that would eventually bring Earth's population back to its current levels, rather than all life on the planet going extinct. (Even for AI/Singularity scenarios there is great uncertainty and a non-trivial possibility of such an outcome.) All my brain-factions agree on this being a Seriously Bad scenario, so there is currently an agreement that work aimed at reducing the likelihood of this scenario is good, even if it indirectly influences the probability of an "everyone dies" scenario in one way or another.

The compromise is only possible because we are currently very unsure of what would have a very strong effect on the probability of an "everyone dies" scenario. I am unsure of what would happen if we had good evidence of it really being possible to strongly increase or decrease the probability of an "everyone dies" scenario.
5 · Vladimir_Nesov · 11y

This seems like an arbitrary distinction. The value relevant to your ongoing decisions is in the opportunity cost of the decisions (and you know that). Why take the popular sentiment seriously, or even merely indulge yourself in it, when it's known to be wrong?
3 · Kaj_Sotala · 11y
It is indeed wrong, but it seems to mostly produce the same recommendations as framing the issue in terms of opportunity costs while being more motivating. "Shifting to vegetarianism has a high expected suffering reduction" doesn't compel action in nearly the same way as "I'm currently racking up a suffering debt every day of my life" does.
3 · MTGandP · 11y
Actually, it's pretty easy: just donate enough money to organizations like Vegan Outreach such that you're confident that you have caused the creation of a new vegetarian/vegan.
8 · Peter Wildeford · 11y

Perhaps I'm a bad advocate, but I don't think there is an "argument" for veganism/vegetarianism, outside what you would see in the pamphlets, videos, or "Why Eat Less Meat?" linked within. I suppose I could upload my "Why Eat Less Meat" piece? Another problem I'm having is that there are like sixty million objections that someone might raise against veganism/vegetarianism, and it would be impossible to answer them all.

I'm not going to be a lecturer on vegan health or say you "did it wrong", but the eye thing definitely strikes me as an atypical result. I'm doing a vegetarian diet right now with no health or productivity demerits.

Of that, I'm obviously aware. I count that as suffering reduced.

It's potentially a priority issue if it can be accomplished so cheaply; hence the cost-effectiveness estimate. I wasn't even here to argue that veganism was a global priority. Right now, I think at best it would be in the "top five". Even if this essay were read as an advocacy piece instead of an evaluation piece, it's advocating for philanthropy toward vegetarianism rather than vegetarianism itself.
1 · Said Achmiz · 11y

I have to agree with waveman that we should establish that vegetarianism is a worthwhile cause before we devote LW posts to figuring out how best to promote it. We could, in theory, investigate how best to promote all sorts of things, but let's not actually advocate promoting arbitrary values or ideologies that may or may not be good ideas. Doing so seems like a straightforward way of wasting our time and doing actual harm (by, among other things, creating the impression that the cause in question has been accepted by the LW community as being worthwhile).

(i.e. "What is the best way to get out the word about cheese-only diets?" implicates that we've already determined cheese-only diets to be not only a good idea, but worth actively advocating.)

It seems nonsensical to view advocacy for philanthropy toward vegetarianism as different from advocacy for vegetarianism itself, if you take the view (as you seem to do) that vegetarianism is a moral issue.
6 · Peter Wildeford · 11y

I don't know how to establish it as a worthwhile cause to those who don't already value nonhuman animals, so I skipped that step. For those who do already value nonhuman animals, though, I had hoped this essay was such an evaluation, given that it is a cost-effectiveness estimate and evidence survey. It's not a comparison of advocacy efforts, since no other advocacy efforts are considered.

That's true. I suppose one could consider advocating vegetarianism without personally becoming vegetarian, though that would be somewhat hypocritical.
2 · Said Achmiz · 11y

I do sympathize with the difficulty of persuading someone with whom you do not share the relevant values, but I'm afraid I can't help but object to "this part of the argument is hard, so I skipped it".

Changing values is not impossible. I don't think valuing nonhuman animals is a terminal value; the terminal value in question probably looks something more like "valuing the experiences of minds that are capable of conscious suffering" or something to that general effect. (That is, if we insist on tracing this preference to a value per se, rather than assuming that it's just signaling or somesuch.) And most people here do, I think, place at least some importance on reflective equilibrium, which is a force for value change.

The problem I have with your approach (and I hope you'll forgive me for this continued criticism of what is, to be truthful, a fairly interesting post) is that it's a nigh-fully-general justification for advocating arbitrary things, like so: "Here is an analysis of how to most cost-effectively promote the eating of babies. I don't know how to establish baby-eating as a worthwhile cause for people who don't already think that eating babies is a good idea, so I skipped that step." Ditto " ... saving cute kittens from rare diseases ...", ditto " ... reducing the incidence of premarital sex ...", ditto pretty much anything ever.

What I would be curious to see is whether the LW populace perhaps already thinks that vegetarianism is a settled question. If so, my objections might be misplaced. Was this covered in one of the surveys? Hmm... Edit: Aha.
8 · davidpearce · 11y

SaidAchmiz, I wonder if a more revealing question would be to ask: if/when in vitro meat products of equivalent taste and price hit the market, will you switch? LessWrong readers tend not to be technophobes, so I assume the majority(?) of LessWrongers who are not already vegetarian will make the transition. However, you say above that you are "not interested in reducing the suffering of animals". Do you mean that you are literally indifferent one way or the other to nonhuman animal suffering - in which case presumably you won't bother changing to the cruelty-free alternative? Or do you mean merely that you don't consider nonhuman animal suffering important?
-1 · Said Achmiz · 11y

In (current) practice those are the same, as you realize, I'm sure. My attitude is closest to something like "no amount of animal suffering adds up to any amount of human suffering", or more generally "no amount of utility to animals [to the extent that the concept of utility to a non-sapient being is coherent] adds up to any amount of utility to humans". However, note that I am skeptical of the concept of consistent aggregation of utility across individuals in general (and thus of utilitarian ethical theories, though I endorse consequentialism), so adjust your appraisal of my views accordingly.

In vitro meat products could change that; that is, the existence of in vitro meat would make the two views you listed meaningfully different in practice, as you suggest. If in vitro meat cost no more than regular meat, and tasted no worse, and had no worse health consequences, and in general if there was no downside for me to switch...

... well, in that case, I would switch, with the caveat that "switch" is not exactly the right term; I simply would not care whether the meat I bought were IV or non, making my purchasing decisions based on price, taste, and all those other mundane factors by means of which people typically make their food purchasing decisions.

I guess that's a long-winded way of saying that no, I wouldn't switch exclusively to IV meat if doing so cost me anything.

I start with the claim that it's good for people to eat less meat, whether they become vegetarian -- or, better yet, vegan -- because this means less nonhuman animals are being painfully factory farmed.

If your reason for vegetarianism is mainly prevention of animal suffering, shouldn't you be concentrating on ethical farming? Or are you against raising a happy cow and painlessly killing it some time later?

If you value the welfare of nonhuman animals from a consequentialist perspective

if you value happy animals, then you ought to value happy farm animals...

9 · Peter Wildeford · 11y

I don't think so. I wouldn't be against happy cows with painless deaths, but I think achieving that outcome, especially via the advocacy available to me, is very unlikely.

I don't understand. This assumes there are happy farm animals. If any farm animals are happy, they're certainly in the extreme minority.
-2 · Said Achmiz · 11y

It's not clear to me that there are happy animals at all, for some species. Are there happy chickens? Happy cows? Where? (Can chickens or cows even be "happy" in the sense we understand happiness?) Or is the conclusion that since the existence of these animals can only result in suffering, the outcome where farm animals stop existing is desirable?
5 · Peter Wildeford · 11y

I'm unsure if there are happy animals at all. Wild animal suffering also sounds pretty bad. But, at least for factory farmed animals, I agree that "the existence of these animals can only result in suffering, the outcome where farm animals stop existing is desirable".
2 · Said Achmiz · 11y

Yeah, wild animal suffering is the other thing I was thinking about.

Anyway, that conclusion sounds pretty reasonable (given caring about animal suffering in the first place)... except that it seems to lead to wanting the entire animal kingdom to stop existing (or most of it, anyway). I'm not sure that's a reductio ad absurdum, or if it is, what it's a reductio of, exactly (caring about animal suffering? caring about suffering in general? utilitarianism?!), but it should at least give us pause. I don't think this is a bullet I would bite.

For what it's worth, given that I do care about humans, and given that some humans seem to be very bothered by the suffering of animals, I would certainly value the reduction of animal suffering for the purpose of making people feel better — although I don't care about this enough to willingly incur significant personal or societal costs in the bargain. So, for instance, if in vitro meat became available, it tasted the same, cost no more (or only a little more), and made a lot of people feel better, that would, for me, be an important thing to consider.

But I think I value the existence of animal species, and ecologies, for their own sake. I'm not sure how to describe this; scientific curiosity? Valuing biological diversity? In any case, I think that, all else being equal, the extinction of entire kinds of creatures would be a sad outcome.

(Although I can see a logical-extreme sort of counterargument: what if we create a new species explicitly for the purposes of easy torturability, and then torture them? They've been created from whole cloth simply to give us something to inflict pain on! Should we mourn their extinction? These hypothetical victimcows might be compared to actual cows in relevant ways. Of course, this argument does not work in the case of wild animal species.)
1 · Peter Wildeford · 11y
I'm not sure that has to be the case. One could aim to provide adequate welfare for the entire animal kingdom, though that would require significant resources. Similarly, I think some human lives aren't worth living, but I don't think the proper response is genocide.
0 · Said Achmiz · 11y

I was merely extrapolating from what you said. Or do you think there are relevant differences between wild animals and domesticated ones, such that we could provide welfare, as it were, for wild animals (without them having to hunt/kill anything, I surmise is the implication), but not for domesticated ones? I mean, both of those scenarios are light-years away from feasibility, so I can only assume we're talking about some in-principle difference. Are we?
0 · Peter Wildeford · 11y

I think there is a fundamental difference between wild animals and factory farmed animals -- if factory farming were to stop, there would no longer be any factory farmed animals. They are created specifically for that purpose. One can't provide welfare for factory farmed animals without stopping factory farming, and then there wouldn't be any factory farmed animals. Though, I suppose, one could raise animals in ideal welfare conditions and then painlessly kill them for food. I would be fine with that.
0 · Said Achmiz · 11y
There's something strange with your terms there... are you using "factory farmed" as a descriptor of... kinds (species, etc.) of animals? Or animals that happen to exist in conditions of factory farming? I am confused.
2 · Peter Wildeford · 11y
Factory farmed animals are animals that happen to exist in conditions of factory farming. And "factory farming" is meant to convey not just mass production, but also the present quality of farming with regard to animal welfare.
-3 · Douglas_Knight · 11y

Do you see a difference between factory farming and other farming? This comment seems to say that you don't. The original post, by bothering to mention factory farming, asserts that you do. But the rest of the post does not seem to reflect any conclusions drawn from such a belief. If you are a consequentialist, not a deontologist, and if non-factory animals suffer less than factory animals, you should take that into account, even if you believe that their lives are net negatives. But I think you should introspect about whether you really are a consequentialist.
3 · Peter Wildeford · 11y

Sort of. Different farms treat animals differently, and there are certainly some farms that treat animals well. But they're all small, local farms and not a source of the majority of the meat. Perhaps you're suggesting that instead of pro-vegetarianism advocacy, we do pro-"farms that treat animals well" advocacy. The problem is, I suspect, it would take an awful, awful lot of money to first scale a farm large enough to get meat to everyone while still treating all the animals well.

Can you explain how it's not currently being taken into account and what effect you think it would have on the calculation? And why it might indicate some sort of hidden deontology on my part?
3 · Douglas_Knight · 11y

You seem driven by thresholds, like a good life and especially a good death, and you do not seem interested in replacing a life of high suffering with a life of low suffering, just because the life of low suffering is still a net negative. Such thresholds tend to be characteristic of deontologists. In particular, I observed this on the thread about fish. Here I asked you about replacing worse farms with better but still bad farms, and your response was that truly good farms are too expensive, ignoring the possibility of farms that are full of suffering, just lower levels of suffering. Maybe it is implausible to change how farming is done (though I think you are mistaken about the diversity of practices), but getting people to switch from pork to beef or from chicken to fish seems quite plausible to me.
1 · Peter Wildeford · 11y

What makes me look like I'm interested in thresholds? Replacing a life of high suffering with a life of low suffering is good. Replacing that same life of high suffering with a life of no suffering is even better.

I don't understand how I ignored your point. Could you re-explain?

I've strongly considered convincing people to shift away from chicken, eggs, and fish to other forms of meat, given arguments around suffering per kg of meat demanded. This is also why I'm personally a vegetarian and not a vegan.
8 · Kaj_Sotala · 11y
In principle, it might be better to support companies making ethical meat than to entirely boycott meat. In practice, companies lie about their practices all the time, and things that are marketed as something often turn out to be something else entirely. At least for me personally, becoming certain enough about the ethicalness of a meat product that I'd feel confident about buying it would require far more time and energy than just achieving the certainty by avoiding meat overall.
8 · Watercressed · 11y

It's not really fair to call a range of $0.02 to $65.92 four-digit precision just because the upper bound was written with four digits.

I have no argument with your desire to establish the most cost-effective way to get the most bang for your buck. I simply do not accept the premise that it is wrong to eat meat.

Consider the life of a steer in Cape York. It is born the property of a grazier. It is given health care of a sort (dips, jabs, anti-tick treatment). It lives a free life grazing for a few hundred days in fenced enclosures protected by the grazier's guns from predators. Towards the end, it is mustered by jackaroos and jillaroos, shipped in a truck to the lush volcanic grasslands o...

4 · Said Achmiz · 11y
Without engaging with any of your other points, I'd just like to point out that the OP considers the good outcome to be one where farm animals don't exist at all, rather than one where they're free in the wild. (Because if animals don't exist then they can't suffer.)
2 · seanwelsh77 · 11y

Quite so. The OP, I think, is more concerned about factory farming than about more traditional grazing approaches to cattle. But I think if you push a morality too far up against the hill of human desire it will collapse. Many activists overestimate the "care factor". My ability to care is pretty limited. I can't and won't care about 7 billion other humans on this planet except in the thinnest and most meaningless senses (i.e. stated preferences in surveys, which are near worthless), let alone the x billion animals. In terms of revealed preferences (where I put my dollars and power) I favour the near and the dear over the stranger and the genetically unrelated.
1 · Richard_Kennaway · 11y
Ex-ter-min-ate! Ex-ter-min-ate!! EX-TER-MIN-ATE!!! That explains the Daleks. They're failed FAIs that were built to eliminate suffering from the universe.
0 · elharo · 11y
Fanboy mode on: The Daleks are well established as natural, non-human, sentient biological organisms inside armor. Details have varied over the years, but I don't think they've ever qualified as AIs.
0 · wedrifid · 11y

They have always been biological, but they are also typically genetically engineered at a rather fundamental level to produce desired psychological traits. While I would not use "AIs" myself in such circumstances, I see some merit in differentiating between the biological vs. electronic distinction and the natural vs. artificial intelligence distinction.