Thanks for the post. I agree that this is an important question. I do, however, have many disagreements.
1 seems like an irrelevant objection. Animal welfare interventions are marketed as Effective Altruist or utilitarian interventions because of the amount of suffering we can avert by improving conditions in factory farms or reducing the amount of food produced that way. This doesn't imply that other people don't have other reasons to care about animals. The OP's argument is that the specifically utilitarian, aggregativist argument for animal welfare interventions favors wireheading chickens over other interventions pushed more frequently.
Effective altruism does not imply utilitarianism. Utilitarianism (on most definitions) does not imply hedonism. I would guess less than 10% of EAs (or of animal-focused EAs) would consider themselves thoroughgoing hedonists, of the kind that would endorse e.g. injecting a substance that would numb humans to physical pain or amputating human body parts, if this reduced suffering even a little bit. So on the contrary, I think the objection is relevant.
There can be amounts of things other than suffering, though. Caring about the "number of chickens that lead meaningful lives" doesn't mean that one isn't a utilitarian. (For the record, I agree with the OP that the notion of "leading meaningful lives" isn't so important for animals, but I think it's possible to disagree with this and still be advocating an EA intervention.)
Re point 2: I agree that vat-grown meat would eventually be a viable approach, and the least controversial one. I am less optimistic about the timeframe, the taste, the acceptance rate, and the costs.
Re point 1: See my reply to Ozy and others.
Re point 3: While I disagree with your assessment of "major flaws" (talk about being dismissive!), I accept the critique of the tone of the post sounding dismissive. If I were writing a formal report or a presentation at an EA event, I would take a lot more time and a lot more care to sound appropriately professional. I will endeavor to spend more time on polishing the presentation next time I write a controversial LW post.
2 objects to the OP using an example from science fiction, and immediately goes on to propose a science fiction intervention.
Cultured meat doesn't seem like a 'science fiction intervention' to me. It's true that it has appeared in several works of science fiction, but it is also being actively developed by several labs and companies, with prototypes having already been made - for more detail on both halves of this sentence, see the Wikipedia page.
Regardless, it seems really weird to me that appearing in a science fiction novel counts as a critique of a new idea. If I think about intellectual work from the past 30 years that has the potential to be the most important work, I think about superintelligence, uploads, nanotech, the Fermi paradox, making humanity interplanetary, and a bunch of other ideas that could have happily been critiqued as 'from science fiction' when first examined.
Could we grow animals that desire to be eaten, or perhaps don't feel pain? I recall reading, many years ago, in Richard Dawkins' The Greatest Show on Earth about wild foxes that became incredibly tame and underwent massive morphological changes in just a few generations of pure artificial selection. I'm not sure what traits you'd select on for animals in factory farms, but it's an interesting idea.
(Edit: In case it's not clear, I'm responding to the top-level comment's initial criticism, not DanielFilan's point. I probably should've just replied directly; it was just after reading Daniel's comment that I thought up my own.)
The following question seems interesting: Of the technological advances that have made a substantial difference to the world since the time when science fiction first emerged as a genre, what fraction (weighted by impact, if you like) appeared in science fiction before they became fact, and how closely did the reality resemble the fiction?
Of course, it's also important to consider the fraction of technologies introduced in science fiction that then came into existence.
Yes. That might actually be a better question -- except that the actually-relevant population is presumably something like "technologies introduced in science fiction that seemed like they might actually be possible in the not-outrageously-far future".
I'm aware that there are labs that claim to have produced prototype lab meat, at enormous expense. This is some evidence that lab meat is feasible at scale, but mass-produced lab meat is still something in the future, not the present, and therefore to some extent inherently speculative.
Cold fusion enjoys similar status, as did until recently the EmDrive. Some such ventures work out; others don't.
I would agree that cultured meat is "to some extent inherently speculative". What I'm reacting to is your assertion that it's science fiction in the same way that the Ameglian Major Cow is science fiction. I think that it is both significantly less speculative than the prospect of making an Ameglian Major Cow, and also not "science fiction" as most people would understand the term.
I agree that there's a difference in degree. But the difference between a more and a less speculative technology is not the bright line that "literally science fiction" implies, and it's important to call out things like that even if there is some other, more valid argument the person could or should have made. I agree that some of the examples, especially the Ameglian Major Cow, were much more speculative than lab-grown meat. On the other hand, administering opioids to factory farmed animals may be substantially less speculative.
First, I agree that administering opioids to farmed animals is less speculative than cheap mass-produced cultured meat, i.e. lab meat, but I don't think that that's relevant to the conversation, since it wasn't what tommsittler was referring to by "literally science fiction".
I think you're saying something like <<Because lab meat doesn't yet exist, it's highly speculative technology, and therefore you shouldn't distinguish it from the Ameglian Major Cow by calling the Ameglian Major Cow "literally science fiction", even though the Ameglian Major Cow is much more speculative than lab meat -- if the Ameglian Major Cow is "science fiction", then so is lab meat, which is why it makes sense to say "2 objects to the OP using an example from science fiction, and immediately goes on to propose a science fiction intervention">>. I'm not sure this is right, so please correct me if it's wrong.
My response is that the degree to which the technologies are speculative is in fact relevant: there exists a prototype for one and not the other, it's easier to see how you would make one than the other, and one seems to be held in higher esteem by domain experts (this last factor maybe isn't crucial, but does seem relevant, especially for those of us, like me, who don't have domain expertise), and these differences make one a relevantly safer bet than the other. These differences are evidenced by the fact that one has mostly been developed in a soft science fiction series, while the other is the subject of active research and development. As such, it makes sense to call the Ameglian Major Cow "literally science fiction" and it does not make sense to call lab meat "science fiction".
I've seen this discussed before by Rob Wiblin and Lewis Bollard on the 80,000 Hours podcast (edit: tomsittler actually beat me to the punch in mentioning this).
Robert Wiblin: Could we take that even further and ultimately make animals that have just amazing lives that are just constantly ecstatic like they’re on heroin or some other drug that makes people feel very good all the time whenever they are in the farm and they say, “Well, the problem has basically been solved because the animals are living great lives”?
Lewis Bollard: Yeah, so I think this is a really interesting ethical question for people about whether that would, in people's minds, solve the problem. I think from a pure utilitarian perspective it would. A lot of people would find that kind of perverse having, for instance, particularly I think if you're talking about animals that might psychologically feel good even in terrible conditions. I think the reason why it's probably going to remain a thought experiment, though, is that it ultimately relies on the chicken genetics companies and the chicken producers to be on board...
I encourage anyone interested to listen to this part of the podcast or read it in the transcript, but it seems clear to me right now that it will be far easier to develop clean meat which is widely adopted than to create wireheaded chickens whose meat is widely adopted.
In particular, I think that implementing these strategies from the OP will be at least as difficult as creating clean meat:
I think that getting these strategies widely adopted is at least as difficult as getting enough welfare improvements widely adopted to make non-wireheaded chicken lives net-positive.
I think that breeding for smaller brains is not worthwhile, because smaller brain size does not guarantee reduced suffering capacity, and getting it widely adopted by chicken breeders is not obviously easier than getting many welfare improvements widely adopted.
I'm not as confident that injecting chickens with opioids would be a bad strategy, but getting this widely adopted by chicken farms is not obviously easier to me than getting many other welfare improvements widely adopted. I would be curious to see the details of the study romeostevensit mentioned, but my intuition is that outrage at that practice would far exceed outrage at current factory farm practices because of "unnaturalness", which would make adoption difficult even if the cost of opioids is low.
IIRC an analysis was done of the cost to administer opioids to livestock at scale, and it winds up at pennies a pound. The only reason we don't is negative consumer perception (appreciable quantities do not wind up in the consumed meat), similar to irradiation vs. additives for food preservation. Animal charities have been reluctant to pursue further research for fear of pushing a narrative that makes it okay / gives people an ethical out, since opioids don't actually eliminate all the suffering, just alleviate some fraction of it. There is similar contention around 'improved' living standards for the livestock.
I think you have failed to address the issue of why these solutions are acceptable for chickens and not for humans. The obvious explanation for why people disagree with you on this point is not that they don't care about animal suffering, any more than people who don't want to amputate the non-essential body parts that might give humans discomfort later in life don't care about human suffering. It is that they think those actions are unethical for animals, just like they are for humans.
This seems like an irrelevant objection, given that the OP is explicitly arguing about a conditional (IF mundane improvements in factory farming are a good intervention point for aggregate welfare reasons, THEN wireheading chickens is an even better intervention on those grounds), not unconditionally favoring the latter policy over the former.
For EA to make any sense at all as a way of organizing to do good, it needs to be able to clearly distinguish the rank-ordering of interventions on their merits under a strictly utilitarian (or other aggregative) analysis with some particular defined outcome, from the question of which interventions have additional sources of support, such as other moral considerations.
It also needs to be possible to have a discussion of whether a position is coherent separately from the question of whether it's the position we in fact hold, if that position is a claimed justification for demanding resources.
It is that they think those actions are unethical for animals, just like they are for humans.
And this is precisely my point. We optimize a human proxy, not actual suffering.
That's not a proxy for suffering; it is caring about more than just suffering. You might oppose making animals' brains smaller because it also reduces their ability to feel pleasure, and you value pleasure in addition to pain. You might oppose amputating non-essential body parts because that reduces the animal's capacity for pleasurable experiences of the sort the species tends to experience. You might oppose breeding animals that enjoy pain because of the predictable injuries and shorter lifespan that would result: physical health and fitness is conventionally included in many definitions of animal welfare. You might also be a deontologist who is opposed to certain interventions as a violation of the animal's rights or dignity.
Not being a negative utilitarian is not a bias.
That's not a proxy for suffering; it is caring about more than just suffering
Yes, I agree with all that! I am not advocating that one approach is right and all the others are wrong. I have no prescriptive intentions about animals. I am advocating being honest with oneself about your preferences. If you proclaim to care about the reduction of animal suffering yet really care about many other metrics just as much, spend time reflecting on what your real values are, instead of doing motte-and-bailey when pressed. (This is a generic "you", not you personally.)
It seems like you are the one doing some kind of motte-and-bailey, given you made a post called "Wirehead your Chickens" arguing for wireheading chickens and having a rather dismissive tone towards the opposing side, and now you're saying the real point was that negative utilitarian rhetoric is too emphasized compared to the moral systems which are actually used by EAs. (By the way, the prominence of negative utilitarian rhetoric is one of My Issues With EA Let Me Show You Them.)
Sorry about the miscommunication. Disengaging, since I do not find focusing on form over substance all that productive. I have accepted your criticism about the tone as valid.
I'm surprised that you're mentioning only non-negative utilitarianism and deontology, rather than the capability utilitarianism you recently signal-boosted, which I think is a more psychologically realistic explanation of people's reactions to the idea of wireheading.
But you make it sound as though these people are objectively “wrong”, as if they're *trying* to actually reduce animal suffering in the absolute but end up working on the human proxy because of a bias. That may be true of some, but surely not all. What ozymandias was, I believe, trying to express, is that some of the people who'd reject your solutions consciously find them ethically unacceptable, not merely recoil from them because they'd *instinctively* be against their being used on humans.
Clearly I have not phrased it well in my post. See my reply to ozy. I am advocating self-honesty about your values, not a particular action.
I think you raised a very important question, and I very much agree that one should be honest with oneself about what one truly cares about.
When it comes to the interventions you proposed, I am not really sure about the practicality. (2) sounds doable, but I'd guess that the side effects of losing the ability to feel strong pain are severe and would lead to self-hurting behaviour and maybe increased fighting among the animals. But if it were possible to find a drug that could be administered to animals to reduce their suffering (maybe just in certain situations) without major side effects, that could in fact be an effective intervention and may be worth looking into, mainly because it maybe wouldn't come with big costs to the corporations doing the farming. It may, however, help to sustain factory farming past the point at which it could otherwise have been abolished, which would probably cause more net suffering.
I don't know how long it would take to breed animals that are radically different from the current ones, and I'm generally a bit more sceptical about whether it's worth pursuing.
In general, the main problem with this way of fighting animal suffering is that most people concerned about animals wouldn't support it, and they probably would also have no problem admitting that they care about more than just reducing suffering. I think that it's probably better to pursue strategies for animal suffering reduction that most people in the movement could get behind.
So I think there could be some value in researching this approach, but I am sceptical overall.
Yeah, most of my suggestions were semi-intentionally outside the Overton window, and the reaction to them is appropriately emotional. A more logical approach from an animal welfare proponent would be something along the lines of "People have researched various non-mainstream ideas before and found them all suboptimal, see this link ..." or "This is an interesting approach that has not been investigated much, I see a number of obvious problems with it, but it's worth investigating further." etc.
On the one hand, "it's probably better to pursue strategies for animal suffering reduction that most people in the movement could get behind" is a very reasonable view. On the other hand, a big part of EA is looking into unconventional ways to do good, and focusing on what's acceptable for the mainstream right off the bat does not match that.
I think a huge problem is that we don't have a good metric for the suffering of arbitrary mammals. If you want to breed chickens that enjoy pain, you need a way to measure enjoyment that doesn't Goodhart in ways that negate the project.
Some things we do know, such as how animals, including humans, feel when administered various types of painkillers. There is no speculation about it. But it is of course more tempting to focus on rejecting harder-to-implement suggestions if the intention is to reject the whole approach.
Most non-rationalists think that whether doing Y to target X is good depends on whether X would prefer Y in a base state where X is unaltered by Y and is aware of the possibility of Y, even if having Y would change X's perception or be completely concealed from it.
If you're going to create animals who want to be eaten (or who enjoy actions that would otherwise cause suffering), you need to assess whether this is good or bad based on whether a base state animal with unaltered desires would want to be eaten or would want to be subject to those actions. If you're going to amputate animals' body parts, you need to consider whether a base state animal with those parts would want them amputated.
The proposals above all fail this standard.
It is not clear that there is any such base state: what would it mean for an animal to "be aware of the possibility" that it could be made to have a smaller brain, have part of its brain removed, or modified so that it enjoys pain? Maybe you have more of a case with amputation and the desire to be eaten, since the animal can at least understand amputation and understand what it means to be eaten (though maybe not what it would mean to not be afraid of being eaten). But "The proposals above all fail this standard" seems to be overgeneralizing.
There are two related but separate ideas. One is that if you want to find out if someone is harmed by X, you need to consider whether they would prefer X in a base state, even if X affects their preferences. Another is that if you want to find out if someone is harmed by X, you need to consider what they would prefer if they knew about and understood X, even if they don't.
Modifying an animal to have a smaller brain falls in the second category; pretty much any being who can understand the concept would consider it harmful to be modified to have a smaller brain, so it should also be considered harmful for beings who don't understand the concept. It may also fall in the first category if you try to argue "their reduced brain capacity will prevent them from knowing what they're missing by having reduced brain capacity". Modifying it so that it enjoys pain falls in the second category for the modification, and the first category for considering whether the pain is harmful.
I guess it just seems to me that it's meaningless to talk about what someone would prefer if they knew about/understood X, given that they are incapable of such knowledge/understanding. You can talk about what a human in similar circumstances would think, but projecting this onto the animal seems like anthropomorphizing to me.
You do have a good point that physiological damage should probably still be considered harmful to an animal even if it doesn't cause pain, since the pre-modified animal can understand the concept of such damage and would prefer to avoid it. However, this just means that giving the animal a painkiller doesn't solve the problem completely, not that it doesn't do something valuable.
The prevalence of irrelevant objections in the comments here seems like substantial evidence that animal welfare is often being advocated for as an EA cause in ways that diverge substantially from advocates' true reasoning.
I think the reasoning of this post, extrapolated, is part of what makes lab-grown meat so appealing: you can't have animal suffering in the quest for animal meat if there is no animal in the middle. I'd wager we're closer to that than to convincing vast swaths of people to stop anthropomorphizing what animals would want for themselves!
I don't have an opinion on this, but I want to voice my approval for bringing up something this controversial without sugarcoating your points. And I want to point out that the title doesn't really reveal the subject, I'd have read it sooner if I had known it was about animal suffering.
I agree with the point that we should be investing more into research on direct reduction of suffering (as a phenomenon that happens in brains), rather than on reducing the proxies for it.
This is true for humans as well as for animals: e.g. investing in discovering direct stimulation/surgery approaches to reducing or even turning off pain (or just the painfulness of pain; see pain asymbolia) might have a greater impact on life satisfaction than its opportunity cost for, say, cancer research.
I am not at all knowledgeable on the subject (and would love to be corrected), but I suspect that ever since lobotomy was declared unethical, no interventions for pain other than chemical ones have been seriously investigated.
I think that even if we believe that plant-based and clean meat, as well as changes in attitudes, can get us to a world free of at least factory farming, it may be worth looking into these strategies as plans for what we might call worst-case scenarios, like if it turns out that clean meat will remain too expensive, plant-based alternatives fail to catch on, and a significant part of the population fails to be convinced by the ethical arguments.
I also think that those ideas may be more important in countries that are only just building factory farms compared to western countries.
If you're interested in this idea, you may want to join the "Reducing pain in farm animals" Facebook group. (It's currently very small.)
If you are an effective altruist who is concerned with farm animal welfare, what is stopping you from working on ways to apply interventions that work, but are not ethical for humans, to reducing actual suffering in animals?
The fact that, as you said yourself, these solutions aren't ethically acceptable.
TL;DR: If you care about farm animal welfare, work on minimizing actual animal suffering, not a human proxy for animal suffering.
Epistemic status: had a chat about this with a couple of local EA enthusiasts who attended EA Global 2018 in San Francisco, and apparently this was not high on the agenda. I have only done a cursory search online about this, and nothing of note came up.
When you read about farm animal welfare, what generally comes up is vegetarianism/veganism, humane treatment of farm animals, and sometimes vat-grown meat. This emphasis is quite understandable emotionally. Cows, pigs, chickens in industrial farms are in visible severe discomfort most of their lives, which are eventually cut short long before the end of their natural lifespan, often in painful and gruesome ways.
An animal welfare activist would ask themselves a question like "what is it like to be a chicken in a chicken farm?" and end up horrified. Their obvious solutions are those outlined above: have fewer farm animals and treat them "humanely." Less conventional approaches that reduce animal suffering get an immediate instinctive pushback, because we would not find them acceptable for ourselves. This is what I call the human proxy for animal suffering. Maybe there is a more standard name for this kind of anthropomorphizing? Anyway, let's list a few obvious approaches:
Many of these are probably way easier and more practical than shaming people into giving up tasty steak. But our morality immediately fights back, at least for most of us: "What do you mean, cut off a baby chicken's legs so it does not have leg pain later? You monster!"
Because most people do not truly care about reducing animal suffering; they care about a different metric altogether, a visible human proxy for animal suffering that they find immediately relatable. And so it appears that there is virtually no research or funding into real suffering reduction, even though we know these approaches will work, because they already work on humans. Drug addicts are quite happy while under the influence. An epidural works wonders for temporary pain removal, and so does a spinal cord injury in many cases. The list of proven but not ethically acceptable ways to reduce suffering in humans is pretty long.
If you are an effective altruist who is concerned with farm animal welfare, what is stopping you from working on ways to apply interventions that work, but are not ethical for humans, to reducing actual suffering in animals?