Eliezer_Yudkowsky comments on Effective Altruism Through Advertising Vegetarianism? - LessWrong
Since all of my work output goes to effective altruism, I can't afford any optimization of my meals that isn't about health x productivity. This does sometimes make me feel worried about what happens if the ethical hidden variables turn out unfavorably. Assuming I go on eating one meat meal per day, how much vegetarian advocacy would I have to buy in order to offset all of my annual meat consumption? If it's on the order of $20, I'd pay $30 just to be able to say I'm 50% more ethical than an actual vegetarian.
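For concreteness, here is the arithmetic being proposed, as a toy sketch; the $20/year figure is the comment's own hypothetical, not a measured cost-effectiveness number:

```python
# Toy version of the offset arithmetic above. The $20/year figure is
# the comment's hypothetical, not a measured cost-effectiveness number.
offset_cost_per_year = 20.0   # hypothetical cost to offset a year of daily meat meals
payment = 30.0                # what the commenter offers to pay

multiple = payment / offset_cost_per_year
print(f"offsets {multiple:.1f}x annual consumption")                    # 1.5x
print(f"{100 * (multiple - 1):.0f}% 'more ethical' than a vegetarian")  # 50%
```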
That's not exactly true, since advocating vegetarianism has more effects than simply reducing the consumption of meat. For one thing, it alters how people think about and live their lives. If that $30 of spending produces a certain amount of human suffering (say, from self-induced guilt over eating meat), then your ethicalness isn't as high as calculated.
Eliezer, is that the right way to do the maths? If a high-status opinion-former publicly signals that he's quitting meat because it's ethically indefensible, then others are more likely to follow suit - and the chain reaction continues. For sure, studies purportedly showing longer lifespans, higher IQs, etc., of vegetarians aren't very impressive, because there are too many possible confounding variables. But what such studies surely do illustrate is that any health benefits of meat-eating vs vegetarianism, if they exist, must be exceedingly subtle. Either way, practising friendliness towards cognitively humble lifeforms might not strike AI researchers as an urgent challenge now. But isn't the task of ensuring that precisely such an outcome ensues from a hypothetical Intelligence Explosion right at the heart of MIRI's mission - as I understand it, at any rate?
I think David is right. It is important that people who may have a big influence on the values of the future lead the way by publicly declaring and demonstrating that suffering (and pleasure) are important wherever they occur, whether in humans or mice.
I have to disagree on two points:
1. I don't think that we should take this thesis ("suffering (and pleasure) are important wherever they occur, whether in humans or mice") to be well-established and uncontroversial, even among the transhumanist/singularitarian/lesswrongian crowd.
2. More importantly, I don't think Eliezer or people like him have any obligation to "lead the way", set examples, or be a role model, except insofar as it's necessary for him to display certain positive character traits in order for people to e.g. donate to MIRI, work for MIRI, etc. (For the record, I think Eliezer already does this; he seems, as near as I can tell, to be a pretty decent and honest guy.) It's really not necessary for him to make any public declarations or demonstrations; let's not encourage signaling for signaling's sake.
Needless to say, I think 1 is settled. As for the second point - Eliezer and his colleagues hope to exercise a lot of control over the future. If he is inadvertently promoting bad values to those around him (e.g. it's OK to harm the weak), he is increasing the chance that any influence they have will be directed towards bad outcomes.
That has very little to do with whether Eliezer should make public declarations of things. Are you of the opinion that Eliezer does not share your view on this matter? (I don't know whether he does, personally.) If so, you should be attempting to convince him, I guess. If you think that he already agrees with you, your work is done. Public declarations would only be signaling, having little to do with maximizing good outcomes.
As for the other thing — I should think the fact that we're having some disagreement in the comments on this very post, about whether animal suffering is important, would be evidence that it's not quite as uncontroversial as you imply. I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one. Perhaps you should write one? I'd be interested in reading it.
I think we should be wary of reasoning that takes the form: "There is no good argument for x on Less Wrong, therefore there are likely no good arguments for x."
Certainly we should, but that was not my reasoning. What I said was:
I object to treating an issue as settled and uncontroversial when it's not. And the implication was that if this issue is not settled here, then it's likely to be even less settled elsewhere; after all, we do have a greater proportion of vegetarians here at Less Wrong than in the general population.
"I will act as if this is a settled issue" in such a case is an attempt to take an epistemic shortcut. You're skipping the whole part where you actually, you know, argue for your viewpoint, present reasoning and evidence to support it, etc. I would like to think that we don't resort to such tricks here.
If caring about animal suffering is such a straightforward thing, then please, write a post or two outlining the reasons why. Posters on Less Wrong have convinced us of far weirder things; it's not as if this isn't a receptive audience. (Or, if there are such posts and I've just missed them, link please. Or! If you think there are very good, LW-quality arguments elsewhere, why not write a Main post with a few links, with maybe brief summaries of each?)
SaidAchmiz, you're right. The issue isn't settled: I wish it were so. The Transhumanist Declaration (1998, 2009) of the World Transhumanist Association / Humanity Plus does express a non-anthropocentric commitment to the well-being of all sentience. ["We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise": http://humanityplus.org/philosophy/transhumanist-declaration/] But I wonder what percentage of LessWrongers would support such a far-reaching statement?
I certainly wouldn't, and here's why.
Mentioning "non-human animals" in the same sentence and context along with humans and AIs, and "other intelligences" (implying that non-human animals may be usefully referred to as "intelligences", i.e. that they are similar to humans along the relevant dimensions here, such as intelligence, reasoning capability, etc.) reads like an attempt to smuggle in a claim by means of that implication. Now, I don't impute ignoble intent to the writers of that declaration; they may well consider the question settled, and so do not consider themselves to be making any unsupported claims. But there's clearly a claim hidden in that statement, and I'd like to see it made quite explicit, at least, even if you think it's not worth arguing for.
That is, of course, apart from my belief that animals do not have intrinsic moral value. (To be truthful, I often find myself more annoyed with bad arguments than wrong beliefs or bad deeds.)
Those who have thought most about this issue, namely professional moral philosophers, generally agree (1) that suffering is bad for creatures of any species and (2) that it's wrong for people to consume meat and perhaps other animal products (the two claims that seem to be the primary subjects of dispute in this thread). As an anecdote, Jeff McMahan, a leading ethicist and political philosopher, mentioned at a recent conference that the moral case for vegetarianism was one of the easiest cases to make in all of philosophy (a discipline where peer disagreement is pervasive).
I mention this, not as evidence that the issue is completely settled, but as a reply to your speculation that there is even more disagreement in the relevant community outside Less Wrong.
Frankly, I'm baffled by your insistence that the relevant arguments must be found in the Less Wrong archives. There's plenty of good material out there, which I'm happy to recommend if you're interested in reading what people who have thought about these issues much more than either of us have written on the subject.
Citation needed. :)
It's interesting that you use Jeff McMahan as an example. In his essay The Meat Eaters, McMahan makes some excellent arguments; his replies to the "playing God" and "against Nature" objections, for instance, are excellent examples of clear reasoning and argument, as is his commentary on the sacredness of species. (As an aside, when McMahan started talking about the hypothetical modification or extinction of carnivorous species, I immediately thought of Stanislaw Lem's Return from the Stars, where the human civilization of a century hence has chemically modified all carnivores, including humans, to be nonviolent, evidently having found some way to solve the ecological issues.)
But one thing he doesn't do is make any argument for why we should care about the suffering of animals. The moral case, as such, goes entirely unmade; McMahan only alludes to its obviousness once or twice. If he thinks it's an easy case to make — perhaps he should go ahead and make it! (Maybe he does elsewhere? If so, a quick googling does not turn it up. Links, as always, would be appreciated.) He just takes "animal suffering is bad" as an axiom. Well, fair enough, but if I don't share that axiom, you wouldn't expect me to be convinced by his arguments, yes?
I don't think the relevant community outside Less Wrong is professional moral philosophers. I meant something more like... "intellectuals/educated people/technophiles/etc. in general", and then even more broadly than that, "people in general". However, this is a peripheral issue, so I'm ok with dropping it.
In case it wasn't clear (sorry!), yes, I am interested in reading good material elsewhere (preferably in the form of blog posts or articles rather than entire books or long papers, at least as summaries); if you have some to recommend, I'd appreciate it. I just think that if such very convincing material exists, you (or someone) should post it (links or even better, a topic summary/survey) on Less Wrong, such that we, a community with a high level of discourse, may discuss, debate, and examine it.
"Public declarations would only be signaling, having little to do with maximizing good outcomes."
On the contrary, trying to influence other people in the AI community to share Eliezer's (apparent) concern for the suffering of animals is very important, for the reason given by David.
"I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one."
a) Less Wrong doesn't contain the best content on this topic.
b) Most of the posts disputing whether animal suffering matters are written by un-empathetic non-realists, so we would have to discuss meta-ethics and how to deal with meta-ethical uncertainty to convince them.
c) The reason has been given by Pablo Stafforini - when I directly experience the badness of suffering, I don't only perceive that suffering is bad for me (or bad for someone with blonde hair, etc.), but that suffering would be bad regardless of who experienced it (so long as they did actually have the subjective experience of suffering).
d) Even if there is some uncertainty about whether animal suffering is important, that would still require that it be taken quite seriously; even if there were only a 50% chance that other humans mattered, it would be bad to lock them up in horrible conditions, or to signal through my actions to potentially influential people that doing so is OK. (A toy calculation of this point follows below.)
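A minimal expected-value sketch of point (d), with purely illustrative probabilities and magnitudes (nothing here is a measured quantity):

```python
# Expected disvalue of one harmful act under moral uncertainty.
# All numbers are purely illustrative.
p_matters = 0.5        # credence that the victim's suffering counts morally
harm_if_matters = 1.0  # disvalue of the act if it does count
harm_if_not = 0.0      # disvalue if it doesn't

expected_harm = p_matters * harm_if_matters + (1 - p_matters) * harm_if_not
print(expected_harm)   # 0.5 -- halving the harm doesn't make it negligible
```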
Where is the best content on this topic, in your opinion?
Eh? Unpack this, please.
This is an interesting argument, but it seems a bit truncated. Could you go into more detail?
Vegetarian diets are allegedly healthier, but I don't know whether that's true. I also don't know how much of a productivity drain, if any, a vegetarian diet would be; I've personally noticed no difference.
It depends on what the cost-effectiveness ends up looking like, but $30 sounds fine to me. Additionally or alternatively, you could eat larger animals instead of smaller animals (i.e. more beef and less chicken) so as to do less harm with each meal.
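A rough sketch of the beef-vs-chicken arithmetic; the yield figures below are ballpark assumptions rather than numbers from the thread, and this counts animals per meal rather than suffering-weighted lifetimes:

```python
# Animals consumed per meal, under rough yield assumptions:
# ~200 kg of retail meat per cow, ~1.5 kg per chicken, ~0.25 kg per meal.
meat_per_meal_kg = 0.25
cows_per_meal = meat_per_meal_kg / 200.0      # ~0.00125 cows
chickens_per_meal = meat_per_meal_kg / 1.5    # ~0.17 chickens
print(chickens_per_meal / cows_per_meal)      # ~130x more animals per chicken meal
```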
Wouldn't that $30 come from your work output that is currently going to effective altruism?
Arguably worth it for $30 of reduced guilt, bragging rights and twisted, warped enjoyment of ethical weirdness.
Using the worst estimate, that would mean it's arguable that a 1 in 50 chance of killing a child under 5 is worth that much reduced guilt, bragging rights, and twisted, warped enjoyment of ethical weirdness.
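Presumably the 1-in-50 figure is reverse-engineered from a low-end estimate of the cost to save a young child's life; the ~$1,500 below is inferred from the stated odds, not taken from the thread:

```python
# $30 spent on offsetting rather than donated, against an assumed
# (low-end) ~$1,500 cost to save a child's life.
donation_forgone = 30.0
cost_per_life_saved = 1500.0
print(donation_forgone / cost_per_life_saved)  # 0.02, i.e. a 1-in-50 chance
```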
I'd call you a monster, but I'd totally take actions which fail to prevent the death of an entire kid I'd never meet anyway if I could do so without suffering any risk of being blamed and could get a warped enjoyment of ethical weirdness.
We monsters.
If the ethical hidden variables turn out unfavorably, you have more to make up for than that. HPJEV thinking animals are not sentient has probably lost the world more than one vegetarian-lifetime.
This seems unlikely to be a significant fraction of my impact upon the summum bonum, for good or ill.
I'm actually fairly concerned about the possibility of you influencing the beliefs of AI researchers, in particular.
I'm not sure if it ends up mattering for FAI, if executed as currently outlined. My understanding is that the point is that it'll be able to predict the collective moral values of humanity-over-time (or safely fail to do so), and your particular guesses about ethical-hidden-variables shouldn't matter.
But I can imagine plausible scenarios where various ethical-blind-spots on the part of the FAI team, or people influenced by it, end up mattering a great deal in a pretty terrifying way. (Maybe people in that cluster decide they have a better plan, and leave and do their own thing, where ethical-blind-spots/hidden-variables matter more).
This concern extends beyond vegetarianism and doesn't have a particular recommended course of action beyond "please be careful about your moral reasoning and public discussion thereof", which presumably you're doing already, or trying to.
FAI builders do not need to be saints. No sane strategy would be set up that way. They need to endorse principles of non-jerkness enough to endorse indirect normativity (e.g. CEV). And that's it. Morality is not sneezed into AIs by contact with the builders.
Haven't you considered extrapolating the volition of a single person if CEV for many people looks like it won't work out, or will take significantly longer? Three out of three non-vegetarian LessWrongers (my best model for MIRI employees, present and future, aside from you) I have discussed it with say they care about something besides sentience, like sapience. Because they have believed that that's what they care about for a while, I think it has become their true value, and CEV based on them alone would not act on concern for sentience without sapience. These are people who take MWI and cryonics seriously, probably because you and Robin Hanson do and have argued in favor of them. And you could probably change the opinion of these people, or at least of people on the road to becoming like them, with a few blog posts.
Because in HPMOR you used the word "sentience" (which in sci-fi is typically used to mean sapience) instead of something like "having consciousness", I am worried you are sending people down that path by letting them think HPJEV draws the moral-importance line at sapience - quite apart from my concern that you are showing others that a professional rationalist thinks animals aren't sentient.
I did finally read the 2004 CEV paper recently, and it was fairly reassuring in a number of ways. (The "Jews vs Palestinians cancel each other but Martin Luther King and Gandhi add together" thing sounded... plausible but a little too cutely elegant for me to trust at first glance.)
I guess the question I have is (this is less relevant to the current discussion but I'm pretty curious) - in the event where CEV fails to produce a useful outcome (i.e. values diverge too much), is there a backup plan, that doesn't hinge on someone's judgment? (Is there a backup plan, period?)
Indirect Normativity is more a matter of basic sanity than non-jerky altruism. I could be a total jerk and still realize that I wanted the AI to do moral philosophy for me. Of course, even if I did this, the world would turn out better than anyone could imagine, for everyone. So yeah, I think it really has more to do with being A) sane enough to choose Indirect Normativity, and B) mostly human.
Also, I would regard it as a straight-up mistake for a jerk to extrapolate anything but their own values. (Or a non-jerk for that matter). If they are truly altruistic, the extrapolation should reflect this. If they are not, building altruism or egalitarianism in at a basic level is just dumb (for them, nice for me).
(Of course then there are arguments for being honest and building in altruism at a basic level like your supporters wanted you to. Which then suggests the strategy of building in altruism towards only your supporters, which seems highly prudent if there is any doubt about who we should be extrapolating. And then there is the meta-uncertain argument that you shouldn't do too much clever reasoning outside of adult supervision. And then of course there is the argument that these details have low VOI compared to making the damn thing work at all. At which point I will shut up.)