Comment author: CAE_Jones 16 June 2013 12:39:37PM 10 points [-]

There seems to be, based just on my non-rigorous observations, significant overlap between the Vegan/Vegetarian communities and the "Genetically Modified Foods and big Pharma will turn your babies into money-forging cancer" theorists. Obviously not all Vegans are "chemicals=bad because nature" conspiracy theorists, and not all such conspiracy theorists are vegan, but the overlap seems significant. That vocal overlap group strikes me as likely to oppose lab-grown meat because it's unnatural, and then the conspiracy theories will begin. And the animal rights groups probably don't want to divide up their base any further.

(This comment felt harsh to me as I was writing it, even after I cut out other bits. The feeling I'm getting is very similar to political indignation. If this looks mind-killed to anyone else, please correct me.)

Comment author: freeze 16 October 2015 08:56:49PM 0 points [-]

Do you know of any sources for this? In my also non-rigorous experience this is a misperception of veg*nism that people seem to have, founded on nothing but a few quack websites and anti-science blogs.

Consider for instance /r/vegan over at reddit, which is in fact overwhelmingly pro-GMO and ethics rather than health focused. Of course, it is certainly true that the demographics of reddit or that subreddit are much different from that of veg*ns as a whole (or people as a whole). Lesswrong is an even more extreme case of such a limited demographic.

Comment author: Jiro 07 September 2015 10:18:18PM *  1 point [-]

I would prefer to live in a world where an AI thinks beings that do suffer but aren't necessarily sufficiently smart matter in general. I would also rather the people designing said AIs agree with this.

But the original argument is that we shouldn't eat animals because AIs would treat us like we treat animals. That argument implies an AI whose ethical system can't be specified or controlled in detail, so we have to worry how the AI would treat us.

If you have enough control over the ethics used by the AI that you can design the AI to care about suffering, then this argument doesn't show a real problem--if you could program the AI to care about suffering, surely you could just program it to directly care about humans. Then we could eat as many animals as we want and the AI still wouldn't use that as a basis to mistreat us.

Comment author: freeze 16 October 2015 04:41:07PM 0 points [-]

Yes, I guess I was operating under the assumption that we would not be able to constrain the ethics of a sufficiently advanced AI at all by simple programming methods.

Though I've spent an extraordinarily large amount of time lurking on this and similar sites, upon reflection I'm probably not the best-positioned person to carry out a debate about the hypothetical values of an AI as depending on ours. And indeed this would not be my primary justification for avoiding nonhuman suffering. I still think its avoidance is an incredibly important and effective meme to propagate culturally.

Comment author: Jiro 03 September 2015 08:40:03PM *  1 point [-]

The question is really "why does the AI have that exact limit". Phrased in terms of classes, it's "why does the AI have that specific class"; having another class that includes it doesn't count, since it doesn't have the same limit.

Comment author: freeze 06 September 2015 02:58:01PM -1 points [-]

After significant reflection what I'm trying to say is that I think it is obvious that non-human animals experience suffering and that this suffering carries moral weight (we would call most modern conditions torture and other related words if the methods were applied to humans).

Furthermore, there are a lot of edge cases of humanity where people can't learn mathematics or otherwise are substantially less smart than non-human animals (the young, if future potential doesn't matter that much; or the very old, mentally disabled, people in comas, etc.). I would prefer to live in a world where an AI thinks beings that do suffer but aren't necessarily sufficiently smart matter in general. I would also rather the people designing said AIs agree with this.

Comment author: Jiro 03 September 2015 07:32:55PM *  1 point [-]

That's the kind of thing I was objecting to. "'Other animals' are capable of feeling pain" is an independent argument for vegetarianism. Adding the AI to the argument doesn't really get you anything, since the AI shouldn't care about it unless it was useful as an argument for vegetarianism without the AI.

It's also still a gerrymandered reference class. "The AI cares about how we treat other beings that feel pain" is just as arbitrary as "the AI cares about how we treat 'other animals'"--by explaining the latter in terms of the former, you're just explaining one arbitrary category by pointing out that it fits into another arbitrary category. Why doesn't the AI care about how we treat all beings who can do mathematics (or are capable of being taught mathematics), or how we treat all beings at least as smart as ourselves, or how we treat all beings that are at least 1/3 the intelligence of ourselves, or even how we treat all mammals or all machines or all lesser AIs?

Comment author: freeze 03 September 2015 08:15:12PM -2 points [-]

Perhaps it should. Being vegan covers all these bases except machines/AIs, which arguably (including by me) also ought to hold some non-negligible moral weight.

Comment author: Lumifer 03 September 2015 06:49:09PM 0 points [-]

Jainism has a remarkably wide concept of creatures not to be harmed (e.g. specifically including insects). I don't see why you are so focused on the diet.

Comment author: freeze 03 September 2015 08:12:45PM -1 points [-]

Vegans as a general category don't unnecessarily harm insects, and certainly don't eat them, either. I'm not just focused on the diet, actually.

Come to think of it, what are we even arguing about at this point? I didn't understand your emoticon there and got thrown off by it.

Comment author: Jiro 03 September 2015 04:07:11PM 1 point [-]

"Other animals" is a gerrymandered reference class. Why would the AI specifically care about how we treat "other animals", as opposed to "other biological entities", "other multicellular beings", or "other beings who can do mathematics"?

Comment author: freeze 03 September 2015 05:29:11PM -1 points [-]

Because other animals are also sentient beings capable of feeling pain. Other multicellular beings aren't in general.

Comment author: Lumifer 03 September 2015 03:53:33PM 0 points [-]

Go start recruiting Jains as AI researchers... X-/

Comment author: freeze 03 September 2015 05:28:28PM -1 points [-]

I don't see why. Jainism is far from the only philosophy associated with veganism.

Comment author: Viliam_Bur 12 June 2013 07:50:59PM 10 points [-]

You could also reduce meat consumption by advertising good vegetarian meal recipes.

(Generally, the idea is that you can reduce eating meat even without explicitly promoting not eating meat.)

Comment author: freeze 03 September 2015 04:30:17PM 0 points [-]

Perhaps, but consider the radical flank effect: https://en.wikipedia.org/wiki/Radical_flank_effect

Encouraging the desired end goal, the total cessation of meat consumption, may be more effective than just encouraging reduction even in the short to moderate run (certainly the long run) by moving the middle.

Comment author: SaidAchmiz 13 June 2013 02:26:06PM 1 point [-]

What were the moral arguments for vegetarianism that you found utterly unconvincing? Where did you hear or read these?

The ones that say we should care about what happens to animals and what animals experience, including arguments from suffering. I've heard them in lots of places; the OP has himself posted an example — his own essay "Why Eat Less Meat?"

Are you interested in reducing the suffering of humans?

Yeah.

If so, why?

I think if you unpacked this aspect of my values, you'd find something like "sapient / self-aware beings matter" or "conscious minds that are able to think and reason matter". That's more or less how I think about it, though converting that into something rigorous is nontrivial. "Matter" here is used in a broad sense; I care about sapient beings, think that their suffering is wrong, and also consider such beings the appropriate reference class for "veil of ignorance" type arguments, which I find relevant and at least partly convincing.

My caring about reducing human suffering has limits (in more than one dimension). It is not necessarily my highest value, and interacts with my other values in various ways, although I mostly use consequentialism in my moral reasoning and so those interactions are reasonably straightforward for the most part.

Comment author: freeze 03 September 2015 03:54:25PM 0 points [-]

Do you think that animals can suffer?

If not, what evolutionary difference do you think accounts for humans having the ability to experience consciousness at all while other animals with largely similar central nervous systems and brains lack it?

Comment author: Jiro 14 June 2013 07:16:41PM 2 points [-]

That argument would seem to apply to plants or even to non-intelligent machines as well as to animals, unless you include a missing premise stating that AI/human interaction is similar to human/animal interaction in a way that 1) human/plant or human/washing machine interaction is not, and 2) is relevant. Any such missing premise would basically be an entire argument for vegetarianism already--the "in comparison to AIs" part of the argument is an insubstantial gloss on it.

Furthermore, why would you expect what we do to constrain what AIs do anyway? I'd sooner expect that AIs would do things to us based on their own reasons regardless of what we do to other targets.

Comment author: freeze 03 September 2015 03:49:47PM -1 points [-]

Perhaps this is true if the AI is supremely intelligent, but if the AI is only an order of magnitude more intelligent than us, or better by some other metric, the way we treat animals could be significant.

More relevantly, if an AI is learning anything at all about morality from us or from the people programming it, I think it is extremely wise that the relevant individuals involved be vegan for these reasons (better safe than sorry). Essentially I argue that there is a very significant chance that the way we treat other animals could be relevant to how an AI treats us (better treatment corresponding to better later outcomes for us).
