All of freeze's Comments + Replies

freeze

Do you know of any sources for this? In my (also non-rigorous) experience, this is an unfounded misperception of veg*nism that people seem to have, based on nothing but a few quack websites and anti-science blogs.

Consider for instance /r/vegan over at reddit, which is in fact overwhelmingly pro-GMO and ethics-focused rather than health-focused. Of course, the demographics of reddit, or of that subreddit, differ considerably from those of veg*ns as a whole (or people as a whole). LessWrong is an even more extreme case of such a limited demographic.

freeze

Yes, I guess I was operating under the assumption that we would not be able to constrain the ethics of a sufficiently advanced AI at all by simple programming methods.

Though I've spent an extraordinarily large amount of time lurking on this and similar sites, upon reflection I'm probably not the best-placed person to carry out a debate about how the hypothetical values of an AI might depend on ours. And indeed this would not be my primary justification for avoiding nonhuman suffering. I still think its avoidance is an incredibly important and effective meme to propagate culturally.

freeze

After significant reflection, what I'm trying to say is that I think it is obvious that non-human animals experience suffering and that this suffering carries moral weight (we would use words like 'torture' to describe most modern conditions if the same methods were applied to humans).

Furthermore, there are a lot of human edge cases where people can't learn mathematics or are otherwise substantially less intelligent than some non-human animals (the young, if future potential doesn't matter that much; the very old; the mentally disabled; people in comas; etc.). I would... (read more)

Jiro
But the original argument is that we shouldn't eat animals because AIs would treat us like we treat animals. That argument implies an AI whose ethical system can't be specified or controlled in detail, so we have to worry how the AI would treat us. If you have enough control over the ethics used by the AI that you can design the AI to care about suffering, then this argument doesn't show a real problem--if you could program the AI to care about suffering, surely you could just program it to directly care about humans. Then we could eat as many animals as we want and the AI still wouldn't use that as a basis to mistreat us.
freeze

Perhaps it should. Being vegan covers all these bases except machines/AIs, which arguably (I would argue so myself) also ought to carry some non-negligible moral weight.

Jiro
The question is really "why does the AI have that exact limit". Phrased in terms of classes, it's "why does the AI have that specific class"; having another class that includes it doesn't count, since it doesn't have the same limit.
freeze

Vegans as a general category don't unnecessarily harm insects, and certainly don't eat them, either. I'm not just focused on the diet actually.

Come to think of it, what are we even arguing about at this point? I didn't understand your emoticon there and got thrown off by it.

Lumifer
I'm yet to meet a first-world vegan who would look benevolently at a mosquito sucking blood out of her. I don't think we're arguing at all. That, of course, doesn't mean that we agree. The emoticon hinted that I wasn't entirely serious.
freeze

Because other animals are also sentient beings capable of feeling pain. Other multicellular beings aren't in general.

Jiro
That's the kind of thing I was objecting to. "'Other animals' are capable of feeling pain" is an independent argument for vegetarianism. Adding the AI to the argument doesn't really get you anything, since the AI shouldn't care about it unless it was useful as an argument for vegetarianism without the AI. It's also still a gerrymandered reference class. "The AI cares about how we treat other beings that feel pain" is just as arbitrary as "the AI cares about how we treat 'other animals'"--by explaining the latter in terms of the former, you're just explaining one arbitrary category by pointing out that it fits into another arbitrary category. Why doesn't the AI care about how we treat all beings who can do mathematics (or are capable of being taught mathematics), or how we treat all beings at least as smart as ourselves, or how we treat all beings that are at least 1/3 the intelligence of ourselves, or even how we treat all mammals or all machines or all lesser AIs?
freeze

I don't see why. Jainism is far from the only philosophy associated with veganism.

Lumifer
Jainism has a remarkably wide concept of creatures not to be harmed (e.g. specifically including insects). I don't see why you are so focused on the diet.
freeze

Perhaps, but consider the radical flank effect: https://en.wikipedia.org/wiki/Radical_flank_effect

Encouraging the desired end goal, the total cessation of meat consumption, may be more effective than merely encouraging reduction, even in the short to medium run (and certainly in the long run), by moving the middle.

freeze

Do you think that animals can suffer?

Or, what evolutionary difference do you think accounts for a difference in the capacity to experience consciousness at all between humans and other animals with largely similar central nervous systems/brains?

freeze

Perhaps this is true if the AI is supremely intelligent, but if the AI is only an order of magnitude more intelligent than us, or better by some other metric, the way we treat animals could be significant.

More relevantly, if an AI is learning anything at all about morality from us or from the people programming it, I think it is extremely wise, for these reasons, that the relevant individuals be vegan (better safe than sorry). Essentially, I argue that there is a very significant chance that the way we treat other animals will be relevant to how an AI treats us (better treatment corresponding to better outcomes for us later).

Jiro
"Other animals" is a gerrymandered reference class. Why would the AI specifically care about how we treat "other animals", as opposed to "other biological entities", "other multicellular beings", or "other beings who can do mathematics"?
Lumifer
Go start recruiting Jains as AI researchers... X-/
freeze

There are already meat alternatives (seitan, tempeh, tofu, soy, etc.) which provide a meat-like flavor and texture. It's not immediately obvious that in-vitro meat is necessarily more effective than just promoting or refining existing alternatives.

I suppose for long-run impact this kind of research may be orders of magnitude more useful though.

freeze

Not necessarily. https://xkcd.com/1338/

If you assume that suffering is roughly proportional to the number of neurons, then you should care disproportionately about mammal suffering, or even about large animals in general; most animals are wild, but most of those are insects, which don't necessarily experience as much suffering each.
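
To make that assumption concrete, here is a minimal back-of-the-envelope sketch in Python. The neuron counts are rough placeholder figures chosen only for illustration (not sourced data), and the linear weighting is itself an assumption rather than an established fact.

```python
# Toy illustration of the "suffering scales with neuron count" idea.
# The neuron counts are rough placeholders (order of magnitude only),
# not sourced figures; the linear weighting is itself an assumption.

NEURONS = {
    "fruit fly": 1e5,
    "chicken":   2e8,
    "cow":       3e9,
    "human":     9e10,
}

def relative_weight(species, baseline="fruit fly"):
    """Per-individual weight relative to the baseline species,
    assuming moral weight is linear in neuron count."""
    return NEURONS[species] / NEURONS[baseline]

for species in NEURONS:
    print(f"{species:>10}: ~{relative_weight(species):,.0f}x a fruit fly")
```

Under these placeholder numbers a single cow would count for tens of thousands of insects, which is why sheer insect numbers don't automatically dominate the calculation under this assumption.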

freeze

You seem to allude to the fact that it really isn't that easy. In fact, if it is truly an AGI then by definition we can't just box in its values in that way/make one arbitrary change to its values.

Instead, I would say if you don't want an AI to treat us like we treat cows, then just stop eating cow flesh/bodily fluids. This seems a more robust strategy to shape the values of an AI we create, and furthermore it prevents an enormous amount of suffering and improves our own health.

freeze

I don't think your example is a conclusion you would come to if you weren't already concerned about property rights.