I've never heard of a carnivore who thought meat eating was morally better.
Katja Grace claimed to me that being a total utilitarian led her to prefer eating meat, since eating animals creates a reason for the animals to exist in the first place, and she imagines they'd prefer to exist for a while, and then be slaughtered, than not exist at all.
I tend to hang out in the average utilitarian camp, so that one didn't move me much. On the other hand:
Oh, you want utilitarian logic? One serving of utilitarian logic coming up: Even in the unlikely chance that some moron did manage to confer sentience on chickens, it's your research that stands the best chance of discovering the fact and doing something about it. If you can complete your work even slightly faster by not messing around with your diet, then, counterintuitive as it may seem, the best thing you can do to save the greatest number of possibly-sentient who-knows-whats is not wasting time on wild guesses about what might be intelligent. It's not like the house elves haven't prepared the food already, regardless of what you take onto your plate.
Harry considered this for a moment. It was a rather seductive line of reasoning -
Good! said Slytherin. I'm glad you see now that the most moral thing to do is to sacrifice the lives of sentient beings for your own convenience, to feed your dreadful appetites, for the sick pleasure of ripping them apart with your teeth -
What? Harry thought indignantly. Which side are you on here?
His inner Slytherin's mental voice was grim. You too will someday embrace the doctrine... that the end justifies the meats. This was followed by some mental snickering.
I'm pretty sure that the maximally healthy diet for me contains meat, that I can be maximally effective in my chosen goals when maximally healthy, and that my likely moral impact on the world makes sacrifices on the order of a cow per year (note that cows are big and hamburgers are small) look like a rounding error.
In this post, I discuss a theoretical strategy for finding a morally optimum world in which -- regardless of my intrinsic moral preferences -- I fold to the preferences of the most moral minority.
I don't personally find X to be intrinsically immoral. I know that if some people knew this about me, they might feel shocked, sad and disgusted. I can understand how they would feel because I feel that Y is immoral and not everyone does, even though they should.
These are unpleasant feelings, and combined with the fear that immoral events will happen more frequently due to apathy, they make me willing to fold X into my category of things that shouldn't happen. Not because of X itself, but because I know it makes people feel bad.
This is more than the gaming strategy of "I'll be anti-X if they'll be anti-Y." This is a reflection that the most moral world is a world in which people's moral preferences are maximally satisfied, so that no one needs to feel that their morality is marginalized and suffer the feelings of disgust and sadness.
Ideal Application: Nested Morality Model
The sentiment and strategy just described are ideal under a nested model of moralities, in which preferences can be roughly universally ranked from most immoral to least immoral: X1, X2, X3, X4, ... . Everyone has a sensitivity threshold beyond which they no longer care. For example, all humans consider the first few elements to be immoral, but only the most morally sensitive humans care about the elements after the first few thousand. In a world where this model was accurate, it would be ideal to fold to the morality of the most morally sensitive. Not only would you be satisfying the morality of everyone, you could be certain that you were also satisfying the morality of your most moral future selves, especially by extending the fold a little further out.
Figure: Hierarchy of Moral Preferences in the Nested Morality Model
Note that in this model it doesn't actually matter if individual humans would rank the preferences differently. Since they're all satisfied, the ordering of preferences doesn't matter. Folding to the most moral minority should solve all moral conflicts that result from varying sensitivity to a moral issue, regardless of differences in relative rankings. For example, by such a strategy I should become a vegetarian (although I'm not).
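The nested model can be made concrete with a minimal sketch. The act names and thresholds below are hypothetical illustrations, not from the post; the point is only that when each person's objections form a prefix of one shared ranking, folding to the most sensitive person automatically satisfies everyone else:

```python
# Nested morality model: acts are ranked from most to least immoral,
# and each person objects to a prefix of that ranking, up to their
# personal sensitivity threshold.

# Hypothetical ranked acts, most immoral first (X1, X2, ...).
ranked_acts = ["X1", "X2", "X3", "X4", "X5"]

# Hypothetical thresholds: person p objects to the first
# thresholds[p] acts in the ranking.
thresholds = {"typical": 2, "sensitive": 4, "most_sensitive": 5}

def objections(threshold):
    """Set of acts a person with this threshold considers immoral."""
    return set(ranked_acts[:threshold])

# Folding to the most morally sensitive person means avoiding the
# largest prefix anyone objects to.
fold = objections(max(thresholds.values()))

# Because the preferences are nested, the most sensitive person's
# objections contain everyone else's, so the fold satisfies everyone.
assert all(objections(t) <= fold for t in thresholds.values())
```

The subset check at the end is exactly why individual rank differences don't matter here: once every objected-to act is avoided, the internal ordering of those acts is irrelevant.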
Real Life Application: Very Limited
However, in reality, moral preferences aren't neatly nested by sensitivity; they conflict. Someone may have a moral preference for Y, while someone else has an equally firm preference for ~Y. Such conflicts are not uncommon and may represent the majority of moral conflicts in the world.
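Why conflicting preferences break the folding strategy can be shown with another toy sketch (the representation of a preference as a required truth value for an act is my own illustrative assumption): when one person requires Y and another requires ~Y, no single world satisfies both, so there is no "most sensitive" position to fold to.

```python
# Conflicting preferences: a preference maps an act to the truth value
# the person requires of the world (True = must happen, False = must not).
person_a = {"Y": False}  # A considers Y immoral: wants a world without Y
person_b = {"Y": True}   # B considers ~Y immoral: wants a world with Y

def satisfiable(*prefs):
    """True iff some single world setting satisfies every preference."""
    combined = {}
    for pref in prefs:
        for act, required in pref.items():
            # setdefault records the first requirement for an act;
            # a later, different requirement means a contradiction.
            if combined.setdefault(act, required) != required:
                return False
    return True

assert satisfiable(person_a)                # either alone is fine
assert satisfiable(person_b)
assert not satisfiable(person_a, person_b)  # together, impossible
```

In the nested model this function would always return True, because everyone's requirements agree wherever they overlap; genuine Y-versus-~Y conflicts are precisely the cases where it returns False.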
Second, even if a person is indifferent to the moral value of Y and ~Y, they may value the freedom or the diversity of having both Y and ~Y in the world.
When it comes to the latter conflicts, I think the world would be a happier place if freedom and diversity suffered a little for very strong (albeit minority) moral preferences. However, freedom and diversity should not suffer much for preferences that are very weak or held by very few people. In such a trade-off, an optimum cannot be found, since I don't expect to be able to place relative weights on 'freedom', 'diversity', and an individual's moral preference in the general case.
For now, I think I will simply resolve to (consider) folding to the moral preference Z of a fellow human in the simplest case where I am apathetic about Z and also indifferent to the freedom and diversity of Z and ~Z.