I'm currently unconvinced either way on this matter. However, enough arguments have been raised that I think it's worth every reader's time to think carefully about it.
http://nothingismere.com/2014/11/12/inhuman-altruism-inferential-gap-or-motivational-gap/
I imagine the line of reasoning you want me to use to be something like this:
"Well, the probability of cow sentience is bounded by 20%, so you shouldn't eat cows."
"How do you get to that conclusion? After all, it's not certain. In fact, it's less certain than not. The most probable result, at 80%, is that no damage is done to cows whatsoever."
"Well, you should calculate the expectation. 20% large effect + 80% no effect is still enough of a bad effect to care about."
"But I'm never going to get that expectation. I'm either going to get the full effect or nothing at all."
"If you eat meat many times, the damage done will add up. Although you could be lucky if you only do it once and cause no damage, if you do it many times you're almost certain to cause damage. And the average amount of damage done will be equal to that expectation multiplied by the number of trials."
If part of the uncertainty is over a fixed fact (e.g. whether cows are sentient at all, rather than a fresh coin flip each meal), that last step doesn't really work: the trials are correlated, so many trials combined are still all-or-nothing rather than averaging out.
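A toy calculation may make the contrast concrete (the numbers here are the hypothetical 20%/80% figures from the dialogue above, not anything established): under independent per-meal uncertainty the expectation and the typical outcome agree, but under correlated uncertainty the expectation is the same while the actual outcome never resembles it.

```python
# Toy sketch (made-up numbers): expected harm is identical under
# independent vs correlated uncertainty, but the distributions differ.
p = 0.2      # probability assigned to cow sentience (hypothetical figure)
harm = 1.0   # harm per meal, if cows are sentient
n = 100      # number of meals

# Independent case: each meal is a separate 20% coin flip.
# Total harm concentrates near its mean as n grows.
mean_indep = n * p * harm                      # 20.0
var_indep = n * p * (1 - p) * harm ** 2        # 16.0 -- narrow spread

# Correlated case: one flip (is the cow sentient?) decides all meals.
# Total harm is 0 with probability 0.8, or n * harm with probability 0.2.
mean_corr = p * (n * harm)                     # also 20.0
var_corr = p * (1 - p) * (n * harm) ** 2       # 1600.0 -- all or nothing

print(mean_indep, var_indep)   # 20.0 16.0
print(mean_corr, var_corr)     # 20.0 1600.0
```

The means match, so the expectation argument is unaffected; but in the correlated case no amount of repetition averages the outcome toward 20, which is the point above.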
I wouldn't say the last step that you attribute to me. Firstly, if I were going to talk about the long run, I would say that in the long run, you should maximise expected utility because you'll probably get a lot of utility that way. That being said, I don't want to talk about the long run at all, because we don't make decisions for the long run. For instance, you could decide to have a bacon omelette for dinner today and then stay veg*n for the rest of your life, and the argument that you attribute to me wouldn't work in that case, although I would urge y...