
DanielFilan comments on Link: Rob Bensinger on Less Wrong and vegetarianism - Less Wrong Discussion

11 Post author: Sysice 13 November 2014 05:09PM


Comments (77)


Comment author: DanielFilan 14 November 2014 10:01:33PM -1 points [-]

I can understand why you shouldn't incentivise someone to possibly torture lots of people by being the sort of person who gives in to Pascal's mugging (in the original formulation). That being said, here you seem to be using Pascal's mugging to refer to doing anything with high expected utility but low probability of success. Why is that irrational?

Comment author: Jiro 15 November 2014 02:28:29AM *  1 point [-]

Actually, I'm using it to refer to something which has high expected utility, low probability of success, and a third criterion: you are uncertain about what the probability really is. A sweepstakes with 100 tickets has a 1% chance of winning. A sweepstakes which has 2 tickets but where you think there's a 98% chance that the person running the sweepstakes is a fraudster also has a 1% chance of winning, but that seems fundamentally different from the first case.
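For concreteness, the two sweepstakes can be written out as a quick sketch (variable names are mine; the numbers come from the comment above):

```python
# Two sweepstakes that both give a 1% chance of winning, but with
# different sources of uncertainty.

# Case 1: 100 tickets, one winner, organizer assumed honest.
p_win_honest_sweepstakes = 1 / 100

# Case 2: 2 tickets, one winner, but a 98% chance the organizer
# is a fraudster who never pays out.
p_organizer_honest = 0.02        # = 1 - 0.98
p_win_given_honest = 1 / 2
p_win_suspect_sweepstakes = p_organizer_honest * p_win_given_honest

print(p_win_honest_sweepstakes)    # 0.01
print(p_win_suspect_sweepstakes)   # 0.01
```

Both computations produce the same 1% figure, which is exactly the point: the headline probability agrees even though the structure of the uncertainty behind it differs.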

Comment author: DanielFilan 15 November 2014 10:56:22AM 1 point [-]

you are uncertain about what the probability really is

I think this is a misunderstanding of the idea of probability. The real world is either one way or another, either we will actually win the sweepstakes or we won't. Probability comes into the picture in our heads, telling us how likely we think a certain outcome is, and how much we weight it when making decisions. As such, I don't think it makes sense to talk about having uncertainty about what a probability really is, except for the case of a lack of introspection.

Also, going back to Robby's post:

We don’t know enough about how cattle cognize, and about what kinds of cognition make things moral patients, to assign a less-than-1-in-20 subjective probability to ‘factory-farmed cattle undergo large quantities of something-morally-equivalent-to-suffering’.

This seems like an important difference from what you're talking about. In this case, the probabilities are bounded below by a not-ridiculously-small number that (Robby claims) is high enough that we should not eat meat. If you grant that your probability does in fact obey such a bound, and that the bound suffices for the case for veg*nism, then I think the result follows, whether or not you call it a Pascal's mugging.

Comment author: Jiro 15 November 2014 07:26:56PM *  0 points [-]

If you don't like the phrase "uncertainty about the probability", think of it as a probability that is made up of particular kinds of multiple components.

The second sweepstakes example has two components, uncertainty about which entry will be picked and uncertainty about whether the manager is honest. The first one only has uncertainty about which entry will be picked. You could split up the first example mathematically (uncertainty about whether your ticket falls in the last two entries and uncertainty about which of the last two entries your ticket is) but the two parts you get are conceptually much closer than in the second example.

In this case, the probabilities are bounded below by a not-ridiculously-small number that (Robby claims) is high enough that we should not eat meat.

Like the possibility that the sweepstakes manager is dishonest, "we don't know enough about how cattle cognize" is all or nothing; if you do multiple trials, the distribution is a lot lumpier. If all cows had exactly 20% of the capacity of humans, then five cows would have 100% in total. If there's a 20% chance that cows have as much capacity as humans and an 80% chance that they have nothing at all, that's still a 20% chance, but five cows would have a lumpy distribution: instead of five cows having a guaranteed 100%, there would be a 20% chance of having 500% and an 80% chance of nothing.

In some sense, each case has a probability bounded by 20% for a single cow. But in the first case, there's no chance of 0%, and in the second case, not only is there a chance of 0%, but the chance of 0% doesn't decrease as you add more cows. The implications of "the probability is bounded by 20%" that you probably want to draw do not follow in the latter case.
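The five-cow arithmetic above can be sketched as a toy model (the "capacity" units and variable names are mine; the numbers follow the comment):

```python
# Two models that both assign "20%" to a single cow. Units are
# fractions of one human's moral capacity.

n_cows = 5

# Model A: every cow has exactly 20% of human capacity.
total_capacity_a = n_cows * 0.20            # always 1.0, i.e. 100%

# Model B: 20% chance every cow has full human capacity,
# 80% chance no cow has any. One fact settles all five cows.
dist_b = {n_cows * 1.0: 0.20,   # 500% with probability 0.2
          0.0: 0.80}            # 0%  with probability 0.8

expected_b = sum(value * prob for value, prob in dist_b.items())

print(total_capacity_a)   # 1.0
print(expected_b)         # 1.0
```

The two models have identical expectations, but Model B never actually produces the in-between value: its outcomes are only the two lumps.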

Comment author: DanielFilan 15 November 2014 09:28:18PM *  0 points [-]

the two parts you get are conceptually much closer than in the second example.

I still don't see why this matters? To put things concretely, if I would be willing to buy the ticket in the first sweepstakes, why wouldn't I be willing to do so in the second? Sure, the uncertainty comes from different sources, but what does this matter for me and how much money I make?

The implications of "the probability is bounded by 20%" that you probably want to draw do not follow in the latter case.

If I understand you correctly, you seem to be drawing a slightly different distinction here from the one I thought you were: that the distinction is between a 100% probability of a cow consciousness that is 20% as intense as human consciousness, as opposed to a 20% probability of a cow consciousness that is 100% as intense as human consciousness (for some definition of intensity). Am I understanding you correctly?

In any case, I still think that the implications that I want to draw do in fact follow. In the latter case, I would think that eating meat has a 20% chance of producing a really horrible effect, and an 80% chance of being mildly convenient for you, so you definitely shouldn't eat meat. Is there something that I am missing?

ETA: Again, to put things more concretely, consider theory X: that whenever 50 loaves of bread are bought, someone creates a human, keeps them in horrible conditions, and then kills them. Your probability for theory X being true is 20%. If you remove bread from your diet, you will have to learn a whole bunch of new recipes, and your diet might be slightly low in carbohydrates. Do you think that it is OK to continue eating bread? If not, your disagreement with the case for veg*nism is a different assessment of the facts, rather than a condemnation of the sort of probabilistic reasoning that is used.

Comment author: Jiro 16 November 2014 02:48:23AM 0 points [-]

I imagine the line of reasoning you want me to use to be something like this:

"Well, the probability of cow sentience is bounded by 20%, so you shouldn't eat cows."

"How do you get to that conclusion? After all, it's not certain. In fact, it's less certain than not. The most probable result, at 80%, is that no damage is done to cows whatsoever."

"Well, you should calculate the expectation. 20% * large effect + 80% * no effect is still enough of a bad effect to care about."

"But I'm never going to get that expectation. I'm either going to get the full effect or nothing at all."

"If you eat meat many times, the damage done will add up. Although you could be lucky if you only do it once and cause no damage, if you do it many times you're almost certain to cause damage. And the average amount of damage done will be equal to that expectation multiplied by the number of trials."

If there's a component of uncertainty over the probability, that last step doesn't really work, since many trials are still all or nothing when combined.
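The point about repeated trials can be simulated with a toy sketch (the harm units, trial count, and function names are all hypothetical):

```python
import random

random.seed(0)

N_MEALS = 1000       # number of trials (meals)
P_SENTIENT = 0.20    # probability assigned to cow sentience
HARM_PER_MEAL = 1.0  # harm per meal, if cows are sentient

def independent_harm():
    """Harm resolved independently each meal: totals concentrate
    around the expectation (about 200) as trials accumulate."""
    return sum(HARM_PER_MEAL for _ in range(N_MEALS)
               if random.random() < P_SENTIENT)

def correlated_harm():
    """One underlying fact (are cows sentient?) settles every meal
    at once: the total is always 0 or 1000, never near 200."""
    return N_MEALS * HARM_PER_MEAL if random.random() < P_SENTIENT else 0.0
```

Both versions have expectation 200, but only the independent one makes you "almost certain to cause damage" over many trials; the correlated one stays all-or-nothing no matter how many meals you add, which is the step being disputed here.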

Comment author: DanielFilan 16 November 2014 03:40:02AM 0 points [-]

I wouldn't say the last step that you attribute to me. Firstly, if I were going to talk about the long run, I would say that in the long run, you should maximise expected utility because you'll probably get a lot of utility that way. That being said, I don't want to talk about the long run at all, because we don't make decisions for the long run. For instance, you could decide to have a bacon omelette for dinner today and then stay veg*n for the rest of your life, and the argument that you attribute to me wouldn't work in that case, although I would urge you to not eat the bacon omelette. (In addition, the line of reasoning that I would actually want you to use would involve attributing >50% probability of cow, chicken, pig, sheep, and fish sentience, but that's beside the point).

Rather, I would make a case like this: when you make a choice under uncertainty, you have a whole bunch of possible outcomes that could happen after the choice is made. Some of these outcomes will be better when you choose one option, and some will be better when you choose another. So, we have to weigh up which outcomes we care about to decide which choice is better. I claim that you should weigh each outcome in proportion to your probability of it occurring, and the difference in utility that the choice makes. Therefore, even if you only assign the "cows are sentient" or "theory X is true" outcomes a probability of 20%, the bad outcomes are so bad that we shouldn't risk them. The fact that you assign probability >50% to no damage happening isn't a sufficient condition to establish "taking the risk is OK".

Comment author: Jiro 16 November 2014 09:19:12AM *  0 points [-]

That being said, I don't want to talk about the long run at all, because we don't make decisions for the long run. For instance, you could decide to have a bacon omelette for dinner today and then stay veg*n for the rest of your life, and the argument that you attribute to me wouldn't work in that case, although I would urge you to not eat the bacon omelette.

The point is that given the way these probabilities add up, not only wouldn't that work for a single bacon omelette, it wouldn't work for a lifetime of bacon omelettes. They're either all harmful or all non-harmful.

Therefore, even if you only assign the "cows are sentient" or "theory X is true" outcomes a probability of 20%, the bad outcomes are so bad that we shouldn't risk them.

Your reasoning doesn't depend on the exact number 20. It just says that the utility of the outcome should be multiplied by its probability. If the probability was 1% or 0.01% you could say exactly the same thing and it would be just as valid. In other words, your reasoning proves too much; it would imply accepting Pascal's Mugging. And I don't accept Pascal's Mugging.

Comment author: DanielFilan 16 November 2014 11:01:43AM *  0 points [-]

The point is that given the way these probabilities add up, not only wouldn't that work for a single bacon omelette, it wouldn't work for a lifetime of bacon omelettes. They're either all harmful or all non-harmful.

I know. Are you implying that we shouldn't maximise expected utility when we're faced with lots of events with dependent probabilities? This seems like an unusual stance.

Your reasoning doesn't depend on the exact number 20... If the probability was 1% or 0.01% you could say exactly the same thing and it would be just as valid.

My reasoning doesn't depend on the exact number 20, but the probability can't be arbitrarily low either. If the probability of cow sentience were only 1/1,000,000,000,000, then the expected utility of being veg*n would be lower than that of eating meat, since you would have to learn new recipes and worry about nutrition, and that would be costly enough to outweigh the very small chance of a very bad outcome.
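The comparison being made here can be written out with toy numbers (all utility figures are hypothetical illustrations; only the structure follows the comment):

```python
# Expected utility of the two diets under a toy model.
HARM_IF_SENTIENT = -1_000_000.0  # badness of lifelong meat-eating, if cows are sentient
COST_OF_VEGN = -10.0             # new recipes, nutrition worries

def eu_eat_meat(p_sentient):
    # harm occurs only in the worlds where cows are sentient
    return p_sentient * HARM_IF_SENTIENT

def eu_go_vegn(p_sentient):
    # the inconvenience cost is paid regardless of cow sentience
    return COST_OF_VEGN

# At p = 1e-12, the expected harm of meat-eating is only about -1e-6,
# which beats the -10 cost of going veg*n; at p = 0.2 the expected
# harm is -200000, and the comparison flips decisively.
```

So the reasoning is not insensitive to the probability: there is a crossover point below which the fixed costs of changing diet dominate.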

In other words, your reasoning proves too much; it would imply accepting Pascal's Mugging. And I don't accept Pascal's Mugging.

Again, this depends on what you mean by Pascal's Mugging. If you mean the original version, then my reasoning does not necessarily imply being mugged, since the mugger can name arbitrarily high numbers of people that they might torture, whereas you can figure out exactly how many non-human animals suffer and die as a result of your dietary choices (if you're an average American, approximately 200, only 30 if you don't eat seafood, and only 1.4 if you also don't eat chicken or eggs, according to this document), and nobody can boost this number in response to you claiming that you have a really small probability of them being sentient.

However, if by Pascal's Mugging you mean "maximising expected utility when the probability of success is small but bounded from below and you have different sources of uncertainty", then yes, you should accept Pascal's Mugging, and I have never seen a convincing argument that you shouldn't. Also, please don't call that Pascal's Mugging, since it is importantly different from its namesake.

Comment author: Jiro 16 November 2014 05:45:32PM *  0 points [-]

Are you implying that we shouldn't maximise expected utility when we're faced with lots of events with dependent probabilities? This seems like an unusual stance.

I would limit this to cases where the dependency involves trusting an agent's judgment (or honesty). I am not very good at figuring such a thing out, and in cases like this, whether I trust the agent has a large impact on the final decision.

the mugger can name arbitrarily high numbers of people that they might torture, whereas you can figure out exactly how many non-human animals suffer and die as a result of your dietary choices

You can name an arbitrary figure for the likelihood that animals suffer, with that figure tailored to be small, yet large enough that multiplying it by the number of animals I eat leads to the conclusion that eating them is bad.

It's true that in this case you are arbitrarily picking the small figure rather than the large figure as in a typical Pascal's Mugging, but it still amounts to picking the right figure to get the right answer.