3 min read · 19th Sep 2015 · 6 comments


In this community there is a respected technique of "shutting up and multiplying". However, using it in many realistic ethical dilemmas can be difficult. Imagine a situation: there is a company, and each of its employees gains utility from pressing buttons. Each employee has a single-use button that, when pressed, gives that employee one hundred units of utility while every other employee loses one unit. They can't communicate about the buttons, and there are no other effects. Is it ethical to press the button?

This is an extremely simple situation. Utilitarianism, in any variant, would easily say that pressing the button is ethical if there are fewer than one hundred and one employees and unethical if there are more. I believe (the proponents of other ethical theories may correct me if I'm wrong) that both virtue ethics (a person demonstrates a vice by pressing the button) and deontology (it's a kind of stealing, and stealing is wrong), as they're usually used (and not as a utilitarianism substitute), would say it's wrong to be the first one to press the button, and so, if there are, say, eleven employees, each of them forgoes a net gain of ninety utils.
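For concreteness, here is a minimal sketch of that arithmetic in Python (the function and variable names are mine, not part of the original thought experiment):

```python
def press_effects(n_employees, n_pressers):
    """Net utility per presser and per non-presser, assuming each press
    gives the presser 100 utils and costs every other employee 1 util."""
    presser = 100 - (n_pressers - 1)   # loses 1 util per press by each other presser
    non_presser = -n_pressers          # only suffers the losses
    return presser, non_presser

# A single press adds 100 - (n - 1) to total utility: positive below
# 101 employees, negative above.
for n in (11, 101, 150):
    print(f"{n} employees: one press changes total utility by {100 - (n - 1):+d}")

# If all eleven employees press, each nets 100 - 10 = 90 utils.
print(press_effects(11, 11))  # -> (90, -11)
```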

But the only reason this situation is so simple under utilitarianism is that we have direct access to the employees' utility functions. Usually, though, that's not the case. If we want to make a decision on a common question such as "is it ethical to throw a piece of trash on the road, or is it better to carry it to the trash bin?" or "is it okay to smoke in a room with other people inside?", we have to weigh the utility we gain against the utility lost by everyone affected. We can also use quick rules of thumb, which would say "no" in both situations. But if there is no applicable rule, or two conflicting rules, or we don't trust the one we have, then it would be useful to have a method more reliable than our Fermi estimations of utility or even money.

I believe there is such a method, and as you have probably already figured out, it's the question "what would happen if everyone did something like this?". It's most often used in the context of deontology, but for a utilitarian it makes the shared costs felt.

What am I talking about? Imagine we have to decide whether to throw a piece of trash on the road. To calculate directly, we take the number N of people who will travel this road, estimate the average utility loss R each of them suffers from the irritation of seeing a piece of trash, and multiply them. We then have to compare this NR to the loss X of carrying the trash to the bin. Is it difficult to get the sign of NR − X right? I'd guess it is. Now let's imagine instead that every traveller has thrown away a piece of trash. Suppose your loss of utility is the same for each piece of trash you see, and your irritation is about average for the travellers here. How much utility are you going to lose? The same NR. But now imagining this loss and comparing it to the cost of hauling your own trash to the bin is much easier, and I believe even more accurate.
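A toy sketch of the two framings side by side (all the numbers below are invented placeholders, not estimates; only N, R and X come from the text above):

```python
# Invented Fermi numbers, in arbitrary "utils".
N = 500    # travellers who will pass this spot (assumed)
R = 0.02   # average irritation per traveller per piece of trash (assumed)
X = 3.0    # personal cost of carrying the trash to the bin (assumed)

# Direct utilitarian sum: total harm of my one piece of trash.
total_harm = N * R

# Generalization framing: if all N travellers littered once and my
# irritation is about average, I personally lose R per piece, N times.
my_harm_if_everyone_littered = N * R

assert total_harm == my_harm_if_everyone_littered  # same quantity, reframed
print(f"NR = {total_harm:.1f} utils vs carrying cost X = {X:.1f} utils")
```

The point is not the arithmetic, which is identical either way, but that the second framing asks System 1 to feel a loss it can actually picture.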

To use this method correctly, a utilitarian should be careful to avoid a few errors. I'm going to demonstrate them using a "smoking in a crowded room" example.

First of all, we shouldn't do too much worldbuilding. "If everyone here always smoked, they'd have installed a powerful ventilation system, so I'd be okay." That wouldn't sum the utilities in the right way, because the ventilation system doesn't exist. So we should change only the single aspect of behavior in question, and not any reactions to it.

Second, we have to remember that a sum of effects is not always a good substitute for a sum of utilities. That's why we cannot say something like: "If everyone here smoked, we'd die of suffocation, so smoking here is as bad as killing a person." This complements the principle of "don't judge people on the utility of what they do; judge them when judging has a high utility".

I believe the second point may also work in the opposite direction in the trash example. That is, the more trash there is, the less irritation a single additional piece causes. To counter this effect we have to imagine there being more trash than if everyone had littered just once.
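A small sketch of why this correction is needed, assuming (my assumption, not the post's) a concave irritation curve such as a logarithm:

```python
import math

def irritation(pieces):
    """An assumed concave irritation curve: each additional piece of
    trash annoys a bit less than the one before it."""
    return math.log1p(pieces)

N = 500
print(f"Felt irritation from {N} pieces at once: {irritation(N):.1f}")
print(f"{N} times the irritation of one piece:   {N * irritation(1):.1f}")
```

The concave total (about 6 utils here) is far below the summed marginal harms (about 347), so picturing only "everyone littering once" understates the true sum; hence the advice to imagine more trash than that.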

And the third point is that the person doing the calculation is not always similar to the average person affected. "If everyone smoked, I'd be okay; I've got no problem with rooms full of smoke" fails to capture the total utility of the people there, unless they're all smokers, and maybe even then.

This method, used correctly, may be a good addition to the well-known "shut up and multiply", and it is also an example of the good tradition of stealing ideas from competing theories.

(I'm not a native speaker and I don't have much experience writing in English, so I'd be especially grateful for any grammar corrections. I don't know whether the tradition here is to send them via PM or to use a special thread.)

Comments

I had trouble clearly associating your examples with your expositions and with the method for solving them. Could you give a short step-by-step procedure (bullet-point list) for your proposed 'kant multiplication'?

One minor nitpick: in the second paragraph you imply that 11 employees lose utility without mentioning that specific number earlier.

[anonymous]

Well, the obvious objection is that clearly not everybody's going to do what you do, so your hypothetical scenario is often going to be irrelevant. Furthermore, I'd think that "If everyone here always smoked, they'd install a powerful ventilation system, so I'd be okay" is exactly what you should think. Of course, you should factor in the cost of the ventilation system, but the fact that those costs exist isn't any reason to assume that the marginal change in utility you effect by your actions is going to stay constant when multiplied by seven billion.

I've just noticed that I'm confused, and that's because your comments on the second error seem to be saying that you should shut up and sum utilities, which kind of renders your comments on the first (and my reply) obsolete. Oh well.

I'll just point out that if you could measure utilities well enough to actually shut up and multiply, you wouldn't need this kind of heuristic.

Also, this heuristic fails miserably in the face of any kind of conflict. Of course unilateral disarmament works if everybody does it at the same time. While I understand that your heuristic isn't supposed to be used in such cases, you'll find actual situations without underlying conflicts are rather difficult to find.

Finally, your grammar is mostly fine and certainly no significant obstacle to communication.

When you can multiply, you don't need this or any other heuristic; you just do that. This method is a way of adding utilities using System 1 instead of System 2 thinking, so that you don't round the small disutilities to other people down to zero. Often, if some action gives a good utility calculation in an isolated case but doesn't generalize, it may be a bad idea because of the small disutilities it creates. And the technique I'm talking about is mostly useful when it's difficult to put a number on the utilities in question: it amounts to collecting all the losses and gains an action imposes on other people and applying them all to the person doing the calculation. When that's possible, the heuristic works; when it's not, this method usually fails.

You can put numbers on utilities when it's about lives or QALYs, and a lot of important questions are like that. The generalization method, on the other hand, may help when dealing with more... trivial matters: hurt feelings, minor inconveniences and so on. Less important, sure, but still quite common, I believe.

It fails in at least some conflicts, good catch. I'll have to think about when it does and when it doesn't, and maybe update the post.

I don't agree with this line of argument. Suppose there are five employees, and they all press the button. Each receives 100 utils and loses 4, leading to a net gain of 96 each. Why is this not the ethically correct outcome, even for a deontologist?

This is a nice way to look at it, even though it neglects the game theory aspects of Kant.

You lost me at ethical. The proposed situation is simple enough that you should define all the pieces and slap down your Schelling fences instead of trying to paste arbitrary labels on it.