Most of the usual thought experiments that justify expected utilitarianism trade off fun for fun, or suffering for suffering. Here's a situation that mixes the two. You are offered the chance to press a button that will select a random person (not you) and torture them for a month. In return, the machine will make N people who are not suffering right now have X fun each. The fun will be of the positive variety, not saving any creatures from pain.

1) How large would X and N have to be for you to accept the offer?

2) If you say X or N must be very large, does this prove that you measure torture and fun using in effect different scales, and therefore are a deontologist rather than a utilitarian?

There's something that's always bothered me about these kinds of utilitarian thought experiments.

First of all, I think it's probably better to speak in terms of pain rather than torture. We can intelligently discuss trade-offs like this in terms of "I'm going to punch someone" or "I'm going to break someone's leg; how much fun would it take to compensate for that?" Torture is another thing entirely.

If you have a fun weekend, then you had an enjoyable couple of days. Maybe you gained some stories that you can tell for a month or two to your friends who weren't with you. If it was a very fun weekend, you might have learned something new, like how to water ski, something that you'll use in the future. Overall, this is a substantial positive benefit.

If you torture someone for half an hour, not even an entire weekend, it's going to have a much larger effect on someone's life. A person who is being tortured is screaming in agony, flailing around, begging for the pain to stop. And it doesn't. Victims of torture suffer massive psychological damage that continues long after the act itself. Someone who's tortured for half an hour is going to remember that for the rest of their life. They may have nightmares about it. Almost certainly, their relationships with other people are going to be badly damaged or strained.

I've never been tortured. I've never been a prisoner of war, or someone who was trying to withhold information from a government, military, or criminal organization that wanted it. I have lived a pretty adventurous life, with sports, backpacking, rock climbing, etc. I've had some fairly traumatic injuries. I've been injured when I was alone, and there was nobody within earshot to help me. At those times, I've just lain there on the ground, crying out of pain, and trying to bring myself to focus enough to patch myself up and get back to medical care. Those experiences are some of the worst of my life. I have a hard time trying to access those memories; I can feel my own mind flinching away from them, and despite all of my rationality, I still can't fight some of those flinches. What I experienced wasn't even all that terrible. They were some moderate injuries. Someone who was tortured is going to have negative effects that are ridiculously worse than what I experienced.

I've spent some time trying to figure out exactly what it is about torture that bothers me so much as a utilitarian, and I think I've figured it out, in a mathematical sense. Most utilitarian calculations don't factor in time. It's not something that I've seen people on Less Wrong tend to do. It is pretty obvious, though: giving someone Y amount of pain for 5 minutes is better than giving them Y amount of pain for 10 minutes. We should consider not just how much pain or fun someone is experiencing now, but how much they will experience as time stretches on.

Getting back to the original question, if I could give three or four people a very fun weekend, I'd punch someone. If I could give one person an extremely fun weekend, I'd punch someone. I'd punch them pretty hard. I'd leave a bruise, and make them sore the next day. But if I'm torturing someone for a month, I am causing them almost unimaginable pain for the rest of their life. X and N are going to have to be massive before I even start considering this trade, even from a utilitarian standpoint. I can measure pain and fun on the same scale, but a torture-to-fun conversion is vaguely analogous to comparing light-years to inches.

Getting back to the original question, if I could give three or four people a very fun weekend, I'd punch someone. If I could give one person an extremely fun weekend, I'd punch someone. I'd punch them pretty hard. I'd leave a bruise, and make them sore the next day.

Thanks, this counts as an answer for my purposes.

This is brilliant; I have one nitpicky question:

If the main reason a small amount of torture is much worse than we might naively expect is that even small amounts of torture leave lasting, severe psychological damage, should we expect the disutility of torture to level off after a few (days/months/years)?

In other words, is there much difference between torturing one person for half an hour followed by weeks of moderate pain for that person, and torturing one person for that same number of weeks? The kind of difference that would justify denying, say, hundreds of people a fun weekend where they all learn to waterski?

My intuition is that a long period of torture is worse than death, and I can understand why even a short period of torture would be almost as bad as death, but I'm not sure how to measure how much worse than death a very long period of torture can get.

Incidentally, are there other "primary" moral goods/bads in naturalist metaethics besides fun, pain, torture, and life/death?

If the main reason a small amount of torture is much worse than we might naively expect is that even small amounts of torture leave lasting, severe psychological damage, should we expect the disutility of torture to level off after a few (days/months/years)?

In other words, is there much difference between torturing one person for half an hour followed by weeks of moderate pain for that person, and torturing one person for that same number of weeks? The kind of difference that would justify denying, say, hundreds of people a fun weekend where they all learn to waterski?

I'm not sure what exactly you're getting at with that specific example. I think that, yes, torturing someone for weeks followed by years of psychological pain is significantly worse than torturing someone for half an hour followed by weeks of (probably a bit less severe) psychological pain.

Your general point, however, I think definitely has some merit. Personally, I wouldn't expect to see much psychological difference between an individual who was tortured for five years versus a person who was tortured for ten. I would definitely expect to see a larger difference between someone tortured for six years versus someone tortured for one. Certainly there's a massive difference between 5 years and 0. There probably is some sort of leveling off factor. I don't know exactly where it is, or what that graph would look like, but it probably exists, and that factor definitely could influence a utilitarian calculation.

If we're talking about torture vs death, if we're using preference utilitarianism, we can say that the point where the torture victim starts begging for death is where that dividing line can be drawn. I don't know where that line is, and it's not an experiment I'm inclined to try anytime soon.

I had precisely the same reaction about the persistent effects of torture over time when I read the torture vs. dust specks problem.

I think your reply to the original question highlights a difficulty with actual application of utilitarian thought experiments in concrete situations. The question as originally posed involved inflicting disutility on a random target by pushing a button, presumably with the actor and the target being mutually unaware of each other's identities. When you substitute punching someone, even if the target is randomly chosen, the thought experiment breaks down, both because it becomes difficult to predict the actual amount of suffering that is going to result (e.g. the target has a black belt and/or concealed weapon and no sense of humor = more pain than you were expecting), and because of second order effects: a world in which inflicting random pain is considered justified if it produces a greater amount of fun for other individuals is going to be a world with a lot less social trust, reducing utility for everybody.

But, hang on. Grant that there's some amount of disutility from permanent damage caused by torture. Nevertheless, as you add more specks, at some point you're going to have added more disutility, right? Suppose the torture victim lives for fifty years after you're done with him, and he's an emotional and physical wreck for every day of those fifty years; nevertheless, this is a finite amount of disutility and can be compensated for by inserting a sufficient number of Knuth up-arrows between the numerals. Right?

Prismattic:

in concrete situations

Rolf:

and can be compensated for by inserting a sufficient number of Knuth up-arrows between the numerals

I think for us, "concrete situations" as meant here sets a much lower bar than just "for some natural number N". I think no parent of your comment disputed that we are still dealing with finite amounts of (dis)utility.

Jeremy Bentham actually mentioned this in the initial form of utilitarianism. He said that the "felicific calculus" required one to take into account fecundity, which was a measure of how likely a pleasurable/painful experience was to cause more pleasure/pain in the future, and purity, which measured how likely a pleasurable experience was not to be followed by a painful one, or vice versa.

Getting back to the original question, if I could give three or four people a very fun weekend, I'd punch someone. If I could give one person an extremely fun weekend, I'd punch someone. I'd punch them pretty hard. I'd leave a bruise, and make them sore the next day.

I don't know about you, but if I didn't have at least one bruise and feel sore after a weekend of extreme fun then I'd start to think I was doing it wrong. ;)

Lol. I'm inclined to agree with you there. However, considering that I'm writing this while I lie in bed with my foot propped up, having shattered a few bones during my last "weekend of extreme fun", I'm beginning to reevaluate my priorities. ;)

Ouch! I suppose my intuitions are influenced a little by the fact that I have never broken a bone or had any sort of serious injury. I suspect my priorities may be changing too in your shoes. (Well, possibly shoe, probably not the plural. Too soon? :P)

Okay, now what if you give them a drug that makes them forget the torture? And maybe also keep them drugged-up until they heal. How would that compare?

That is a good point. If you could wipe their memories in such a way that they didn't have any lasting psychological damage, that would make it significantly better. It's still pretty extreme; a month is a long time, and if we're talking about a serious attempt to maximize their pain during that time, there's a lot of pain that we'd have to cancel out. X and N will still need to be very large, but not as large as without the drugs.

Unless I'm missing something, using different scales doesn't actually preclude utilitarian calculations.

There's no obvious natural conversion between fun and pain (hence the question), but any monotonic conversion function that we choose to adopt will give us well-defined values of X and N. We can construct a utilitarianism around any of these possibilities: if X or N are very large or infinite, that just means that we're approaching pure negative utilitarianism, defining utility in terms of minimizing suffering without taking fun into account. Deontological considerations need not apply.
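
To make that concrete, here is a minimal sketch; the conversion function and every number in it are made-up assumptions for illustration, not anything from the post. The only point is that once any monotonic conversion is fixed, X and N stop being mysterious and become ordinary arithmetic.

```python
import math

def fun_equivalent(suffering):
    """Hypothetical monotonic conversion: fun needed to offset `suffering` units of pain."""
    return 50.0 * suffering  # any strictly increasing function would do here

TORTURE_MONTH = 1000.0  # assumed suffering of a month of torture, in arbitrary pain units
X = 5.0                 # assumed fun given to each beneficiary

# Smallest N such that total fun N*X at least offsets the torture, under this conversion.
N = math.ceil(fun_equivalent(TORTURE_MONTH) / X)
print(N)  # 10000 under these made-up numbers
```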

Indeed, of the major utility functions that I'm aware of, this problem only seems to arise within pleasure/pain utilitarianism; as stated, negative utilitarianism doesn't admit of a finite exchange rate for torture (which seems rather shaky in light of preferences like those expressed in other comments), while preference utilitarianism's results are going to vary by individual and case but are always individually well-defined.

(This comment was originally buried more deeply, but on reflection it seems to have more to do with the OP than with the comment I was replying to.)

N = 1, X = -T + epsilon, where T (a negative number) is the amount of antifun that a month of torture is.

But I really don't like that answer.

Some thinking out loud follows:

  • It's clear to me that I don't want to answer this question, for more or less the same reasons that I don't want to explain how much suffering I would be willing to impose upon you in exchange for X amount of fun for me. Which, of course, doesn't necessarily stop me from doing it, as long as I don't have to admit to it.

  • It's relatively clear to me that intuitively I'd give a different answer if I started from a small X and worked my way up by increments asking "Is this enough?" than if I started from a large X and worked my way down by increments asking "Is this too little?" From which I conclude that my intuitions on this subject are not reliable. Which I knew already from earlier conversations about "utilon-trade" scenarios.

  • It's clear to me that, while I have a fairly concrete understanding of torture, I have a very fuzzy understanding of fun. So when I say "X=-T + epsilon", I'm comparing concrete-torture-apples to fuzzily-imagined-fun-oranges, and I once again have no reliable intuitions.

  • If I screen off all of my real-world understanding of torture, so I can do a fuzzily-imagined-apples-to-fuzzily-imagined-apples comparison, it's a simple optimization question: is 1 (T+ epsilon) - 1 T positive? Why, yes it is. Great, do it! But the minute I unscreen that real-world understanding, I'm back to not wanting to answer that question.

  • What I'd prefer to do is unpack "-T amount of fun" into something equally concrete and do the comparison that way, but I don't seem to know how to do that.

2) If you say X or N must be very large, does this prove that you measure torture and fun using in effect different scales, and therefore are a deontologist rather than a utilitarian?

I'm not anything yet. And if I were a utilitarian, I would assign a very large disutility to torture, and only a small positive utility to fun. I don't think it has to do with the system you use...

Wow, this is what sanity looks like. The best answer yet. Thanks.

Reversal test:

One person is about to be tortured for a month, and N people are about to have X fun each, but you can prevent it all by pushing a button. For what values of X and N do you push the button?

Good idea. It doesn't change my own answer, though. YMMV.

This reminds me of the horrible SIAI job interview question: "Would you kill babies if it was intrinsically the right thing to do? If so, how right would it have to be, for how many babies?"

That one's easy - the correct answer is to stop imagining "rightness" as something logically independent from torturing babies, and forget the word "intrinsic" like a bad dream. My question looks a little more tricky to me; it's closer to torture vs. dust specks.

I wasn't saying that they were similar questions, just that one reminded me of the other. (Though I can see why one would think that.)

I'd say the answer to this is pretty simple. Laura ABF (if I remember the handle correctly) suggested of the original Torture vs. Specks dilemma that the avoided specks be replaced with 3^^^3 people having really great sex - which didn't change the answer for me, of course. This provides a similar template for the current problem: Figure out how many victims you would dustspeck in exchange for two beneficiaries having a high-quality sexual encounter, then figure out how many dustspecks correspond to a month of torture, then divide the second number by the first.

Laura ABF (if I remember the handle correctly)

I suspect it was probably LauraABJ.

Figure out how many victims you would dustspeck in exchange for two beneficiaries having a high-quality sexual encounter

I guess you meant "how many sexual encounters would you demand to make a million dustspecks worthwhile". And my emotional response is the same as in the original dilemma: I find it reeeallly icky to trade off other people's pain for other people's pleasure (not a pure negative utilitarian but pretty close), even though I'm willing to suffer pain myself in exchange for relatively small amounts of pleasure. And it gets even harder to trade if the people receiving the pain are unrelated to the people receiving the pleasure. (What if some inhabitants of the multiverse are marked from birth as "lower class", so they always receive the pain whenever someone agrees to such dilemmas?) I'm pretty sure this is a relevant fact about my preferences, not something that must be erased by utilitarianism.

And even in the original torture vs dustspecks dilemma the answer isn't completely obvious to me. The sharpest form of the dilemma is this: your loved one is going to be copied a huge number of times, would you prefer one copy to be tortured for 50 years, or all of them to get a dustspeck in the eye?

I find it reeeallly icky to trade off other people's pain for other people's pleasure

Does it work the same in reverse? How many high-quality sexual encounters would you be willing to interrupt while you are out saving people from dustspecks?

Interrupting sexual encounters isn't the same as preventing them from occurring without anyone knowing. Regardless of what utilitarianism prescribes, the preferences of every human are influenced by the welfare level they have anchored to. If you find a thousand dollars and then lose them, that's unpleasant, not neutral. Keep that in mind, or you'll be applying the reversal test improperly.

My problem mentioned that the people receiving additional pleasure must be currently at average level of pleasure. Having your sex encounter interrupted brings you below average, I think.

Horrible in the sense of being frustrating to have to answer, or horrible in the sense of not being a useful job interview question?

It's exactly the sort of question I hate to be asked when I have nobody to ask for clarification, because I have no idea how I'm supposed to interpret it. Am I supposed to assume the intrinsic rightness has no relation to human values? Is it testing my ability to imagine myself into a hypothetical universe where killing babies actually makes people happier and better off?

It's a popular religious stance that before a certain age, babies have no sin, and get an automatic pass to heaven if they die. I've argued before that if that were the case, one of the most moral things one could do would be killing as many babies as you could get away with. After all, you can only get condemned to hell once, so on net you can get a lot more people into heaven that way than would have gone otherwise.

I can certainly imagine situations where I would choose to kill babies (for example, where I estimate a high probability that this baby continuing to live will result in other people dying), but I assume that doesn't qualify as "intrinsically the right thing to do."

That said, I'm not sure what would.

This kind of "thought experiment" always reminds me of chapter seven in John Varley's Steel Beach, which narrates a utilitarian negotiation between humans and brontosaurs ("represented" by human ecologists sporting animal attributes) mediated by an omniscient computer.

Central Computer (CC): "It is as close to mental telepathy as we're likely to get. The union representatives are tuned into me, and I'm tuned into the dinosaurs. The negotiator poses a question: 'How do you fellows feel about 120 of your number being harvested/murdered this year?' I put the question in terms of predators. A picture of an approaching tyrannosaur. I get a fear response: 'Sorry, we'd rather not, thank you.' I relay it to the unionist, who tells Callie the figure is not acceptable. The unionist proposes another number, in tonight's case, sixty. Callie can't accept that. She'd go broke, there would be no one to feed the stock. I convey this idea to the dinosaurs with feelings of hunger, thirst, sickness. They don't like this either. Callie proposes 110 creatures taken. I show them a smaller tyrannosaur approaching, with some of the herd escaping. They don't respond quite so strongly with the fear and flight reflex, which I translate as 'Well, for the good of the herd, we might see our way clear to losing seventy so the rest can grow fat.' I put the proposal to Callie, who claims the Earthists are bleeding her white, and so on."

"Sounds totally useless to me," I said, with only half my mind on what the CC had been saying. [...]

"I don't see why you should say that. Except that I know your moral stand on the whole issue of animal husbandry, and you have a right to that."

"No, that whole issue aside, I could have told you how this all would come out, given only the opening bid. David proposed sixty, right?"

"After the opening statement about murdering any of these creatures at all, and his formal demand that all--"

"'--creatures should live a life free from the predation of man, the most voracious and merciless predator of all,' yeah, I've heard the speech, and David and Callie both know it's just a formality, like singing the planetary anthem. When they got down to cases, he said sixty. Man, he must really be angry about something, sixty is ridiculous. Anyway, when she heard sixty, Callie bid 120 because she knew she had to slaughter ninety this year to make a reasonable profit, and when David heard that he knew they'd eventually settle on ninety. So tell me this: why bother to consult the dinosaurs? Who cares what they think?"

The CC was silent, and I laughed.

"Tell the truth. You make up the images of meat-eaters and the feelings of starvation. I presume that when the fear of one balances out the fear of the other, when these poor dumb beasts are equally frightened by lousy alternatives--in your judgement, let's remember . . . well, then we have a contract, right? So where would you conjecture that point will be found?"

"Ninety carcasses," the CC said.

"I rest my case."

"You have a point. But I actually do transmit the feelings of the animals to the human representatives. They do feel the fear, and can judge as well as I when a balance is reached."

"Say what you will. Me, I'm convinced the jerk with the horns could have as easily stayed in bed, signed a contract for ninety kills, and saved a lot of effort."

I have a strongly egalitarian utility function: for scenarios with equal net utility Σ (i.e. the sum of normalised personal utilities of individual people), I attach a large penalty to those where the utilities are unevenly distributed (at least in the absence of reward and punishment motivations). So my utility is not only a function of Σ, but rather of the whole distribution. In practice this means that X and N will be pretty high.

Setting a specific value would be difficult, mainly because I use a set of deontological constraints while evaluating my utility: I tend to discard all positive contributions to Σ which are in sharp conflict with my values; for example, pleasure gained by torturing puppies doesn't pass the filter. These deontological constraints are useful because of ethical injunctions.

So even if 7^^^7 zoosadists could experience orgasm watching just one puppy tortured, the deontological filter accepts only the negative contribution to Σ from the puppy's utility function (*). Now, this is not very reasonable, but the problem is that I can't easily switch off my deontological checks, nor do I particularly want to learn to.

So, to answer 1), I have to guess how I would reason in the absence of the deontological filters, and I am not very confident about the results. But say that for a month of really horrible torture (assuming the victim is not a masochist and hasn't given consent) with no long-term consequences, I suppose that N would be of order 10^8 with X = a week of a very pleasant vacation. (I could have written 10^10 or 10^6 instead of 10^8; it doesn't actually feel any different.)

As for 2), the largeness of N doesn't necessarily reflect the presence of deontology. It may result either from torture having far larger disutility than the utility of the pleasure, or from some discount for uneven utility distribution. In my case it is both.

(*) Even if I did get rid of rigid deontology, I would like to preserve some discount for illegitimately gained utilities. So, the value of Σ, taken as argument of my utility function, would be higher if the zoosadists got exactly the same amount of pleasure by other means and the puppy was meanwhile tortured only by accident.

1) A few thoughts:

a. Intuitively, fun per person feels like it is upper bounded (just like the utility of money). I cannot imagine what kind of fun for one person could compensate for a month of torture. We will fix X and play with N.

b. The difficulty seems to arise from the fact that the mechanical answer is to set, e.g., X = 1, N = ceil(Y + epsilon) (where -Y is the utility of the torture), yet this intuitively does not feel right. Some thoughts on why:

  • Scope insensitivity. We have trouble instinctively multiplying fun by many people (i.e., I can picture one day of fun very clearly, but have trouble multiplying that by millions of people, to get anywhere near compensating for a month of torture). So the mechanical answer would be correct, but our brains cannot evaluate the fun side of the equation “intuitively”.

  • Bias towards “fairness”. Having one person suffer and others have fun to compensate doesn’t feel right (or at least assessing this trade-off is difficult)

  • Bias towards negative utilitarianism. We instinctively prefer minimizing suffering to maximizing pleasure.

c. When we scale things down, the question seems easier. Suppose we set X = an amazing week of fun I will remember for the rest of my life, bringing me joy to think about it again and again. Now I ask how much torture I would put up with for that. This is a question I feel I can intuitively answer. I would trade, say, Y = 10 seconds of extreme pain for X (assuming no permanent physical damage).
To move to 1 month of torture, we need to assess how the utility of a torture session scales with its duration. If it scaled linearly, 1 month of torture is ~2.6 million seconds of torture, which means we should set N ≈ 260,000 for X defined above (a back-of-the-envelope version of this arithmetic is sketched after point 2 below). I feel that after a few days the scale is sub-linear, but it may be super-linear before that (when you start moving into the permanent-psychological-damage territory). I cannot seem to come up with an answer I am comfortable with.

2) As pointed out by Nornagest, if I give you my utility function (mapping all possible pains and pleasures to a number), I'm not sure how you could tell whether pain and pleasure are being measured "on the same scale" or not.
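
For what it's worth, here is the back-of-the-envelope arithmetic from point 1c above as a tiny sketch; the 10-seconds-per-X trade and the linear-scaling assumption are the commenter's own, and the month length (30 days) is an approximation.

```python
seconds_per_month = 30 * 24 * 3600  # roughly 2.59 million seconds in a month
seconds_per_X = 10                  # Y: seconds of extreme pain traded for one unit of X

# Under the (dubious) linear-scaling assumption, this many people getting X each
# would compensate for one month of torture.
N = seconds_per_month / seconds_per_X
print(round(N))  # 259200, i.e. roughly 260,000
```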

This question has a natural upper bound, determined by the preexisting torture/fun distribution in the world. Most people would hold that it's morally acceptable to make the world larger by adding more people whose life outcomes will be drawn from the same probability distribution as the people alive today. If presently the average person has X0 fun, and fraction T0 of people are tortured without you being able to stop it, then you should press the button if X>X0 and N>1/T0.
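
As a sketch, the decision rule being proposed reduces to something like the following; X0 and T0 are whatever you estimate the current average fun level and current torture fraction to be (the names and example numbers are just placeholders).

```python
def press_button(X, N, X0, T0):
    """Press iff adding 1 tortured person plus N people with X fun each looks no worse
    than adding N+1 people drawn from the world's existing distribution:
    more fun per added person, and a torture fraction below the existing one."""
    return X > X0 and N > 1 / T0

# Example with made-up numbers: average fun 10, one person in a million currently tortured.
print(press_button(X=15, N=2_000_000, X0=10, T0=1e-6))  # True
```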

A lower bound would presumably be the amount of fun the show 24 provides on average, with N being the ratio of the show's viewers to prisoners rendered so they could be tortured. (Take That, other side!)

You're assuming that people make decisions consistently. I know people are inconsistent, so I'm only interested in their answers to this concrete question, not in what would be "logically implied" by the rest of their behavior.

Most of the usual thought experiments that justify expected utilitarianism

Is this question only intended for people who do not consider expected utilitarianism obviously abominable? I mean... it has SUM() built in as the intrinsic aggregation tool. SUM isn't even remotely stable, let alone an ideal to aspire to.

2) If you say X or N must be very large, does this prove that you measure torture and fun using in effect different scales, and therefore are a deontologist rather than a utilitarian?

I confess my knowledge of the subject is non-existent, but everything I have seen discusses utilitarianism as a subset of the broader class of teleological theories, and suggests that not all of these would be uncomfortable with the idea of using different scales, and also that deontologists don't have scales for measuring outcomes at all.

It seems to me that, as a point of human psychology, pain and fun genuinely do have different scales. The reason is that the worst pain is more subjectively intense than the best pleasure, which looks to have obvious evolutionary reasons. And similarly, our wince reaction on seeing someone get kicked is a lot stronger than our empathic happiness at seeing someone have sex. So questions like "how many orgasms equal one crushed testicle" are genuinely difficult, not just unpopular. (Which is not to deny that they are unpopular as well, because few people like to think about hard questions.)

None of which excuses us from making decisions in the case that we have limited resources and can either prevent pain or cause pleasure. I'm just saying that it seems to me that our psychology is set up to have a very hard time with this.

To answer the question: For one orgasm, I will trade a few good hard punches leaving bruises. (Observe, incidentally, that this or a similar trade is made many times daily by volunteers in BDSM relationships; noting that there are people who enjoy some kinds of pain, and also people who don't enjoy the pain at all but who are willing to not-enjoy it because they enjoy giving pleasure to someone else by not-enjoying it. Human psychology is complex stuff.) I will not trade permanent damage (psychological or physical) for any number of orgasms, on the grounds of opportunity costs: You can always get another orgasm, but permanent damage is by construction irrecoverable. In the hypothetical where we gene-engineer humans to be more resilient or orgasms to be more intense, the details will change but not the refusal to trade temporary pleasure for permanent damage.

I must admit that I feel uneasy about the above; it seems I'm saying that there are diminishing returns to orgasms, which doesn't look quite right. Alternatively, perhaps I'm measuring utility as the maximum gain or loss of a single person in my sample of 3^^^^3, rather than summing over everyone - another form of diminishing returns, basically. Nonetheless, this is my intuition, that permanent damage ought to be avoided at any cost in pleasure. Possibly just risk-aversion bias?

It seems I'm saying that there are diminishing returns to orgasms, which doesn't look quite right

That you think it doesn't look right is evidence to me that you are not, and have never been, a chronic masturbator.

Unless I'm missing something, using different scales doesn't actually affect utilitarian calculations. For the question to be coherent, there has to be a conversion between positive and negative utility. Now, in this formulation of the problem, there's no obvious natural conversion between fun and pain, but any monotonic conversion function that we choose to adopt will lead to well-defined tradeoffs and thus a well-defined utilitarianism. Some functions would end up looking rather silly, but presumably we're smart enough not to use those.

Interestingly, of the major act utilitarianisms that I'm aware of, this problem only seems to arise at all in pleasure/pain utilitarianism; negative utilitarianism doesn't admit to the existence of an exchange rate for torture (which seems rather shaky in light of preferences similar to your own), while preference utilitarianism carries a natural conversion methodology.

a deontologist rather than a utilitarian

The opposite of a deontologist is a consequentialist, not a utilitarian. All utilitarians are consequentialists but not all consequentialists are utilitarians.

(The domain of the above paragraph being restricted to the subset of agents whose moral preferences are described by common one-word terms.)

secret option C: I work towards futures where causal mechanisms flowing between one person's pain and another's pleasure are eliminated.

Isn't this the unsecret option "N would have to be 0"?

2) If you say X or N must be very large, does this prove that you measure torture and fun using in effect different scales, and therefore are a deontologist rather than a utilitarian?

Not in and of itself. That proof would also require knowing that this deviates from the actual utility assigned to each state. Without knowing those numbers you can only draw inferences that they are probably deontologists based on estimating the implausibility of the implied utility evaluations.

Mind you, the vast majority of people claiming to be utilitarian do seem to be deontologists, regardless of their verbal expressions to the contrary.

My reflex answer is that I should calculate the average amount of utils that I gain per unit of fun given to a random person and lose per unit of torture applied to a random person. I am not a true utilitarian, so this would be affected by the likelihood that the person I picked was of greater importance to me (causing a higher number of utils to be gained/lost for fun/torture, respectively) than a random stranger.

Now, let's try to frame this somewhat quantitatively. Pretend that the world is divided into really happy people (RHP) who experience, by default, 150 Fun Units (funU) per month, happy people (HP) who experience 100 funU/mo by default, and sad people (SP) who experience only 50 funU/mo by default. The world is composed of .05 RHP, .7 HP, and .25 SP. For modeling purposes, being tortured means that you lose all of your fun, and then your fun comes back at a rate of 10%/mo. There aren't a significant number of people to whom I attach greater-than-stranger importance, so this doesn't actually affect the calculation much... except that I think that we have at least some chance of getting an FAI working, and Eliezer might be mentally damaged if he got tortured for a month. Were I actually faced with this choice, I would probably come up with a more accurate calculation, but I'll estimate that this factor causes me to arbitrarily bump up everyone's default fun values by 25 funU/mo.

Fun lost for average RHP is 175+(175 x .9)+(175 x .8)+(175 x .7), and so on. Fun lost for average HP is 125+(125 x .9)+(125 x .8)+(125 x .7), and so on. Fun lost for average SP is 75+(75 x .9)+(75 x .8)+(75 x .7), and so on.

We average the three, weighting each by a factor of .05, .7, and .25, respectively, and get a number, expressed in funU. Anything higher than this would be my answer, and I predict that I would accept the offer regardless of how many people this funU was split amongst.

Edit: Font coding played havoc with my math.
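
A minimal sketch of the calculation described above, using only the toy numbers given in that comment (defaults bumped by the stated +25 funU/mo, fun returning at 10% per month):

```python
# Fraction of monthly fun lost in months 1..10 after the torture (100%, 90%, ..., 10%).
recovery = [(10 - k) / 10 for k in range(10)]

def fun_lost(monthly_fun):
    return sum(monthly_fun * r for r in recovery)  # equals monthly_fun * 5.5

# (population weight, bumped monthly funU) for each group in the toy model.
groups = {"RHP": (0.05, 175), "HP": (0.70, 125), "SP": (0.25, 75)}

threshold = sum(weight * fun_lost(fun) for weight, fun in groups.values())
print(round(threshold, 2))  # 632.5 funU of total fun needed to compensate, on this model
```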

I am not a true utilitarian, so this would be affected by the likelihood that the person I picked was of greater importance to me (causing a higher number of utils be gained/lost for fun/torture, respectively) than a random stranger.

You needn't value all people equally to be a true utilitarian, at least in the sense the word is used here.

...really happy people (RHP) who experience, by default, 150 Fun Units (funU) per month, happy people (HP) who experience 100 funU/mo by default, and sad people (SP) who experience only 50 funU/mo by default. ... being tortured means that you lose all of your fun...

I think you are seriously underestimating torture by supposing that the difference between really happy (top 5% level) and sad (bottom 25% level) is bigger than between sad and tortured. It should rather be something like: really happy 100 U, happy 70 U, sad 0 U, tortured -3500 U.

You needn't value all people equally to be a true utilitarian, at least in the sense the word is used here.

Really? Is all I need to do to be a utilitarian to attach any amount of utility to other people's utility functions and/or feelings?

I think you are seriously underestimating torture by supposing that the difference between really happy (top 5% level) and sad (bottom 25% level) is bigger than between sad and tortured. It should rather be something like: really happy 100 U, happy 70 U, sad 0 U, tortured -3500 U.

Uh, oops. I'm thinking that I could respond with this counterargument: "But 0 funU is really, really bad -- you're just sticking the really bad mark at -3500 while I'm sticking it at zero."

Sadly, the fact that I could make that sort of my remark reveals that I haven't actually made much of a claim at all in my post because I haven't defined what 1 funU is in real world terms. All I've really assumed is that funU is additive, which doesn't make much sense considering human psychology.

There goes that idea.

Is all I need to do to be a utilitarian to attach any amount of utility to other people's utility functions and/or feelings?

Attach amounts of utility to possible states of the world; otherwise, no constraints. That is how utilitarianism is probably understood by most people here. Outside LessWrong, different definitions may be predominant.

"But 0 funU is really, really bad -- you're just sticking the really bad mark at -3500 while I'm sticking it at zero."

As you wish: so really happy 3600, happy 3570, sad 3500, tortured 0. Utility functions should be invariant with respect to additive or multiplicative constants. (Any monotonic transformation may work if applied to the whole of your utility function, but not to parts you are going to sum.) I was objecting to the relative differences: in your original setting, assuming additivity (not wrong per se), moving one person from sad to very happy would balance moving two other people from sad to tortured. That seems obviously wrong.
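
A quick worked check of that last point, sketched with just the numbers from the two comments above:

```python
# funU per month on each scale: (really happy, happy, sad, tortured)
original = {"really_happy": 150, "happy": 100, "sad": 50, "tortured": 0}
rescaled = {"really_happy": 100, "happy": 70, "sad": 0, "tortured": -3500}

def delta_sigma(scale):
    # One person moved from sad to really happy, two people moved from sad to tortured.
    return (scale["really_happy"] - scale["sad"]) + 2 * (scale["tortured"] - scale["sad"])

print(delta_sigma(original))  # 0: the two moves exactly cancel, which looks wrong
print(delta_sigma(rescaled))  # -6900: a large net loss, as intuition suggests
```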