smoofra comments on Revisiting torture vs. dust specks - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
the right answer is |U(3^^^3 + 1 dust specks) - U(3^^^3 dust specks)| < |U(1 dust speck) - U(0 dust specks)|, and U(any number of dust specks) < U(torture)
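These inequalities can be satisfied by, for instance, a bounded speck-disutility function whose marginal harm shrinks with each additional speck. A minimal sketch, assuming U measures disutility (badness); the constants S and T are illustrative, not from the thread:

```python
# hypothetical disutility model: n specks approach a bound S,
# which stays strictly below the disutility T of torture
S, T = 100.0, 1e9

def speck_disutility(n):
    # bounded and increasing, with shrinking marginal disutility
    return S * (1 - 2.0 ** -n)

# the marginal harm of one more speck keeps shrinking...
assert (speck_disutility(2) - speck_disutility(1)
        < speck_disutility(1) - speck_disutility(0))
# ...and no number of specks ever reaches the disutility of torture
assert speck_disutility(10**6) < T
```

Any bounded, concave disutility curve of this shape gives the same two properties.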
There is no additivity axiom for utility.
This is called the "proximity argument" in the post.
I've no idea how we're managing to have this discussion under a deleted submission. It shouldn't have even been posted to LW! It was live for about 30 seconds until I realized I clicked the wrong button.
It's in the feed now, and everyone subscribed will see it. You cannot unpublish on the Internet! Can you somehow "undelete" it? I think it's a fine enough post.
Nope, I just tried pushing some buttons (edit, save, submit etc.) and it didn't work. Oh, boy. I created a secret area on LW!
Hmm. That should probably be posted to Known Issues...
What smoofra said (although I would reverse the signs and assign torture and dust specks negative utility). Say there is a singularity in the utility function for torture (goes to negative infinity). The utility of many dust specks (finite negative) cannot add up to the utility for torture.
If the utility function for torture were negative infinity:
- any choice with a nonzero probability of leading to torture gains infinite disutility,
- any torture of any duration has the same disutility (infinite),
- the criteria for torture vs. non-torture become rigid: something which is almost torture is literally infinitely better than something which is barely torture,
et cetera.
In other words, I don't think this is a rational moral stance.
RobinZ, perhaps my understanding of the term utility differs from yours. In finance & economics, utility is a scalar (i.e., a real number) function u of wealth w, subject to:
u(w) is non-decreasing; u(w) is concave downward.
(Negative) singularities to the left are admissible.
I confess I don't know about the history of how the utility concept has been generalized to encompass pain and pleasure. It seems a multi-valued utility function might work better than a scalar function.
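A standard example meeting both criteria, with a negative singularity at the left edge of its domain, is log utility. A quick check (my example, not from the thread):

```python
import math

u = math.log  # u(w) = ln(w), defined for wealth w > 0

assert u(2.0) > u(1.0)                 # non-decreasing
assert u(1.5) > (u(1.0) + u(2.0)) / 2  # concave: midpoint lies above the chord
assert u(1e-300) < -600                # u(w) -> -infinity as w -> 0+
```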
The criteria you mention don't exclude a negative singularity to the left, but when you attempt to optimize for maximum utility, the singularity causes problems. I was describing a few.
Edit: I mean to say: in the utilitarianism-utility function, which has multiple inputs.
I can envision a vector utility function u(x) = (a, b), where the ordering is on the first term a, unless there is a tie at negative infinity; in that case the ordering is on the second term b. b is -1 for one person-hour of minimal torture, and it's multiplicative in persons, duration and severity >= 1. (Pain infliction of less than 1 times minimal torture severity is not considered torture.) This solves your second objection, and the other two are features of this 'Just say no to torture' utility function.
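In Python, tuple comparison is already lexicographic, so this vector utility can be sketched directly. A hypothetical illustration; the numbers are mine:

```python
import math

def torture_disutility(persons, hours, severity):
    # b term: -1 per person-hour at minimal severity; severity >= 1 counts
    if severity < 1:
        return 0.0  # below minimal torture severity: not torture at all
    return -persons * hours * severity

def utility(a_non_torture, persons=0, hours=0, severity=0):
    b = torture_disutility(persons, hours, severity)
    a = -math.inf if b < 0 else a_non_torture  # any torture ties a at -inf
    return (a, b)

# tuples compare lexicographically: a decides, unless tied at -inf, then b
no_torture = utility(-1000.0)                         # e.g. many dust specks
some_torture = utility(0.0, persons=1, hours=1, severity=1)
worse_torture = utility(0.0, persons=2, hours=1, severity=1)
assert no_torture > some_torture     # any specks beat any torture
assert some_torture > worse_torture  # among tortures, less is still better
```

The second term is what rescues the ordering among torture scenarios, which a plain negative-infinity utility cannot distinguish.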
Quote:
- any choice with a nonzero probability of leading to torture gains infinite disutility,
- any torture of any duration has the same disutility (infinite),
- the criteria for torture vs. non-torture become rigid: something which is almost torture is literally infinitely better than something which is barely torture,
But every choice has a nonzero probability of leading to torture. Your proposed moral stance amounts to "minimize the probability-times-intensity of torture", to which a reasonable answer might be, "set off a nuclear holocaust annihilating all life on the planet".
(And the distinction between torture and non-torture is - at least in the abstract - fuzzy. How much pain does it have to be to be torture?)
In real life or in this example? I don't believe this is true in real life.
There is nothing you can do that makes it impossible that there will be torture. Therefore, every choice has a nonzero probability of being followed by torture. I'm not sure whether "leading to torture" is the best way to phrase this, though.
What he said. Also, if you are evaluating the rectitude of each possible choice by its consequences (i.e. using your utility function), it doesn't matter if you actually (might) cause the torture or if it just (possibly) occurs within your light cone - you have to count it.
See Absolute certainty.
Proof left to the reader?
If I am to choose between getting a glass of water or a cup of coffee, I am quite confident that neither choice will lead to torture. You certainly cannot prove that either choice will lead to torture. Absolute certainty has nothing to do with it, in my opinion.
This was confronted in the Escalation Argument. Would you prefer 1000 people being tortured for 49 years to 1 person being tortured for 50 years? (If you would, take 1000 to 1000000 and 49 to 49.99, etc.) Is there any step of the argument where your projected utility function isn't additive enough to prefer that a much smaller number of people suffer a little bit more?
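The escalation chain can be made concrete with a simply additive disutility (people times years); at every step, 1000x as many people suffer for slightly less time, and additivity prefers the smaller, worse-off group. A sketch with illustrative numbers:

```python
# each step: 1000x the people, 0.999x the duration; additive disutility
# (people * years) says the previous step is always preferable
people, years = 1, 50.0
for _ in range(5):
    next_people, next_years = people * 1000, years * 0.999
    assert people * years < next_people * next_years  # fewer sufferers wins
    people, years = next_people, next_years
```

Iterated enough times, the duration shrinks toward speck-level discomfort while the population explodes toward 3^^^3, which is the trap the argument sets.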
Actually, I think you're right. The escalation argument has caught me in a contradiction. I wonder why I didn't see it last time around.
I still prefer the specks, though. My prior in favor of the specks is strong enough that I have to conclude there's something wrong with the escalation argument that I'm not presently clever enough to find. It's a bit like reading a proof that 2 + 2 = 5. You know you've just read a proof, and you checked each step, but you still, justifiably, don't believe it. It's far more likely that the proof fooled you in some subtle way than that arithmetic is actually inconsistent.
Well, we have better reasons to believe that arithmetic is consistent than we have to believe that human beings' strong moral impulses are coherent in cases outside of everyday experience. I think much of the point of the SPECKS vs. TORTURE debate was to emphasize that our moral intuitions aren't perceptions of a consistent world of values, but instead a thousand shards of moral desire which originated in a thousand different aspects of primate social life.
For one thing, our moral intuitions don't shut up and multiply. When we start making decisions that affect large numbers of people (3^^^3 isn't necessary; a million is enough to take us far outside of our usual domain), it's important to be aware that the actual best action might sometimes trigger a wave of moral disgust, if the harm to a few seems more salient than the benefit to the many, etc.
Keep in mind that this isn't arguing for implementing Utilitarianism of the "kill a healthy traveler and harvest his organs to save 10 other people" variety; among its faults, that kind of Utilitarianism fails to consider its probable consequences on human behavior if people know it's being implemented. The circularity of "SPECKS" just serves to point out one more domain in which Eliezer's Maxim applies:
This came to mind: what you intuitively believe about a statement might as well be described as an "emotion" of "truthiness", triggered by holding the model in the focus of attention, just like any other emotion that values situations. Emotion isn't always right, and an estimate of plausibility isn't always right, but these are basically the same thing. I used to separate them, along the line of the probability-utility distinction, but that distinction is probably more confusing than helpful, with truthiness on its own and the concept of emotions containing everything but it.
Yup. I get all that. I still want to go for the specks.
Perhaps it has to do with the fact that 3^^^3 is way more people than could possibly exist. Perhaps the specks vs. torture hypothetical doesn't actually matter. I don't know. But I'm just not convinced.
Just give up already! Intuition isn't always right!
Hello. I think the Escalation Argument can sometimes be found on the wrong side of Zeno's Paradox. Say there is negative utility to both dust specks and torture, where dust specks have finite negative utility. Both dust specks and torture can be assigned to an 'infliction of discomfort' scale that corresponds to a segment of the real number line. At minimal torture, there is a singularity in the utility function: it goes to negative infinity.
At any point on the number line corresponding to an infliction of discomfort between dust specks and minimal torture, the utility is negative but finite. The Escalation Argument begins in the torture zone, and slowly diminishes the duration of the torture. I believe the argument breaks down when the infliction of discomfort is no longer torture. At that point, non-torture has higher utility than all preceding torture scenarios. If it's always torture, then you never get to dust specks.
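A minimal sketch of that breakdown, with a hypothetical threshold on the discomfort scale where torture begins (the constants are mine):

```python
import math

TORTURE_THRESHOLD = 100.0  # hypothetical discomfort level where torture begins

def disutility(discomfort, persons=1):
    # finite below the threshold; negative-infinite at or above it
    if discomfort >= TORTURE_THRESHOLD:
        return -math.inf
    return -persons * discomfort

# the escalation chain stays tied at -inf while it is still torture...
assert disutility(150.0) == disutility(100.0) == -math.inf
# ...but breaks the moment discomfort dips below the threshold: any
# number of sub-torture sufferers beats even one case of minimal torture
assert disutility(99.9, persons=10**12) > disutility(100.0)
```

This is exactly the rigidity objection above: the function cannot rank 25 years of torture against 50, because both sit at the same negative infinity.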
Then your utility function can no longer say 25 years of torture is preferable to 50 years. This difficulty is surmountable - I believe the original post had some discussion on hyperreal utilities and the like - but the scheme looks a little contrived to me.
To me, a utility function is a contrivance. So it's OK if it's contrived. It's a map, not the territory, as illustrated above.
I take someone's answer to this question at their word. When they say that no number of dust specks equals torture, I accept that as a datum for their utility function. The task is then to contrive a function which is consistent with that.
Orthonormal, you're rehashing things I've covered in the post. Yes, many reasonable discounting methods (like exponential discounting in the "proximity argument") do have a specific step where the derivative becomes negative.
What's more, that fact doesn't look especially unintuitive once you zoom in on it; do the math and see. For example, in the proximity argument the step occurs when the additional people suffer so far away from you that even an infinity of them sums to less than, say, one close relative of yours. Not so unrealistic for everyday humans, is it?
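The arithmetic behind that claim is just a convergent geometric series. A hedged sketch of exponential proximity discounting, with illustrative constants of my own choosing:

```python
# distant suffering is exponentially discounted: a speck to a person at
# "distance" k counts as speck * r**k, with 0 < r < 1
r = 0.5          # hypothetical per-step discount factor
speck = 1.0      # disutility of one speck to the person closest to you

# geometric series: sum over k >= 0 of speck * r**k = speck / (1 - r)
infinite_distant_total = speck / (1 - r)  # converges to 2.0 here
one_close_relative = 5.0                  # hypothetical weight on a relative

# even infinitely many distant sufferers total less than one close relative
assert infinite_distant_total < one_close_relative
```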