bgaesop comments on The "Intuitions" Behind "Utilitarianism" - Less Wrong
Comments (193)
Eliezer, to be clear, do you still think that 3^^^3 people having momentary eye irritations from dust specks is worth torturing a single person for 50 years, or is there a possibility that you did the math incorrectly for that example?
No. I used a number large enough to make math unnecessary.
I specified the dust specks had no distant consequences (no car crashes etc.) in the original puzzle.
Unless the torture somehow causes Vast consequences larger than the observable universe, or the suicide of someone who otherwise would have been literally immortal, it doesn't matter whether the torture has distant consequences or not.
I confess I didn't think of the suicide one, but I was very careful to choose an example that didn't involve actually killing anyone, because there someone was bound to point out that there was a greater-than-tiny probability that literal immortality is possible and would otherwise be available to that person.
So I will specify only that the torture does not have any lasting consequences larger than a moderately sized galaxy, and then I'm done. Nothing bound by lightspeed limits in our material universe can morally outweigh 3^^^3 of anything noticeable. You'd have to leave our physics to do it.
You know how some people's brains toss out the numbers? Well, when you're dealing with a number like 3^^^3 in a thought experiment, you can toss out the event descriptions. If the thing being multiplied by 3^^^3 is good, it wins. If the thing being multiplied by 3^^^3 is bad, it loses. Period. End of discussion. There are no natural utility differences that large.
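To get a sense of the scale being invoked above, Knuth's up-arrow notation (the `^^` in 3^^^3) can be sketched as a toy recursion. The function below is an illustrative implementation, not anything from the thread; 3^^^3 itself is far beyond any possible computation.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a followed by n arrows, then b.

    One arrow is ordinary exponentiation; each additional
    arrow iterates the previous operation b times.
    """
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^27... no, 3^27) = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of threes about 7.6 trillion
# levels high. Calling up_arrow(3, 3, 3) would never terminate
# in practice, which is rather the point.
```

Already at two arrows the number outstrips ordinary intuition; three arrows is what the thought experiment multiplies by.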
I really don't see why I can't say "the negative utility of a dust speck is 1 over Graham's number," or "I am not obligated to have my utility function make sense in contexts like those involving 3^^^^3 participants, because my utility function is intended to be used in This World, and a number like that is a physical impossibility in This World."
As a separate response, what's wrong with this calculation: I base my judgments largely on the duration of the disutility. After 1 second, the dust specks disappear and are forgotten, and so their disutility also disappears. The same is not true of the torture; the torture is therefore worse. I can foresee some possible problems with this line of thought, but it's 2:30 am in New Orleans and I've just finished a long evening of drinking at the Joint Mathematics Meetings, so please forgive me if I don't attempt to formalize it now.
An addendum: two more things. First, the difference between a life with n dust specks hitting your eye and one with n+1 is not worth considering, given how large n is over any real lifetime. Furthermore, if we allow for possible immortality, n could literally be infinite, so the difference would be literally 0.
Secondly, by asserting that there exists an action with minimal disutility, you've shown that the Field of Utility is very different from the field of, say, the real numbers, and so I am incredulous that we can simply "multiply" in the usual sense.
You can say anything, but Graham's number is very large; if the disutility of an air molecule slamming into your eye were 1 over Graham's number, enough air pressure to kill you would have negligible disutility.
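The arithmetic behind this reply can be checked with exact fractions. Graham's number itself cannot be represented, so the stand-in denominator below (a googol, a vast *underestimate* of it) and the molecule count (roughly the molecules in a litre of air at STP) are assumptions for illustration only.

```python
from fractions import Fraction

# Stand-in numbers (assumptions, not from the thread):
GRAHAM_STAND_IN = 10 ** 100   # a googol -- vastly smaller than Graham's number
MOLECULES = 27 * 10 ** 21     # ~molecules in a litre of air at STP

per_molecule = Fraction(1, GRAHAM_STAND_IN)
total = per_molecule * MOLECULES

print(float(total))  # ~2.7e-78: negligible even for a lethal quantity of air
```

With the real Graham's number as denominator the total would be unimaginably smaller still, so a utility function that assigns a speck a disutility of 1 over Graham's number implies that lethal air pressure is almost harmless, which is the reductio being offered.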
If your utility function ceases to correspond to utility at extreme values, isn't it more of an approximation of utility than actual utility? Sure, you don't need a model that works at the extremes - but when a model does hold for extreme values, that's generally a good sign for the accuracy of the model.
If utility is to be compared relative to lifetime utility, i.e. as ((LifetimeUtility + x) / LifetimeUtility), doesn't that assign a higher impact to five seconds of pain for a twenty-year-old who will die at 40 than for a twenty-year-old who will die at 120? Does that make sense?
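The worry can be made concrete with a toy calculation. All numbers here are hypothetical (utility accruing at 1 unit per year, five seconds of pain costing 0.001 units), and the normalization is my reading of the proposal above, namely (LifetimeUtility + x) / LifetimeUtility.

```python
def relative_impact(lifetime_utility, x):
    """Relative impact of an event of utility x under the proposed
    normalization (LifetimeUtility + x) / LifetimeUtility."""
    return (lifetime_utility + x) / lifetime_utility

x = -0.001  # hypothetical cost of five seconds of pain, in utility units

# Hypothetical lifetime utilities: 1 unit/year, deaths at 40 vs 120.
shorter = relative_impact(40.0, x)
longer = relative_impact(120.0, x)

print(1 - shorter, 1 - longer)  # 2.5e-05 vs ~8.3e-06: the same pain
                                # weighs three times as much in the
                                # shorter life under this scheme
```

That the identical experience counts for more merely because the sufferer dies younger is the oddity the question is pointing at.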
Eliezer's point does not seem to me predicated on the existence of such a value; I see no need to assume multiplication has been broken.
Yes, this seems like a good argument that we can't add up disutility for things like "being bumped into by particle type X" linearly. In fact, it seems like having one molecule of air bump into me, or even however many molecules I breathe in a day, is a good thing, and so we can't just talk about things like "the disutility of being bumped into by kinds of particles".
Yeah, of course. Why, do you know of some way to accurately access someone's actually-existing Utility Function in a way that doesn't just produce an approximation of an idealization of how ape brains work? Because me, I'm sitting over here using an ape brain to model itself, and this particular ape doesn't even really expect to leave this planet or encounter or affect more than a few billion people, much less 3^^^3. So it's totally fine using something accurate to a few significant figures, trying to minimize errors that would have noticeable effects on these scales.
Yes, I agree. Given that your model is failing at these extreme values and telling you to torture people instead of blink, I think that's a bad sign for your model.
Yeah, absolutely, I definitely agree with that.
That would be failing, but 3^^^3 people blinking != you blinking. You just don't comprehend the size of 3^^^3.
Well, it's self-evident that that's silly. So, there's that.