(Still no Internet access.  Hopefully they manage to repair the DSL today.)

    I haven't said much about metaethics - the nature of morality - because that has a forward dependency on a discussion of the Mind Projection Fallacy that I haven't gotten to yet.  I used to be very confused about metaethics.  After my confusion finally cleared up, I did a postmortem on my previous thoughts.  I found that my object-level moral reasoning had been valuable and my meta-level moral reasoning had been worse than useless.  And this appears to be a general syndrome - people do much better when discussing whether torture is good or bad than when they discuss the meaning of "good" and "bad".  Thus, I deem it prudent to keep moral discussions on the object level wherever I possibly can.

    Occasionally people object to any discussion of morality on the grounds that morality doesn't exist, and in lieu of jumping over the forward dependency to explain that "exist" is not the right term to use here, I generally say, "But what do you do anyway?" and take the discussion back down to the object level.

    Paul Gowder, though, has pointed out that both the idea of choosing a googolplex of dust specks in a googolplex of eyes over 50 years of torture for one person, and the idea of "utilitarianism", depend on "intuition".  He says I've argued that the two are not compatible, but charges me with failing to argue for the utilitarian intuitions that I appeal to.

    Now "intuition" is not how I would describe the computations that underlie human morality and distinguish us, as moralists, from an ideal philosopher of perfect emptiness and/or a rock. But I am okay with using the word "intuition" as a term of art, bearing in mind that "intuition" in this sense is not to be contrasted to reason, but is, rather, the cognitive building block out of which both long verbal arguments and fast perceptual arguments are constructed.

    I see the project of morality as a project of renormalizing intuition.  We have intuitions about things that seem desirable or undesirable, intuitions about actions that are right or wrong, intuitions about how to resolve conflicting intuitions, intuitions about how to systematize specific intuitions into general principles.

    Delete all the intuitions, and you aren't left with an ideal philosopher of perfect emptiness, you're left with a rock.

    Keep all your specific intuitions and refuse to build upon the reflective ones, and you aren't left with an ideal philosopher of perfect spontaneity and genuineness, you're left with a grunting caveperson running in circles, due to cyclical preferences and similar inconsistencies.

    "Intuition", as a term of art, is not a curse word when it comes to morality - there is nothing else to argue from.  Even modus ponens is an "intuition" in this sense - it's just that modus ponens still seems like a good idea after being formalized, reflected on, extrapolated out to see if it has sensible consequences, etcetera.

    So that is "intuition".

    However, Gowder did not say what he meant by "utilitarianism".  Does utilitarianism say...

    1. That right actions are strictly determined by good consequences?
    2. That praiseworthy actions depend on justifiable expectations of good consequences?
    3. That consequences should normatively be weighted by their probability, so that a 50% probability of something bad should weigh exactly half as much in our tradeoffs?
    4. That virtuous actions always correspond to maximizing expected utility under some utility function?
    5. That two harmful events are worse than one?
    6. That two independent occurrences of a harm (not to the same person, not interacting with each other) are exactly twice as bad as one?
    7. That for any two harms A and B, with A much worse than B, there exists some tiny probability such that gambling on this probability of A is preferable to a certainty of B?

    If you say that I advocate something, or that my argument depends on something, and that it is wrong, do please specify what this thingy is... anyway, I accept 3, 5, 6, and 7, but not 4; I am not sure about the phrasing of 1; and 2 is true, I guess, but phrased in a rather solipsistic and selfish fashion: you should not worry about being praiseworthy.
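
    (A minimal numerical sketch, with placeholder harm magnitudes, of how the accepted propositions 3, 6, and 7 cash out as expected-disutility arithmetic:)

```python
# Toy illustration of propositions 3, 6, and 7; the disutility numbers are
# arbitrary placeholders, not anything from the post.

def expected_disutility(outcomes):
    """Proposition 3: weight each possible harm by its probability."""
    return sum(p * d for p, d in outcomes)

small_harm = 1.0          # e.g. a stubbed toe
large_harm = 1_000_000.0  # something much worse

# Proposition 6: two independent occurrences of a harm are exactly twice as bad as one.
assert expected_disutility([(1.0, small_harm), (1.0, small_harm)]) == 2 * small_harm

# Proposition 3: a 50% chance of a harm weighs exactly half as much as its certainty.
assert expected_disutility([(0.5, small_harm)]) == 0.5 * small_harm

# Proposition 7: for a sufficiently tiny probability, gambling on the much worse harm
# has less expected disutility than the certainty of the lesser harm.
tiny_p = small_harm / (2 * large_harm)
assert expected_disutility([(tiny_p, large_harm)]) < expected_disutility([(1.0, small_harm)])
```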

    Now, what are the "intuitions" upon which my "utilitarianism" depends?

    This is a deepish sort of topic, but I'll take a quick stab at it.

    First of all, it's not just that someone presented me with a list of statements like those above, and I decided which ones sounded "intuitive".  Among other things, if you try to violate "utilitarianism", you run into paradoxes, contradictions, circular preferences, and other things that aren't symptoms of moral wrongness so much as moral incoherence.

    After you think about moral problems for a while, and also find new truths about the world, and even discover disturbing facts about how you yourself work, you often end up with different moral opinions than when you started out.  This does not quite define moral progress, but it is how we experience moral progress.

    As part of my experienced moral progress, I've drawn a conceptual separation between questions of type Where should we go? and questions of type How should we get there?  (Could that be what Gowder means by saying I'm "utilitarian"?)

    The question of where a road goes - where it leads - you can answer by traveling the road and finding out.  If you have a false belief about where the road leads, this falsity can be destroyed by the truth in a very direct and straightforward manner.

    When it comes to wanting to go to a particular place, this want is not entirely immune from the destructive powers of truth.  You could go there and find that you regret it afterward (which does not define moral error, but is how we experience moral error).

    But, even so, wanting to be in a particular place seems worth distinguishing from wanting to take a particular road to a particular place.

    Our intuitions about where to go are arguable enough, but our intuitions about how to get there are frankly messed up.  After the two hundred and eighty-seventh research study showing that people will chop their own feet off if you frame the problem the wrong way, you start to distrust first impressions.

    When you've read enough research on scope insensitivity - people will pay only 28% more to protect all 57 wilderness areas in Ontario than one area, people will pay the same amount to save 50,000 lives as 5,000 lives... that sort of thing...

    Well, the worst case of scope insensitivity I've ever heard of was described here by Slovic:

    Other recent research shows similar results. Two Israeli psychologists asked people to contribute to a costly life-saving treatment. They could offer that contribution to a group of eight sick children, or to an individual child selected from the group. The target amount needed to save the child (or children) was the same in both cases. Contributions to individual group members far outweighed the contributions to the entire group.

    There's other research along similar lines, but I'm just presenting one example, 'cause, y'know, eight examples would probably have less impact.

    If you know the general experimental paradigm, then the reason for the above behavior is pretty obvious - focusing your attention on a single child creates more emotional arousal than trying to distribute attention around eight children simultaneously.  So people are willing to pay more to help one child than to help eight.

    Now, you could look at this intuition, and think it was revealing some kind of incredibly deep moral truth which shows that one child's good fortune is somehow devalued by the other children's good fortune.

    But what about the billions of other children in the world?  Why isn't it a bad idea to help this one child, when that causes the value of all the other children to go down?  How can it be significantly better to have 1,329,342,410 happy children than 1,329,342,409, but then somewhat worse to have seven more at 1,329,342,417?

    Or you could look at that and say:  "The intuition is wrong: the brain can't successfully multiply by eight and get a larger quantity than it started with.  But it ought to, normatively speaking."

    And once you realize that the brain can't multiply by eight, then the other cases of scope neglect stop seeming to reveal some fundamental truth about 50,000 lives being worth just the same effort as 5,000 lives, or whatever.  You don't get the impression you're looking at the revelation of a deep moral truth about nonagglomerative utilities.  It's just that the brain doesn't goddamn multiply.  Quantities get thrown out the window.

    If you have $100 to spend, and you spend $20 each on each of 5 efforts to save 5,000 lives, you will do worse than if you spend $100 on a single effort to save 50,000 lives.  Likewise if such choices are made by 10 different people, rather than the same person.  As soon as you start believing that it is better to save 50,000 lives than 25,000 lives, that simple preference of final destinations has implications for the choice of paths, when you consider five different events that save 5,000 lives.
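
    (A toy check of this arithmetic, using the post's figures of 5,000 and 50,000 lives:)

```python
# Split the $100 across 5 small efforts vs. concentrate it on one large effort.
lives_per_small_effort = 5_000   # each of the 5 small efforts saves 5,000 lives
lives_big_effort = 50_000        # the single large effort saves 50,000 lives

split_total = 5 * lives_per_small_effort   # $20 on each of 5 efforts
concentrated_total = lives_big_effort      # all $100 on the single effort

print(split_total, concentrated_total)     # 25000 50000
assert concentrated_total > split_total    # preferring more lives saved favors concentration
```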

    (It is a general principle that Bayesians see no difference between the long-run answer and the short-run answer; you never get two different answers from computing the same question two different ways.  But the long run is a helpful intuition pump, so I am talking about it anyway.)

    The aggregative valuation strategy of "shut up and multiply" arises from the simple preference to have more of something - to save as many lives as possible - when you have to describe general principles for choosing more than once, acting more than once, planning at more than one time.

    Aggregation also arises from claiming that the local choice to save one life doesn't depend on how many lives already exist, far away on the other side of the planet, or far away on the other side of the universe.  Three lives are one and one and one.  No matter how many billions are doing better, or doing worse. 3 = 1 + 1 + 1, no matter what other quantities you add to both sides of the equation.  And if you add another life you get 4 = 1 + 1 + 1 + 1.  That's aggregation.

    When you've read enough heuristics and biases research, and enough coherence and uniqueness proofs for Bayesian probabilities and expected utility, and you've seen the "Dutch book" and "money pump" effects that penalize trying to handle uncertain outcomes any other way, then you don't see the preference reversals in the Allais Paradox as revealing some incredibly deep moral truth about the intrinsic value of certainty.  It just goes to show that the brain doesn't goddamn multiply.
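
    (A sketch using the standard Allais gambles - the textbook payoffs and probabilities, not numbers from this post - showing that no assignment of utilities to the outcomes can prefer the sure thing in the first pair and the riskier gamble in the second:)

```python
# The Allais preference reversal is inconsistent with any expected utility function.
def expected_utility(lottery, u):
    return sum(p * u[outcome] for p, outcome in lottery)

gamble_1A = [(1.00, "1M")]
gamble_1B = [(0.10, "5M"), (0.89, "1M"), (0.01, "0")]
gamble_2A = [(0.11, "1M"), (0.89, "0")]
gamble_2B = [(0.10, "5M"), (0.90, "0")]

# Pick any utilities you like for the three outcomes:
u = {"0": 0.0, "1M": 1.0, "5M": 1.2}

diff_1 = expected_utility(gamble_1A, u) - expected_utility(gamble_1B, u)
diff_2 = expected_utility(gamble_2A, u) - expected_utility(gamble_2B, u)

# Both differences equal 0.11*u(1M) - 0.10*u(5M) - 0.01*u(0), so they always share a
# sign: preferring 1A in the first pair and 2B in the second maximizes no expected utility.
assert abs(diff_1 - diff_2) < 1e-9
```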

    The primitive, perceptual intuitions that make a choice "feel good" don't handle probabilistic pathways through time very skillfully, especially when the probabilities have been expressed symbolically rather than experienced as a frequency.  So you reflect, devise more trustworthy logics, and think it through in words.

    When you see people insisting that no amount of money whatsoever is worth a single human life, and then driving an extra mile to save $10; or when you see people insisting that no amount of money is worth a decrement of health, and then choosing the cheapest health insurance available; then you don't think that their protestations reveal some deep truth about incommensurable utilities.

    Part of it, clearly, is that primitive intuitions don't successfully diminish the emotional impact of symbols standing for small quantities - anything you talk about seems like "an amount worth considering".

    And part of it has to do with preferring unconditional social rules to conditional social rules.  Conditional rules seem weaker, seem more subject to manipulation.  If there's any loophole that lets the government legally commit torture, then the government will drive a truck through that loophole.

    So it seems like there should be an unconditional social injunction against preferring money to life, and no "but" following it.  Not even "but a thousand dollars isn't worth a 0.0000000001% probability of saving a life".  Though the latter choice, of course, is revealed every time we sneeze without calling a doctor.

    The rhetoric of sacredness gets bonus points for seeming to express an unlimited commitment, an unconditional refusal that signals trustworthiness and refusal to compromise.  So you conclude that moral rhetoric espouses qualitative distinctions, because espousing a quantitative tradeoff would sound like you were plotting to defect.

    On such occasions, people vigorously want to throw quantities out the window, and they get upset if you try to bring quantities back in, because quantities sound like conditions that would weaken the rule.

    But you don't conclude that there are actually two tiers of utility with lexical ordering.  You don't conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from 0 to infinity.  You don't conclude that utilities must be expressed using hyper-real numbers.  Because the lower tier would simply vanish in any equation.  It would never be worth the tiniest effort to recalculate for it.  All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority.

    As Peter Norvig once pointed out, if Asimov's robots had strict priority for the First Law of Robotics ("A robot may not injure a human being or, through inaction, allow a human being to come to harm") then no robot's behavior would ever show any sign of the other two Laws; there would always be some tiny First Law factor that would be sufficient to determine the decision.
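
    (A toy simulation of that vanishing act: under lexical priority the lower tier matters only on exact upper-tier ties, and when the upper tier varies continuously such ties essentially never occur.)

```python
# Lexical priority: the lower tier decides only on exact ties of the upper tier.
import random

random.seed(0)

def which_tier_decides(a, b):
    """a and b are (upper_tier, lower_tier) value pairs for two options."""
    if a[0] != b[0]:
        return "upper"
    return "lower"

lower_tier_decisions = 0
for _ in range(100_000):
    a = (random.random(), random.random())
    b = (random.random(), random.random())
    if which_tier_decides(a, b) == "lower":
        lower_tier_decisions += 1

print(lower_tier_decisions)  # 0 in practice: the lower tier leaves no trace on behavior
```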

    Whatever value is worth thinking about at all, must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off.  When you reveal a value, you reveal a utility.

    I don't say that morality should always be simple.  I've already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up.  I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination.  And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize - that the valuation of this one event is more complex than I know. 

    But that's for one event.  When it comes to multiplying by quantities and probabilities, complication is to be avoided - at least if you care more about the destination than the journey.  When you've reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as "Shut up and multiply."

    Where music is concerned, I care about the journey.

    When lives are at stake, I shut up and multiply.

    It is more important that lives be saved, than that we conform to any particular ritual in saving them.  And the optimal path to that destination is governed by laws that are simple, because they are math.

    And that's why I'm a utilitarian - at least when I am doing something that is overwhelmingly more important than my own feelings about it - which is most of the time, because there are not many utilitarians, and many things left undone.

    </rant>

    Comments (204)

    Eliezer, to be clear, do you still think that 3^^^3 people having momentary eye irritations--from dust specks--is worth torturing a single person for 50 years, or is there a possibility that you did the math incorrectly for that example? A proper utilitarian needs to consider the full range of outcomes--and their probabilities--associated with different alternatives. If the momentary eye irritation leads to a greater than 1/3^^^3 probability that someone will have an accident that leads to an outcome worse than 50 years of torture, then the dust specks are... (read more)

    Eliezer, to be clear, do you still think that 3^^^3 people having momentary eye irritations--from dust specks--is worth torturing a single person for 50 years, or is there a possibility that you did the math incorrectly for that example?

    No. I used a number large enough to make math unnecessary.

    I specified the dust specks had no distant consequences (no car crashes etc.) in the original puzzle.

    Unless the torture somehow causes Vast consequences larger than the observable universe, or the suicide of someone who otherwise would have been literally immortal, it doesn't matter whether the torture has distant consequences or not.

    I confess I didn't think of the suicide one, but I was very careful to choose an example that didn't involve actually killing anyone, because there someone was bound to point out that there was a greater-than-tiny probability that literal immortality is possible and would otherwise be available to that person.

    So I will specify only that the torture does not have any lasting consequences larger than a moderately sized galaxy, and then I'm done. Nothing bound by lightspeed limits in our material universe can morally outweigh 3^^^3 of anything noticeable. You'd ha... (read more)

    bgaesop:
    I really don't see why I can't say "the negative utility of a dust speck is 1 over Graham's Number." or "I am not obligated to have my utility function make sense in contexts like those involving 3^^^^3 participants, because my utility function is intended to be used in This World, and that number is a physical impossibility in This World."

    As a separate response, what's wrong with this calculation: I base my judgments largely on the duration of the disutility. After 1 second, the dust specks disappear and are forgotten, and so their disutility also disappears. The same is not true of the torture; the torture is therefore worse. I can foresee some possible problems with this line of thought, but it's 2:30 am in New Orleans and I just got done with a long evening of drinking and Joint Mathematics Meeting, so please forgive me if I don't attempt to formalize it now.

    An addendum: 2 more things. The difference between a life with n dust specks hitting your eye and n+1 dust specks is not worth considering, given how large n is in any real life. Furthermore, if we allow for possible immortality, n could literally be infinity, so the difference would be literally 0. Secondly, by virtue of your asserting that there exists an action with minimal disutility, you've shown that the Field of Utility is very different from the field of, say, the Real numbers, and so I am incredulous that we can simply "multiply" in the usual sense.

    I really don't see why I can't say "the negative utility of a dust speck is 1 over Graham's Number."

    You can say anything, but Graham's number is very large; if the disutility of an air molecule slamming into your eye were 1 over Graham's number, enough air pressure to kill you would have negligible disutility.

    or "I am not obligated to have my utility function make sense in contexts like those involving 3^^^^3 participants, because my utility function is intended to be used in This World, and that number is a physical impossibility in This World."

    If your utility function ceases to correspond to utility at extreme values, isn't it more of an approximation of utility than actual utility? Sure, you don't need a model that works at the extremes - but when a model does hold for extreme values, that's generally a good sign for the accuracy of the model.

    An addendum: 2 more things. The difference between a life with n dust specks hitting your eye and n+1 dust specks is not worth considering, given how large n is in any real life. Furthermore, if we allow for possible immortality, n could literally be infinity, so the difference would be literally 0.

    If utility ... (read more)

    bgaesop:
    Yes, this seems like a good argument that we can't add up disutility for things like "being bumped into by particle type X" linearly. In fact, it seems like having 1, or even (whatever large number I breathe in a day) molecules of air bumping into me is a good thing, and so we can't just talk about things like "the disutility of being bumped into by kinds of particles".

    Yeah, of course. Why, do you know of some way to accurately access someone's actually-existing Utility Function in a way that doesn't just produce an approximation of an idealization of how ape brains work? Because me, I'm sitting over here using an ape brain to model itself, and this particular ape doesn't even really expect to leave this planet or encounter or affect more than a few billion people, much less 3^^^3. So it's totally fine using something accurate to a few significant figures, trying to minimize errors that would have noticeable effects on these scales.

    Yes, I agree. Given that your model is failing at these extreme values and telling you to torture people instead of blink, I think that's a bad sign for your model.

    Yeah, absolutely, I definitely agree with that.
    kaz:
    That would be failing, but 3^^^3 people blinking != you blinking. You just don't comprehend the size of 3^^^3. Well, it's self-evident that that's silly. So, there's that.
    Douglas_Reay:
    What about the consequences of the precedent set by the person making the decision that it is ok to torture an innocent person, in such circumstances? If such actions get officially endorsed as being moral, isn't that going to have consequences which mean the torture won't be a one-off event? There's a rather good short story about this, by Ursula K LeGuin: The Ones Who Walk Away From Omelas

    If such actions get officially endorsed as being moral, isn't that going to have consequences which mean the torture won't be a one-off event?

    Why would it?

    And I don't think LeGuin's story is good - it's classic LeGuin, by which I mean enthymematic, question-begging, emotive substitution for thought, which annoyed me so much that I wrote my own reply.

    Alicorn:
    I've read your story three times now and still don't know what's going on in it. Can I have it in the form of an explanation instead of a story?
    gwern:
    Sure, but you'll first have to provide an explanation of LeGuin's.
    Alicorn:
    There is this habitation called Omelas in which things are pretty swell for everybody except one kid who is kept in lousy conditions; by unspecified mechanism this is necessary for things to be pretty swell for everybody else in Omelas. Residents are told about the kid when they are old enough. Some of them do not approve of the arrangement and emigrate. Something of this form about your story will do.
    gwern:
    There is this city called Acre where things are pretty swell except for this one guy who has a lousy job; by a well-specified mechanism, his job makes him an accessory to murders which preserve the swell conditions. He understands all this and accepts the overwhelmingly valid moral considerations, but still feels guilty - in any human paradise, there will be a flaw.
    Alicorn:
    Since the mechanism is well-specified, can you specify it?
    gwern:
    I thought it was pretty clear in the story. It's not easy coming up with analogues to crypto, and there's probably holes in my lock scheme, but good enough for a story.
    Alicorn:
    Please explain it anyway. (It never goes well for me when I reply to this sort of thing with snark. So I edited away a couple of drafts of snark.)
    [anonymous]:
    It's a prediction market where the predictions (that we care about, anyway) are all of the form "I bet X that Y will die on date Z."
    Alicorn:
    Okay, and I imagine this would incentivize assassins, but how is this helping society be pretty swell for most people, and what is the one guy's job exactly? (Can you not bet on the deaths of arbitrary people, only people it is bad to have around? Is the one guy supposed to determine who it's bad to have around or something and only allow bets on those folks? How does he determine that, if so?)
    [anonymous]:

    Everything you'd want to know about assassination markets.

    but how is this helping society be pretty swell for most people, and what is the one guy's job exactly?

    Incentive to cooperate? A reduction in the necessity of war, which is by nature an inefficient use of resources? From the story:

    The wise men of that city had devised the practice when it became apparent to them that the endless clashes of armies on battlefields led to no lasting conclusion, nor did they extirpate the roots of the conflicts. Rather, they merely wasted the blood and treasure of the people. It was clear to them that those rulers led their people into death and iniquity, while remaining untouched themselves, lounging in comfort and luxury amidst the most crushing defeat.

    It was better that a few die before their time than the many. It was better that a little wealth go to the evil than much; better that conflicts be ended dishonorably once and for all, than fought honorably time and again; and better that peace be ill-bought than bought honestly at too high a price to be borne. So they thought.

    Moving on.

    (Can you not bet on the deaths of arbitrary people, only people it is bad to have around?

    Nope, "... (read more)

    Alicorn:
    I think I get it. I have worldbuilding disagreements with this but am no longer bewildered. Thank you!
    pedanterrific:
    So, I have some questions: how could you actually make money from this? It seems like the idea is that people place bets on the date that they're planning to assassinate the target themselves. So... where's the rest of the money come from, previous failed attempts? I'm not sure that "A whole bunch of guys tried to assassinate the president and got horribly slaughtered for their trouble. That means killing him'd make me rich! Where's my knife?" is a realistic train of thought.
    [anonymous]:
    The gamblers collect their winnings; the merchant of death charges a fee, presumably to compensate for the hypothetical legal liability and moral hazard. See the last quote from the story in grandparent. Or they want someone else to become more motivated to assassinate the target. It's not, because that's not how the information on how much a certain death is worth propagates. The assassination market needs to be at least semi-publicly observable -- in the story's case, the weight of the money in the named cylinder pulls it down, showing how much money is in the cylinder. If someone wanted a high-risk target, they'd have to offer more money to encourage the market to supply the service.
    pedanterrific:
    Ahh, that was the bit I missed. Okay, that makes sense now. Edit: Upon rereading, I think this could perhaps be a bit clearer. Cylinders hung suspended, okay. Held by cords leading into the "depths" - what? Holes by that cylinder - presumably in the wall or floor? The money goes into the locked treasure room, not the cylinder. And it causes (somehow) the cylinder to rise, not fall.
    gwern:
    The idea is that the room in the dungeons has two compartments which the two holes lead to: one contains the locks and predictions, and only the 'winning' lock is used when the person is assassinated (my offline analogue to crypto signatures), but the other just holds the money/rewards, and is actually a big cup or something held up by the cord which goes up to the ceiling, around a pulley, and then down to the cylinder. Hence, the more weight (money) inside the cup, the higher the cylinder is hoisted. I guess ropes and pulleys are no longer so common these days as to make the setup clear and not requiring further explanation? (This is one of the vulnerabilities as described - what's to stop someone from dumping in some lead? As I said, real-world equivalents to crypto are hard. Probably this could be solved by bringing in another human weak point - eg. specifying that only the merchants are allowed to put money in.)
    kpreid:
    The described pulley setup will simply accelerate until it reaches one limit or the other depending on the balance of weights. In order to have the position vary with the load, you need a position-varying force, such as:
    * A spring.
    * A rotating off-center mass, as in a balance scale. (This is nonlinear for large angles.)
    * An asymmetric pulley, i.e. a cam (in the shape of an Archimedean spiral).
    * A tall object (of constant cross-section) entering a pool of water.
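
    (A toy statics sketch of kpreid's point, assuming an ideal massless cord, a frictionless pulley, and made-up masses: with only a counterweight the net force is independent of position, so there is no load-dependent resting point, while a spring gives an equilibrium displacement proportional to the load.)

```python
g = 9.81                # m/s^2
counterweight_kg = 2.0  # the hanging cylinder

def net_force_plain_pulley(load_kg):
    # Position never appears: the imbalance is constant, so the system just
    # accelerates until it hits one limit or the other.
    return (load_kg - counterweight_kg) * g

def equilibrium_displacement_with_spring(load_kg, k=50.0):
    # A spring supplies a position-varying force k*x, so the cylinder settles
    # where k*x balances the weight imbalance: x = (m_load - m_cw) * g / k.
    return (load_kg - counterweight_kg) * g / k

for load in (2.5, 3.0, 4.0):
    print(load, net_force_plain_pulley(load), equilibrium_displacement_with_spring(load))
```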

    "Omelas" contrasts the happiness of the citizens with the misery of the child. I couldn't tell from your story that the tradesman felt unusually miserable, nor that the other people of his city felt unusually happy. Nor do I know how this affects your reply to LeGuin, since I can't detect the reply.

    NancyLebovitz:
    For what it's worth, some people read "Omelas" as being about a superstition that torturing a child is necessary (see the bit about good weather) rather than a situation where torturing a child is actually contributing to public welfare.
    gwern:
    And the 'wisdom of their scholars' depends on the torture as well? 'terms' implies this is a magical contract of some sort. No mechanism, of course, like most magic and all of LeGuin's magic that I've read (Earthsea especially).
    MileyCyrus:
    America kills 20,000 people/yr via air pollution. Are you ready to walk away?
    thomblake:
    It's worth noting, for 'number of people killed' statistics, that all of those people were going to die anyway, and many of them might have been about to die for some other reason. Society kills about 56 million people each year from spending resources on things other than solving the 'death' problem.
    A1987dM:
    Some of whom several decades later. (Loss of QALYs would be a better statistic, and I think it would be non-negligible.)
    xkwwqjtw:
    Please don’t build a machine that will torture me to save you from dust specks.
    TAG:
    How confident are you that physics has anything to do with morality?
    Miriorrery:
    If it were that many dust specks in one person's eye, then the 50 years of torture would be reasonable, but getting dust specks in your eye doesn't cause lasting trauma, and it doesn't cause trauma to the people around you. Graham's number is big, yes, but all these people will go about their lives as if nothing happened afterwards - won't they?

    I feel like if someone were to choose torture for more than half a person's life for one person over everyone having a minor discomfort for a few moments, and everyone knew that the person had made the choice, everyone who knew would probably want absolutely nothing to do with that person. I feel like the length of the discomfort and how bad the discomfort is, ends up outweighing the number of times it happens, as long as it happens to different people and not the same person. The torture would have lasting consequences as well, and the dust specks wouldn't. I get your point and all, but I feel like dust specks compared to torture was a bad thing to use as an example.

    Favoring an unconditional social injunction against valuing money over lives is consistent with risking one's own life for money; you could reason that if trading off money and other people's lives is permitted at all, this power will be abused so badly that an unconditional injunction has the best expected consequences. I don't think this is true (because I don't think such an injunction is practical), but it's at least plausible.

    So it seems you have two intuitions. One is that you like certain kinds of "feel good" feedback that aren't necessarily mathematically proportional to the quantifiable consequences. Another is that you like mathematical proportionality. The "Shut up and multiply" mantra is simply a statement that your second preference is stronger than the first.

    In some ways it seems reasonable to define morality in a way that treats all people equally. If we do so, then our preference for multiplying can be more moral, by definition, than our less ... (read more)

    "Well, when you're dealing with a number like 3^^^3 in a thought experiment, you can toss out the event descriptions. If the thing being multiplied by 3^^^3 is good, it wins. If the thing being multiplied by 3^^^3 is bad, it loses. Period. End of discussion. There are no natural utility differences that large."

    Let's assume the eye-irritation lasts 1-second (with no further negative consequences). I would agree that 3^^^3 people suffering this 1-second irritation is 3^^^3-times worse than 1 person suffering thusly. But this irritation should not... (read more)

    3^^^3?

    http://www.overcomingbias.com/2008/01/protecting-acro.html#comment-97982570

    A 2% annual return adds up to a googol (10^100) return over 12,000 years

    Well, just to point out the obvious, there aren't nearly that many atoms in a 12,000 lightyear radius.

    Robin Hanson didn't get very close to 3^^^3 before you set limits on his use of "very very large numbers".

    Secondly, you refuse to put "death" on the same continuum as "mote in the eye", but behave sanctimoniously (example below) when people refuse to put "50 years ... (read more)

    The only reason Eliezer didn't put death on the same scale as the dust mote was on account of his condition that the dust specks have no further consequences. In real life, everything has consequences, and so in real life, death is on the same scale with everything else, including dust motes. Eliezer expressed this extremely well: "Whatever value is worth thinking about at all, must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off."

    So yes, in real life there is some number of dust motes such that it would be better to prevent the dust storm than to save a life.

    A dust speck in the eye with no external ill effects was chosen as the largest non-zero negative utility. Torture, absent external effects (e.g., suicide), for any finite time, is a finite amount of negative utility. Death in a world of literal immortality cuts off an infinite amount of utility. There is a break in the continuum here.

    If you don't accept that dust specks are negative utility, you didn't follow the rules. Pick a new tiny ill effect (like a stubbed toe) and rethink the problem.

    If you still don't like it because for a given utility n, n + n !=... (read more)

    I assert the use of 3^^^3 in a moral argument is to avoid the effort of multiplying.

    Yes, that's what I said. If the quantities were close enough to have to multiply, the case would be open for debate even to utilitarians.

    Demonstration: what is 3^^^3 times 6?

    3^^^3, or as close as makes no difference.

    What is 3^^^3 times a trillion to the trillionth power?

    3^^^3, or as close as makes no difference.

    ...that's kinda the point.

    So it seems you have two intuitions. One is that you like certain kinds of "feel good" feedback that aren't necessarily mathematically proportional to the quantifiable consequences. Another is that you like mathematical proportionality.

    Er, no. One intuition is that I like to save lives - in fact, as many lives as possible, as reflected by my always preferring a larger number of lives saved to a smaller number. The other "intuition" is actually a complex compound of intuitions, that is, a rational verbal judgment, which enables me to appreciate that any non-aggregative decision-making will fail to lead to the consequence of saving as many lives as possible given bounded resources to save them.

    I'm feeling a bit of despair here... it seems that no... (read more)

    Correction: What I said: "one-second of irritation is less than 3^^^3-times as bad as the 50 years of torture." What I meant: "50 years of torture is more than 3^^^3-times as bad as 1-second of eye-irritation." Apologies for the mis-type (as well as for saying "you're" when I meant "your").

    But the point is, if there are no additional consequences to the suffering, then it's irrelevant. I don't care how many people experience the 1-second of suffering. There is no number large enough to make it matter.

    Eliezer had a... (read more)

    It's because something that's non-consequential is non-consequential

    The dust specks are consequential; people suffer because of them. The further negative consequences of torture are only finitely bad.

    Eliezer: would you torture a person for fifty years, if you lived in a large enough universe to contain 3^^^3 people, and if the omnipotent and omniscient ruler of that universe informed you that if you did not do so, he would carry out the dust-speck operation?

    Seriously, would you pick up the blow torch and use it for the rest of your life, for the sake of the dust-specks?

    wedrifid:
    Hey, that's an actual Pascal's Mugging! As opposed to "Pascal's generous offer that at worst can be refused for no negative consequences beyond the time spent listening to it". Come to think of it, we probably should be using "Pascal's Spam" for the exciting yet implausible offer.
    Eliezer Yudkowsky:
    Yeah, if we're going to bastardize the terms anyway, we should definitely distinguish Pascal's Spamming from Pascal's Mugging, where Spamming is any Mugging of a type that has a thousand easily generated variants without loss of plausibility ('plausibility' to a reasonable non-troll not committing the noncentral fallacy). (For emotional purposes, not decision-theory purposes.)

    Eliezer: it doesn't matter how big of a number you can write down. You are dealing with an asymptote. There is a limit to how bad momentary eye-irritation can be, no matter how many people it happens to. No matter how many people. That limit is far less than how bad a 50-year torture is.

    Let f(x) = (5x - 1)/x. What is f(3^^^3)? It's 5, or close enough that it doesn't matter.

    Eliezer: after wrestling with this for a while, I think I've identified at least one of the reasons for all the fighting. First of all, I agree with you that the people who say, "3^^^3 isn't large enough" are off-base. If there's some N that justifies the tradeoff, 3^^^3 is almost certainly big enough; and even if it isn't, we can change the number to 4^^^4, or 3^^^^3, or Busy Beaver (Busy Beaver (3^^^3)), or something, and we're back to the original problem.

    For me, at least, the problem comes down to what 'preference' means. I don't think I h... (read more)

    I share El's despair. Look at the forest, folks. The point is that you have to recognize that harm aggregates (and not to an asymptote) or you are willing to do terrible things. The idea of torture is introduced precisely to make it hard to see. But it is important, particularly in light of how easily our brains fail to scale harm and benefit. Geez, I don't even have to look at the research El cites - the comments are enough.

    Stop saying the specks are "zero harm." This is a thought experiment and they are defined as positive harm.

    Stop saying that torture is different. This is a thought experiment and torture is defined to be absolutely terrible, but finite, harm.

    Stop saying that torture is infinite harm. That's just silly.

    Stop proving the point over and over in the comments!

    /rant/

    Not all harms aggregate, and in particular lots of nano-pains experienced by lots of sufferers aren't ontologically equivalent to a single agony experienced by a single subject. Utilitarianism isn't an objective fact about how the world works. There's an element of make-believe in treating all harms as aggregating. You can treat things that way if your intuitions tell you to, but the world doesn't force you to.

    This whole dust vs. torture "dilemma" depends on a couple of assumptions: (1) That you can assign a cost to any event and that all such values lie within the same group (allowing multiples of one event to "add up" to another event) and (2) That the function that determines the cost of a certain number of a specific type of events does not have a hard upper limit (such as a logistic function). If either of these assumptions is wrong then the largeness of 3^^^3 or any other "large" number is totally irrelevant. One way to test (1) is to replace "torture" with "kill". If the answer is no then (1) is an invalid assumption.

    Larry D'anna:

    You are dealing with an asymptote. There is a limit to how bad momentary eye-irritation can be, no matter how many people it happens to.

    By positing that dust-speck irritation aggregates non-linearly with respect to number of persons, and thereby precisely exemplifying the scope-insensitivity that Eliezer is talking about, you are not making an argument against his thesis; instead, you are merely providing an example of what he's warning against.

    You are in effect saying that as the number of persons increases, the marginal badness of the suffering of each new victim decreases. But why is it more of an offense to put a speck in the eye of Person #1 than Person #6873?

    Isn't there maybe a class of insignificant harms where net utility is neutral or even positive (learn to squint or wear goggles in a duststorm, learn that motes in one's eye are annoying but nothing really to worry about, increased understanding of Christian parables, e.g.; also consider schools of parenting that allow children to experiment with various behaviors that the parents would prefer they avoid, since directly experiencing the adverse event in a more controlled situation will prevent worse outcomes down the road)? I'm not sure you can trust most people's expressed preference on this.

    That being said, I don't know where that class begins and ends.

    Bob: Sure, if you specify a disutility function that mandates lots-o'-specks to be worse than torture, decision theory will prefer torture. But that is literally begging the question, since you can write down a utility function to come to any conclusion you like. On what basis are you choosing that functional form? That's where the actual moral reasoning goes. For instance, here's a disutility function, without any of your dreaded asymptotes, that strictly prefers specks to torture:

    U(T,S) = ST + S

    Freaking out about asymptotes reflects a basic misunderstan... (read more)

    Care to advance an argument, Caledonian? (Not saying I disagree... or agree, for that matter.)

    If harm aggregates less-than-linearly in general, then the difference between the harm caused by 6271 murders and that caused by 6270 is less than the difference between the harm caused by one murder and that caused by zero. That is, it is worse to put a dust mote in someone's eye if no one else has one, than it is if lots of other people have one.

    If relative utility is as nonlocal as that, it's entirely incalculable anyway. No one has any idea of how many beings are in the universe. It may be that murdering a few thousand people barely registers as harm, ... (read more)

    So what exactly do you multiply when you shut up and multiply? Can it be anything other than a function of the consequences? Because if it is a function of the consequences, you do believe or at least act as if believing your #4.

    In which case I still want an answer to my previously raised and unanswered point: As Arrow demonstrated, a contradiction-free aggregate utility function derived from different individual utility functions is not possible. So either you need to impose uniform utility functions or your "normalization" of intuition leads to a logical contradiction - which is simple, because it is math.

    Neel Krishnaswami: you reference an article called "An Airtight Dutch Book," but I can't find it online without a subscription. Can you summarize the argument?

    Neel, I think you and I are looking at this as two different questions. I'm fine with bounded utility at the individual level, not so good with bounds on some aggregate utility measure across an unbounded population (but certainly willing to listen to a counter position), which is what we're talking about here. Now, what form an aggregate utility function should take is a legitimate question (although, as Salutator points out, unlikely to be a productive discussion), but I doubt that you would argue it should be bounded.

    I have really enjoyed following th... (read more)

    Peter DeBlanc: check your email

    The issue with a utility function U(T,S) = ST + S is that there is no motivation to have torture's utility depend on dust's utility. They are distinct and independent events, and in no way will additional specks worsen torture. If it is posited that dust specks asymptotically approach a bound lower than torture's bound, order issues present themselves and there should be rational preferences that place certain evils at such order that people should be unable to do anything but act to prevent those evils.

    There are additional problems here, like the idea that ... (read more)

    Sean: why is that "what utils do"? To the extent that we view utils as the semi-scientific concept from economics, they don't "just sum linearly." To economists utils don't sum at all; you can't make interpersonal comparisons of utility. So if you claim that utils sum linearly, you're making a claim of moral philosophy, and haven't argued for it terribly strongly.

    Sean, one problem is that people can't follow the arguments you suggest without these things being made explicit. So I'll try to do that:

    Suppose the badness of distributed dust specks approaches a limit, say 10 disutility units.

    On the other hand, let the badness of (a single case of ) 50 years of torture equal 10,000 disutility units. Then no number of dust specks will ever add up to the torture.

    But what about 49 years of torture distributed among many? Presumably people will not be willing to say that this approaches a limit less than 10,000; otherwise we would torture a trillion people for 49 years rather than one person for 50.

    So for the sake of definiteness, let 49 years of torture, repeatedly given to many, converge to a limit of 1,000,000 disutility units.

    48 years of torture, let's say, might converge to 980,000 disutility units, or whatever.

    Then since we can continuously decrease the pain until we reach the dust specks, there must be some pain that converges approximately to 10,000. Let's say that this is a stubbed toe.

    Three possibilities: it converges exactly to 10,000, to less than 10,000, or more than 10,000. If it converges to less, then if we choose another pain ever so... (read more)

    It seems to me that the dust-specks example depends on the following being true: both dust-specks and 50 years of torture can be precisely quantified.

    What is the justification for this belief? I find it hard to see any way of avoiding the conclusion that some harms may be compared, as in A < B (A=1 person/1 dustspeck, B=1 person/torture), but that does not imply that we can assign precise values to A and B and then determine how many A are equivalent to one B.

    Why do some people believe that we can precisely say how much worse the torture of 1 individual... (read more)

    Joseph,

    The point of using 3^^^3 is to avoid the need to assign precise values, which I agree seems impossible to do with any confidence. Once you accept the premise that A is less than B (with both being finite and nonzero), you need to accept that there exists some number k where kA is greater than B. The objections have been that A=0, B is infinite, or the operation kA is not only nonlinear, but bounded. The first may be valid for specks but misses the point - just change it to "mild hangnail" or "banged funnybone." I cannot take ... (read more)

    "Renormalizing intuition" - that sounds like making sure all the intuitions are consistent and proportional to each other. Which is analogous to a coherence theory of truth as against a correspondence one. But you can make something as internally consistent as you like and maybe it still bears no relation to reality. It is necessary to know where the intuitions came from and what they mean.

    Ideas such as good and evil are abstract, and the mind of a newborn can't hold abstract ideas, only simple concretes. So those ideas can't have already been th... (read more)

    Bob: "The point of using 3^^^3 is to avoid the need to assign precise values".

    But then you are not facing up to the problems of your own ethical hypothesis. I insist that advocates of additive aggregation take seriously the problem of quantifying the exact ratio of badness between torture and speck-of-dust. The argument falls down if there is no such quantity, but how would you arrive at it, even in principle? I do not insist on an impersonally objective ratio of badness; we are talking about an idealized rational completion of one's personal pre... (read more)

    Mitchell, if I say an average second of the torture is about equal to 10,000 distributed dust specks (notice I said "average second"; there is absolutely no claim that torture adds up linearly or anything like that), then something less than 2 trillion dust specks will be about equal to 50 years of torture. I would arrive at the ratio by some comparison of this sort, trying to guess how bad an average second is, and how many dust specks I would be willing to inflict to save a man from that amount of harm.

    Notice that 3^^^3 is completely unnecessary here. That's why I said previously that N doesn't have to be particularly large.

    Thanks for the explanations, Bob.

    Bob: The point of using 3^^^3 is to avoid the need to assign precise values... Once you accept the premise that A is less than B (with both being finite and nonzero), you need to accept that there exists some number k where kA is greater than B.

    This still requires that they are commensurable though, which is what I am seeking a strong argument for. Saying that 3^^^3 dust specks in 3^^^3 eyes is greater harm than 50 years of torture means that they are commensurable and that whatever the utilities are, 3^^^3 specks divided by 50 ... (read more)

    A < B < C < D doesn't imply that there's some k such that kA>D

    Yes it does.

    Again, we return to the central issue:

    Why must we accept an additive model of harm to be rational?

    If you don't accept the additivity of harm, you accept that for any harm x, there is some number of people y where 2^y people suffering x harm is the same welfare wise as y people suffering x harm.

    (Not to mention that when normalized across people, utils are meant to provide direct and simple mathematical comparisons. In this case, it doesn't really matter how the normalization occurs as the inequality stands for any epsilon of dust-speck harm greater than zero.)

    Polling people to find if they will take a dust speck grants an external harm to the torture (e... (read more)

    There are no natural utility differences that large. (Eliezer, re 3^^^3)

    You've measured this with your utility meter, yes?

    If you mean that it's not possible for there to be a utility difference that large, because the smallest possible utility shift is the size of a single particle moving a planck distance, and the largest possible utility difference is the creation or destruction of the universe, and the scale between those two is smaller than 3^^^3 ... then you'll have to remind me again where all these 3^^^3 people that are getting dust specks in their ... (read more)

    If you don't accept the additivity of harm, you accept that for any harm x, there is some number of people y where 2^y people suffering x harm is the same welfare wise as y people suffering x harm.

    No. I can imagine non-additive harm evaluation systems where that is not the case.

    Even in the limited subset of systems where that IS the case, so what?

    Although the argument didn't depend on harm adding linearly in any case, it is true that two similar harms to two different people must be exactly twice as bad as one harm to one person.

    Many people on this thread have already given the reason: how bad it is that someone is killed, or tortured, or dust specked, obviously does not depend on how many other people this has happened to. Otherwise death couldn't be such a bad thing, since it has already happened to almost everyone.

    Sean, say you're one of the 3^^^3 voters. A vote one way potentially has torture on your conscience, as you say, but a vote the other way potentially has 3^^^3 dust specks on your conscience - by your definition a much greater sin. Square one - shut up and vote!

    I am aware that a billion people getting punched in the face appears to aggregate to a greater harm than one person being tortured for 50 years. (I should say that when I ask my intuition what it thinks about the number 1,000,000,000, it says 'WTF?', so it's not coming from there....) However, if I ... (read more)

    Ben, that's not about additivism, but indicates that you are a deontologist by nature, as everyone is. A better test: would you flip a lever which would stop the torture if everyone on earth would instantly be automatically punched in the face? I don't think I would.

    Joseph et al, I appreciate your thoughts. I think, though, that your objections keep coming back to "it's more complicated." And in reality it would be. But the simple thought experiment suggests that any realistic derivative of the specks question would likely get answered wrong because our (OUR!) intuition is heavily biased toward large (in aggregate) distributed harm. It appears that we personalize individual harm but not group harm.

    Ben, I assume that we would all vote that way, if only because the thought of having sentenced someone to to... (read more)

    If politics is the mind killer, morality is at least the mind masher. We should probably only talk about morality in small doses, interspersed with many other topics our minds can more easily manage.

    @Sean

    If your utility function u were replaced by 3u, there would be no observable difference in your behavior. So which of these functions is declared real and goes on to the interpersonal summing? "The same factor for everyone" isn't an answer, because if u_you doesn't equal u_me, "the same factor" is simply meaningless.

    @Tomhs2

    A < B < C < D doesn't imply that there's some k such that kA>D
    Yes it does.

    I think you're letting the notation confuse you. It would imply that, if A, B, C, D were e.g. real numbers, and that is the... (read more)

    Polymeron:
    Moral systems (at least, consistent ones with social consequences) deal in intentions, not actions per se. This is why, for instance, we find a difference between a bank teller giving away bank money to a robber at gunpoint, and a bank teller giving away money in order to get back at their employer. Same action, but the intent in question is different. A moral system interested only in actions would be indifferent to this distinction. Asking for a preference between two different states of affairs where uncertainty, ignorance and impotence are removed allows for an easy isolation of the intention component. Does this answer your question?

    Ben, that's not about additivism, but indicates that you are a deontologist by nature, as everyone is. A better test: would you flip a lever which would stop the torture if everyone on earth would instantly be automatically punched in the face? I don't think I would.

    I'm fairly certain I would pull the lever. And I'm certain that if I had to watch the person be tortured (or do it myself!) I would happily pull the lever.

    It was this sort of intuition that motivated my earlier question to Eliezer (which he still hasn't responded to). I'd be interested to hea... (read more)

    @Unknown
    So if everyone is a deontologist by nature, shouldn't a "normalization" of intuitions result in a deontological system of morals? If so, what makes you look for the right utilitarian system?

    I think you're letting the notation confuse you. It would imply that, if A,B,C,D were e.g. real numbers, and that is the context the "<"-sign is mostly used in. But orders can exist on sets other than sets of numbers. You can for example sort (order) the telephone book alphabetically, so that Cooper < Smith and still there is no k so that k*Cooper > Smith.

    This is fairly confusing... in the telephone book example, you haven't defined * as an operator. I frankly have no idea what you would mean by it. Using the notation kA > D implies a... (read more)

    Eisegates, is there no limit to the number of people you would subject to a punch in the face (very painful but temporary with no risk of death) in order to avoid the torture of one person? What if you personally had to do (at least some of) the punching? I agree that I might not be willing to personally commit the torture despite the terrible (aggregate) harm my refusal would bring, but I'm not proud of that fact - it seems selfish to me. And extrapolating your position seems to justify pretty terrible acts. It seems to me that the punch is equivalent to ... (read more)

    Please let me interrupt this discussion on utilitarianism/humanism with an alternative perspective.

    I do not claim to know what the meaning of life is, but I can rule certain answers out. For example, I am highly certain that it is not to maximize the number of paperclips in my vicinity.

    I also believe it has nothing to do with how much pain or pleasure the humans experience -- or in fact anything to do with the humans.

    More broadly, I believe that although perhaps intelligent or ethical agents are somehow integral to the meaning of life, they are integral f... (read more)

    I think claims like "exactly twice as bad" are ill-defined.

    Suppose you have some preference relation on possible states R, so that X is preferred to Y if and only if R(X, Y) holds. Next, suppose we have a utility function U, such that if R(X, Y) holds, then U(X) > U(Y). Now, take any monotone transformation of this utility function. For example, we can take the exponential of U, and define U'(X) = 2^(U(X)). Now, note that U(X) > U(Y) if and only if U'(X) > U'(Y). Now, even if U is additive along some dimension of X, U' won't be.

    Bu... (read more)
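
    A quick numeric check of Krishnaswami's point (the utilities below are made up, with "2 punches" stipulated to be exactly twice as bad as "1 punch"): the monotone transform U' = 2^U encodes the same ordering, hence the same choices, but the additivity disappears.

        U = {"no harm": 0.0, "1 punch": -1.0, "2 punches": -2.0, "torture": -100.0}

        def U_prime(x):
            return 2 ** U[x]                                  # a monotone transformation of U

        # Same ranking of outcomes under both functions:
        print(sorted(U, key=U.get) == sorted(U, key=U_prime))         # True

        # U is additive over punches; U' is not:
        print(U["2 punches"] == 2 * U["1 punch"])                      # True
        print(U_prime("2 punches") == 2 * U_prime("1 punch"))          # False (0.25 vs 1.0)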

    Krishnaswami: I think claims like "exactly twice as bad" are ill-defined. Suppose you have some preference relation on possible states R, so that X is preferred to Y if and only if R(X, Y) holds. Next, suppose we have a utility function U, such that if R(X, Y) holds, then U(X) > U(Y). Now, take any monotone transformation of this utility function. For example, we can take the exponential of U, and define U'(X) = 2^(U(X)). Now, note that U(X) > U(Y) if and only if U'(X) > U'(Y). Now, even if U is additive along some dimension of X,... (read more)

    "Polling people to find if they will take a dust speck grants an external harm to the torture (e.g., mental distress at the thought of someone being tortured)."

    TYPE ERROR

    This is what remains invariant under a positive affine transformation.

    (I haven't heard this pointed out anywhere, come to think, but surely it must have been observed before.)

    Didn't Marcello point that out to you a couple years ago?

    i got to tell you guys, a dust speck just flew in my eye, and man it was torture.

    I think I've found one of the factors (besides scope insensitivity) involved in the intuitive choice: in real life, a small amount of harm inflicted n times on one person has negative side-effects which don't happen when you inflict it once on each of n persons. Even though there aren't any in this thought experiment, we are so used to them that we probably take them into account (at least I did).

    Peter, I'm not sure what the chain of causality was there. (Let me know if I've previously written it down.) I think you or Nick Hay said that utility functions obey positive affine transformations, Marcello said that preserved the ratios of intervals, and I sketched out the interpretation for optimization problems.

    I just meant that I haven't seen it elsewhere in the Literature. You're right, I should have credited the Summer of AI group.
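
    For what it's worth, the interval-ratio invariance is easy to verify numerically (the four outcomes and numbers here are arbitrary): under any transformation U → aU + b with a > 0, individual utilities change, but the ratio of any two utility differences does not.

        U = {"W": 0.0, "X": 1.0, "Y": 4.0, "Z": 9.0}          # arbitrary utilities

        def interval_ratio(u):
            return (u["Z"] - u["Y"]) / (u["Y"] - u["X"])      # ratio of two utility intervals

        a, b = 2.5, -7.0                                      # any positive affine transform
        V = {k: a * v + b for k, v in U.items()}

        print(interval_ratio(U), interval_ratio(V))           # both 5/3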

    Eisegetes, would you pull the lever if it would stop someone from being tortured for 50 years, but inflict one day of torture on each human being in the world? And if so, how about one year? or 10 years, or 25? In other words, the same problem arises as with the specks. Perhaps you can defend one punch per human being, but there must be some number of human beings for whom one punch each would outweigh torture.

    Salutator, I never said utilitarianism is completely true.

    Also: I wonder if Robin Hanson's comment shows concern about the lack of comments on his posts?

    Hmmm... What can we actually agree on?

    The disutility of a pain is a function of the Number of people who experience the pain, the Intensity of the pain, and the Time the pain lasts. It is also an increasing function of all three: all else being equal, a pain experienced by more people is worse than one experienced by fewer people, a more intense pain is worse than a less intense pain, and a longer pain is worse than a shorter one. Or, more formally,

    U = f(N,I,T)

    ∂U/∂N > 0 (for I,T > 0)
    ∂U/∂I > 0 (for N,T > 0)
    ∂U/∂T > 0 (for N,I > 0)... (read more)

    Doug, I do not agree because my utility function depends on the identity of the people involved, not simply on N. Specifically, it might be possible for an agent to become confident that Bob is much more useful to whatever is the real meaning of life than Charlie is, in which case a harm to Bob has greater disutility in my system than a harm to Charlie. In other words, I do not consider egalitarianism to be a moral principle that applies to every situation without exception. So, for me U is not a function of (N,I,T)

    There seems to be an unexamined assumption here.

    Why should the moral weight of applying a specified harm to someone be independent of who it is?
    When making moral decisions, I tend to weight effects on my friends and family most heavily, then acquaintances, then fellow Americans, and so on. I value random strangers to some extent, but this is based more on arguments about the small size of the planet than true concern for their welfare.

    I claim that moral obligations must be reciprocal in order to exist. Altruism is never mandatory.

    None of Eliezer's 3^^... (read more)

    1: First of all, I want to acknowledge my belief that Eliezer's thought experiment is indeed useful, although it is "worse" than hypothetical. This is because it forces us to either face our psychological limitations when it comes to moral intuitions, or succumb to them (by arguing that the thought experiment is fundamentally unsound, in order to preserve harmony among our contradictory intuitions).
    2: Once we admit that our patchwork'o'rules'o'thumb moral intuitions are indeed contradictory, the question remains whether he is actually right. In anot... (read more)

    Frank, re: #2: One can also believe option 4: that pleasure and pain have some moral significance, but do not perfectly determine moral outcomes. That is not necessarily irrational, it is not amoral, and it is not utilitarian. Indeed, I would posit that it represents the primary strand of all moral thinking and intuitions, so it is strange that it wasn't on your list.

    Unknown: 10 years and I would leave the lever alone, no doubt. 1 day is a very hard call; probably I would pull the lever. Most of us could get over 1 day of torture in a way that is fundamentally different from years of torture, after all.

    Perhaps you can defend one punch per human being, but there must be some number of human beings for whom one punch each would outweigh torture.

    As I said, I don't have that intuition. A punch is a fairly trivial harm. I doubt I would ever feel worse about a lot of people (even 5^^^^^^5) getting punched than about a ... (read more)

    Eisegetes: I admit your fourth option did not even enter my mind. I'll try (in a rather ad-hoc way) to dispute this on the grounds of computationalism. To be able to impose an order on conflicting options, it must be possible to reduce the combined expected outcomes (pleasure, displeasure, whatever else) into a single scalar value. Even if they are in some way lexically ordered, we can do this by projecting the lexical options onto non-intersecting intervals. Everything that is morally significant does, by virtue of the definition, enter into this calculus. Everything that doesn't, isn't.
    If you feel this does not apply, please help me by elaborating your objection.

    @Eisegetes
    Yes, I was operating on the implicit convention that true statements must be meaningful, so I could also say there is no k such that I have exactly k quobbelwocks.
    The nonexistence of a *-operator (and of a +-operator) is actually the point. I don't think preferences of different persons can be meaningfully combined, and that includes that {possible world-states} or {possible actions} don't, in your formulation, contain the sort of objects to which our everyday understanding of multiplication normally applies. Now if you insist on an intuitiv... (read more)

    Eliezer:

    No, Nick Hay and I were not involved at all. You mentioned this to us as something you and Marcello had discussed before the Summer of AI.

    Frank, I think a utility function like that is a mathematical abstraction, and nothing more. People do not, in fact, have scalar-ordered ranked preferences across every possible hypothetical outcome. They are essentially indifferent between a wide range of choices. And anyway, I'm not sure that there is sufficient agreement among moral agents to permit the useful aggregation of their varied, and sometimes conflicting, notions of what is preferable into a single useful metric. And even if we could do that, I'm not sure that such a function would correspond ... (read more)

    Salutator: thanks for clarifying. I would tend to think that physical facts like neural firings can be quite easily multiplied. I think the problem has less to do with the multiplying, than with the assumption that the number of neural firings is constitutive of wrongness.

    Eliezer: So when I say that two punches to two faces are twice as bad as one punch, I mean that if I would be willing to trade off the distance from the status quo to one punch in the face against a billionth (probability) of the distance between the status quo and one person being tortured for one week, then I would be willing to trade off the distance from the status quo to two people being punched in the face against a two-billionths probability of one person being tortured for one week.

    So alternatives that have twice the probability of some good thing ... (read more)
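
    Sketching that tradeoff with concrete (assumed) numbers may help; the unit and the one-in-a-billion exchange rate below are just the ones mentioned above, not a claim about their true values.

        u_torture_week = -1.0                    # take "one person tortured for one week" as the unit
        u_one_punch = 1e-9 * u_torture_week      # one punch trades against a billionth chance of it

        def eu_torture_gamble(p):                # expected utility of a p chance of the torture-week
            return p * u_torture_week

        def eu_punches(n):                       # expected utility of n certain punches
            return n * u_one_punch

        # Indifference at one punch vs. a 1e-9 chance, and at two punches vs. a 2e-9
        # chance -- the sense in which two punches are "twice as bad" as one.
        print(eu_punches(1), eu_torture_gamble(1e-9))         # both -1e-9
        print(eu_punches(2), eu_torture_gamble(2e-9))         # both -2e-9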

    Eisegetes: This is my third posting now, and I hope I will be forgiven by the powers that be...

    Your (a): I was not talking about a universal, but of a personal scalar ordering. Somewhere inside everybody's brain there must be a mechanism that decides which of the considered options wins the competition for "most moral option of the moment". Once the existence of this (personal) ordering is acknowledged (rationality), we can either disavow it (amorality) or try our best with what we have [always keeping in mind that the mechanisms at work are imp... (read more)

    me: A < B < C < D doesn't imply that there's some k such that kA>D

    Tomh: Yes it does.

    As Salutator stated, perhaps I should not have used the notation I did in my example. What I mean by '<' in the context of harms is "is preferred to". What I meant when I said that there was no k such that kA > D is that the notion of multiplication does not make sense when applied to "is preferred to". Apologies for the confusion.

    It looks to me like Eliezer plans to put humanism at the center of the intelligence explosion. I think that is a bad idea. I am horrified. I am appalled.

    I wouldn't worry about it if I were you. One of the worst cases of yang excess I've ever seen.

    Are you familiar with the concept of a Monkey Trap?

    When I wrote U(N,I,T), I was trying to refer to the preferences of the person being presented with the scenario; if the person being asked the question were a wicked sadist, he might prefer more suffering to less suffering. Specifically, I was trying to come up with a "least common denominator" list of relevant factors that can matter in this kind of scenario. Apparently "how close I am to the person who suffers the pain" is another significant factor in the preferences, at least for Richard.

    If we stipulate that, say, the pain is to be e... (read more)

    The answer is simple. If you accept the bounds of the dust-speck argument where there is no further consequence of the dust-speck beyond the moment of irritation, then the cost of the irritation cannot be distinguished from 0 cost. If I can be assured that an event will have no negative consequences in my life beyond the quality of a moment of experience, then I wouldn't even think that the event is worth my consideration. Utility = 0. Multiply any number by 0, and you get 0. The only way for the dust-speck to have negative utility is if it has some sort of impact on my life beyond that moment. The dust-speck argument can't work without violating its own assumptions. Torture is worse. Case closed.

    Adam, by that argument the torture is worth 0 as well, since after 1,000,000 years, no one will remember the torture or any of its consequences. So you should be entirely indifferent between the two, since each is worth zero.

    But I guess the utility could be considered to be non-0 and without further impact if some individual would choose for it not to happen to them. All else being equal, I would rather not have my eye irritated (even if it had no further consequences). And even if the cost is super-astronomically small, Eliezer could think up a super-duper astronomically large number by which it could be multiplied. I guess he was right.
    I'm confused.
    I think I'm done.

    Richard Hollerith: "It looks to me like Eliezer plans to put humanism at the center of the intelligence explosion."

    "Renormalized" humanism, perhaps; the outcome of which need not be anthropocentric in any way. You are a human being, and you have come up with some non-anthropocentric value system for yourself. This more or less demonstrates that you can start with a human utility function and still produce such an outcome. But there is no point in trying to completely ditch human-specific preferences before doing anything else; if you did that, you wouldn't even be able to reject paperclip maximization.

    But you've changed the question.

    I've added a wildcard, certainly, but I haven't changed the game. Say I'm standing there, lever in hand. While I can't be certain, I can fairly safely assume that if I went person to person and asked, the vast majority of those 3^^^3 would be personally willing to suffer a dust speck to save one person's torture. So I'm not necessarily polling, I'm just conjecturing. With this in mind, I choose specks.

    [If I were to poll people, every now and then I would probably come across a Cold Hard Rationalist who said, "well, I'm ... (read more)

    Ben: suppose the lever has a continuous scale of values between 1 and 3^^^3. When the lever is set to 1, 1 person is being tortured (and the torture will last for 50 years). If you set it to 2, two people will be tortured by an amount less than the first person by 1/3^^^3 of the difference between the 50 years and a dust speck. If you set it to 3, three people will be tortured by an amount less than the first person by 2/3^^^3 of the difference between the 50 years and the dust speck. Naturally, if you pull the lever all the way to 3^^^3, that number of people... (read more)

    Unknown, that's a very interesting take indeed, and a good argument for Eliezer's proposition, but it doesn't say much about what to do if you can assume most of the 3^^^3 would ask for dust. Can you tell me what you would do purely in the context of my previous post?

    If you set it to 2, two people will be tortured by an amount less than the first person by 1/3^^^3 of the difference between the 50 years and a dust speck.

    Of course not, this would be a no-brainer ratio for the lever to operate with. You should have said that position 2 on the lever tortures 2 peo... (read more)

    To your voting scenario: I vote to torture the terrorist who proposes this choice to everyone. In other words, asking each one personally, "Would you rather be dust specked or have someone randomly tortured?" would be much like a terrorist demanding $1 per person (from the whole world), otherwise he will kill someone. In this case, of course, one would kill the terrorist.

    I'm still thinking about the best way to set up the lever to make the point the most obvious.

    What if everyone would be willing to individually suffer 10 years of torture to spare the one person? Obviously it's not better to torture 3^^^3 people for 10 years than one person for 50 years.

    Obviously it's not better to torture 3^^^3 people for 10 years than one person for 50 years.

    Obviously? There's that word again.

    If it's really so obvious, please explain and elaborate on why it's not better.

    Ben, here's my new and improved lever. It has 7,625,597,484,987 settings. On setting 1, 1 person is tortured for 50 years plus the pain of one dust speck. On setting 2, 3 persons are tortured for 50 years minus the pain of (50-year torture/7,625,597,484,987), i.e. they are tortured for a minute fraction of a second less than 50 years, again plus the pain of one dust speck. On setting 3, 3^3 persons, i.e. 27 persons, are tortured for 50 years minus two such fractions of a second, plus the pain of one dust speck. On setting 4, 3^27, i.e. 7,625,597,484,987 pe... (read more)

    Your (a): I was not talking about a universal, but of a personal scalar ordering. Somewhere inside everybody's brain there must be a mechanism that decides which of the considered options wins the competition for "most moral option of the moment".

    That's a common utilitarian assumption/axiom, but I'm not sure it's true. I think for most people, analysis stops at "this action is not wrong," and potential actions are not ranked much beyond that. Thus, most people would not say that one is behaving immorally by volunteering at a soup kitc... (read more)

    It has 7,625,597,484,987 settings. On setting 1, 1 person is tortured for 50 years plus the pain of one dust speck. On setting 2, 3 persons are tortured for 50 years minus the pain of (50-year torture/7,625,597,484,987), i.e. they are tortured for a minute fraction of a second less than 50 years, again plus the pain of one dust speck. On setting 3, 3^3 persons, i.e. 27 persons, are tortured for 50 years minus two such fractions of a second, plus the pain of one dust speck. On setting 4, 3^27, i.e. 7,625,597,484,987 persons are tortured for 50 years minus 3... (read more)

    Btw, I got the 0.0002 constant by finding the number of seconds in 50 years and dividing by 7,625,597,484,987 (assuming 365 days per year). It's rounded. The actual number is around 0.00020678.
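
    The arithmetic checks out (a quick verification in Python, using the same 365-day-year assumption):

        seconds_in_50_years = 50 * 365 * 24 * 60 * 60         # 1,576,800,000
        settings = 7_625_597_484_987                          # 3^27
        print(seconds_in_50_years / settings)                 # ~0.00020678 seconds per setting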

    Ben:
    "but a vote the other way potentially has 3^^^3 dust specks on your conscience - by your definition a much greater sin. Square one - shut up and vote!"

    When presented with voting, each of the 3^^^3-1 people favored the dust specks (and their larger natural harm) to the torture (and its larger aggregated "mental distress"). The mental distress exists only on the basis of "sacred values". To say that in the face of 3^^^3-1 people preferring specks to torture, you should vote torture on the naive utility construction (no exte... (read more)

    Ben P: the arrangement of the scale is meant to show that the further you move the lever toward 3^^^3 dust specks, the worse things get. The torture decreases linearly simply because there's no reason to decrease it by more; the number of people increases in the way that it does because of the nature of 3^^^3 (i.e. the number is large enough to allow for this). The more we can increase it at each stop, the more obvious it is that we shouldn't move the lever at all, but rather we should leave it at torturing 1 person 50 years.

    The torture decreases linearly simply because there's no reason to decrease it by more; the number of people increases in the way that it does because of the nature of 3^^^3 (i.e. the number is large enough to allow for this)

    I don't see how that follows. Even the progression from the first setting to the second setting seems arbitrary. You've established a progression from one scenario (torturing a person for 50 years) to another (3^^^3 dust specks) but to me it just seems like one possible progression. I see no reason to set up the intermediate stages lik... (read more)

    My own anti-preference function seems to have a form something like this:

    U(N,I,T) = kI(1 - e^(-NT/a))
    where a and k are constants with appropriate units.

    Relevant "intuitions" not listed before:
    1) For the purposes of this thought experiment, who suffers a pain doesn't matter. Therefore:
    1a) Transferring an instant of pain from one person to another, without changing the (subjective) intensity of the pain, doesn't change the "badness" of the situation. Two people suffering torture for 25 years simultaneously equals one person suffering 2... (read more)
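
    Doug's function is easy to play with in code; the constants below are assumptions (k = 1, a = 100 person-years, matching the numbers used a few comments later), and 1e300 merely stands in for 3^^^3, which no float can hold.

        import math

        def disutility(N, I, T, k=1.0, a=100.0):
            """N people suffering pain of intensity I (on a 0..1 scale) for T years each."""
            return k * I * (1 - math.exp(-N * T / a))

        torture = disutility(N=1, I=1.0, T=50)                # ~0.3935 * k
        # Dust specks: intensity 0.001, lasting about one second (expressed in years).
        # However large N gets, the bracketed term never exceeds 1, so the total
        # disutility is bounded above by 0.001 * k -- below the torture value.
        specks = disutility(N=1e300, I=0.001, T=1 / 31_536_000)
        print(torture, specks)                                # ~0.3935 vs ~0.001

    This bounded behavior is exactly what the 0.393/0.394 discussion below turns on.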

    Naturally the T(s) function I posted earlier was wrong. It should have been T(s)=1576800000-0.0002(s-1). However, that doesn't change my question.

    There is yet another angle on this dilemma which hasn't been raised yet. How bad is the outcome you are willing to prefer, in order to avoid those 3^^^3 dust specks? Are you willing to have the torture victim killed after the 50 years? How about all life on Earth? How about all life in the visible universe? I presume that truly convinced additivists will say yes in every case, because they "know" that 3^^^3 dust specks would still be incomprehensibly worse.

    Actually, I see Eliezer raised that issue back here.

    Notice that in Doug's function, suffering with intensity less than 0.393 can never add up to 50 years of torture, even when multiplied infinitely, while suffering of 0.394 will be worse than torture if it is sufficiently multiplied. So there is some number of 0.394 intensity pains such that no number of 0.393 intensity pains can ever be worse, despite the fact that these pains differ by 0.001, stipulated by Doug to be the pain of a dust speck. This is the conclusion that I pointed out follows with mathematical necessity from the position of those who prefer the specks.

    Doug, do you actually accept this conclusion (about the 0.393 and 0.394 pains), or you just trying to show that the position is not logically impossible?

    Yes, mitchell porter, of course there is no method (so far) (that we know of) for moral perception or moral action that does not rely on the human mind. But that does not refute my point, which again is as follows: most of the readers of these words seem to believe that the maximization of happiness or pleasure and the minimization of pain is the ultimate good. Now when you combine that belief with egalitarianism, which can be described as the belief that you yourself have no special moral value relative to any other human, and neither do kings or movie ... (read more)

    Unknown, I'll bite. While you do point out some extremely counterintuitive consequences of positing that harms aggregate to an asymptote, accepting the dust specks as being worse than the torture is also extremely counterintuitive to most people.

    For the moment, I accept the asymptote position, including the mathematical necessity you've pointed out.

    So far this discussion has focused on harm to persons. But there are other forms of utility and disutility. Here's the intuition pump I used on myself: the person concept is not so atomic to resist quantificati... (read more)

    So there is some number of 0.394 intensity pains such that no number of 0.393 intensity pains can ever be worse, despite the fact that these pains differ by 0.001, stipulated by Doug to be the pain of a dust speck.

    Let's see just what that number is...

    0.394(1 - e^(-NT/100 person*years)) > 0.393
    1 - e^(-NT/100 person*years) > 0.393/0.394 ≈ 0.99746
    e^(-NT/100 person*years) < 0.002538
    -NT/100 person*years < ln(0.002538) ≈ -5.976
    NT > 597.6 person*years

    In terms of the constants, it comes out to NT > -a*ln(1-I1/I2), where I1 is the lesser pain and I2 is the greater pain. This does st... (read more)
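
    The same threshold can be recomputed in one line (Python, same assumed constants as above):

        import math
        print(-100 * math.log(1 - 0.393 / 0.394))             # ~597.6 person-years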

    Richard, my understanding is that CEV is not democracy, not by design anyway. Think of any individual human being as a combination of some species-universal traits and some contingent individual properties. CEV, I would think, is about taking the preference-relevant cognitive universals and extrapolating an ideal moral agent relative to those. The contingent idiosyncrasies or limitations of particular human beings should not be a factor.

    At your website, you propose that "objective reality" is the locus of intrinsic value, sentient beings have on... (read more)

    Z.M. Davis, that's an interesting point about the slugs, I might get to it later. However, I suspect it has little to do with the torture and dust specks.

    Doug, here's another problem for your proposed function: according to your function, it doesn't matter whether a single person takes all the pain or whether it is distributed, as long as it sums to the same amount.

    So let's suppose that the pain of solitary confinement without anything interesting to do can never add up to the pain of 50 years torture. According to this, would you hon... (read more)

    Utility doesn't aggregate. Neither do human lives. You don't use 4, you have to use 1+1+1+1. If you aggregate human lives, you get diminishing marginal value for human life. Government does it. The military does it. You send a squad on a suicide mission to save the division. À la guerre comme à la guerre. So I agree with Jadagul. Preference is a tricky subject, in which there is always marginal utility.
    But since you used the economic term "utility", here is a simple economic question about aggregate utility:
    You are the Government. You need to raise $1 million for, le... (read more)

    Unknown, I think the slugs are relevant. I should think most of us would agree that all other things being equal, a world with less pain is better than one with more, and a world with more intelligent life is better than one with less.

    Defenders of SPECKS argue that the quality of pain absolutely matters: that the pain of no amount of dust specks could add up to that of torture. To do this, they must accept the awkward position that the badness of an experience partially depends on how many other people have suffered it. Defenders of TORTURE say, "Shut... (read more)

    Still haven't heard from even one proponent of TORTURE who would be willing to pick up the blowtorch themselves. Kind of casts doubt on the degree to which you really believe what you are asserting.

    I mean, perhaps it is the case that although picking up the blowtorch is ethically obligatory, you are too squeamish to do what is required. But that should be overrideable by a strong enough ethical imperative. (I don't know if I would pick up the blowtorch to save the life of one stranger, for instance, but I would feel compelled to do it to save the popu... (read more)

    About the slugs, there is nothing strange in asserting that the utility of the existence of something depends partly on what else exists. Consider chapters in a book: one chapter might be useless without the others, and one chapter repeated several times would actually add disutility.

    So I agree that a world with human beings in it is better than one with only slugs: but this says nothing about the torture and dust specks.

    Eisegetes, we had that discussion previously in regard to the difference between comparing actions and comparing outcomes. I am fairly su... (read more)

    Unknown,

    "So given an asymptote utility function (which I don't accept), it shouldn't matter if one more person is tortured for 50 years."

    With such an asymptotic utility function your calculations will be dominated by the possible worlds in which there are few other beings.

    I also see no explanation as to why knowledge of objective reality is of any value, even derivative; objective reality is there, and is what it is, regardless of whether it's known or not.

    You and I can influence the future course of objective reality, or at least that is what I want you to assume. Why should you assume it, you ask? For the same reason you should assume that reality has a compact algorithmic description (an assumption we might call Occam's Razor): no one knows how to be rational without assuming it; in other words, it is an inductive bias... (read more)

    Unknown, it seems like what you are doing is making a distinction between a particular action being obligatory -- you do not feel like you "ought" to torture someone -- and its outcome being preferable -- you feel like it would be better, all other things being equal, if you did torture the person.

    Is that correct? If it isn't, I have trouble seeing why the g64 variant of the problem wouldn't overcome your hesitation to torture. Or are you simply stating a deontological side-constraint -- I will never torture, period, not even to save the lives ... (read more)

    So let's suppose that the pain of solitary confinement without anything interesting to do can never add up to the pain of 50 years torture. According to this, would you honestly choose to suffer the solitary confinement for 3^^^3 years, rather than the 50 years torture?

    You've already defined the answer; "the pain of solitary confinement without anything interesting to do can never add up to the pain of 50 years torture." If that's so, then shouldn't I say yes?

    To some extent, my preferences do tell me to work on a "minimize the worst pain ... (read more)

    I understand that choosing specks theoretically leads to an overall decrease in happiness in the universe. One (irrational, given my previous conclusion) thought, however, always seems to dominate my interior monologue about specks vs. torture - if someone were to ask me whether or not I would take a dust speck in the eye to save someone from 50 years of torture, I would do it (as I would expect most people to). I realize that I would have to take 3^^^3 dust specks for the problem to match the original question (and I wouldn't be willing to get 3^^^3 du... (read more)

    Phil, a sufficiently altruistic person would accept 25 years of torture to spare someone else 50, but that doesn't mean it's better to torture 3^^^3 people for 25 years (even if they're all willing) than one person for 50 years.

    If you call a utilitarian's utility function T, then you can pick the dust specks over torture if your utility function is -T.

    I'm taking the discussion with Richard to email; if it issues in anything I suppose it will end up on his website.

    Eisegetes (please excuse the delay):

    That's a common utilitarian assumption/axiom, but I'm not sure it's true. I think for most people, analysis stops at "this action is not wrong," and potential actions are not ranked much beyond that. [...] Thus, it is simply wrong to say that we have ordered preferences over all of those possible actions -- in fact, it would be impossible to have a unique brain state correspond to all possibilities. And remember -- we are dealing here not with all possible brain states, but with all possible states of the porti... (read more)

    That's a confusion. I was explicitly talking of "moral" circuits.

    Well, that presupposes that we have some ability to distinguish between moral circuits and other circuits. To do that, you need some criterion for what morality consists in other than evolutionary imperatives, because all brain connections are at least partially caused by evolution. Ask yourself: what decision procedure would I articulate to justify to Eisegetes that the circuits responsible for regulating blinking, for creating feelings of hunger, or giving rise to sexual desire are,... (read more)

    Eisegetes:
    Well I (or you?) really maneuvered me into a tight spot here.
    About those options, you made a good point.
    To the question "Which circuits are moral?", I kind of saw that one coming. If you allow me to mirror it: How do you know which decisions involve moral judgements?
    I don't know of any satisfying definition of morality. It probably must involve actions that are neither tailored for personal nor inclusive fitness. I suppose the best I can come up with is "A moral action is one which you choose (== that makes you feel good) with... (read more)

    "'A moral action is one which you choose (== that makes you feel good) without being likely to benefit your genes.'"

    So using birth control is an inherently moral act? Overeating sweet and fatty foods to the point of damaging your health is an inherently moral act? Please. "Adaptation-executers," &c.

    ZMD:
    C'mon gimme a break, I said it's not satisfying!
    I get your point, but I dare you to come up with a meaningful but unassailable one-line definition of morality yourself!
    BTW birth control certainly IS moral, and overeating is just overdoing a beneficial adaptation (i.e. eating).

    orthonormal (13y, 0 points):
    If that's what you see as the goal, then you didn't get his point. (Context, since the parent came before the OB-LW jump: Frank asserted that "A moral action is one which you choose (== that makes you feel good) without being likely to benefit your genes", and Z.M. Davis pointed out the flaws in that statement.)

    To the question "Which circuits are moral?", I kind of saw that one coming. If you allow me to mirror it: How do you know which decisions involve moral judgements?

    Well, I would ask whether the decision in question is one that people (including me) normally refer to as a moral decision. "Moral" is a category of meaning whose content we determine through social negotiations, produced by some combination of each person's inner shame/disgust/disapproval registers, and the views and attitudes expressed more generally throughout their societ... (read more)

    Eisegetes:
    "Moral" is a category of meaning whose content we determine through social negotiations, produced by some combination of each person's inner shame/disgust/disapproval registers, and the views and attitudes expressed more generally throughout their society.

    From a practical POV, without any ambitions to look under the hood, we can just draw this "ordinary language defense line", as I'd call it. Where it gets interesting from an Evolutionary Psychology POV is exactly those "inner shame/disgust/disapproval registers". Th... (read more)

    Is the disagreement about 4 simply because of timeless decision theory etc?

    Using a number big enough not to do the math is just a way of assigning a probability of 1 by another name.

    endoself (12y, 3 points):
    Well, probabilities of 1 can be useful in thought experiments.

    Among other things, if you try to violate "utilitarianism", you run into paradoxes, contradictions, circular preferences, and other things that aren't symptoms of moral wrongness so much as moral incoherence.

    It seems to be an unsubstantiated slur on other moral systems :-(

    I notice I'm confused here. Morality is a computation. And my computation, when given the TORTURE vs SPECKS problem as input, unambiguously computes SPECKS. If probed about reasons and justifications, it mentions things like "it's unfair to the tortured person", "specks are negligible", "the 3^^^3 people would prefer to get a SPECK rather than let the person be tortured if I could ask them", etc.

    There is an opposite voice in the mix, saying "but if you multiply, then...", but it is overwhelmingly weaker.

    I assume, sin... (read more)

    ArisKatsaris (12y, 2 points):
    If we're weighing equally the lives of everyone, both guilty and innocent, and ignore other side effects, this reduces to:
    * if we execute him, 100% chance of one death
    * if we don't execute him, 45% chance of two deaths.
    NancyLebovitz (12y, 0 points):
    How big are the error bars on the odds that the murderer will kill two more people?
    gRR (12y, 0 points):
    Does it matter? The point is that (according to my morality computation) it is unfair to execute a person who is 50% likely to be innocent, even though the "total number of saved lives" utility of this action may be greater than that of the alternative. And fairness of the procedure counts for something, even in terms of the "total number of saved lives".
    pedanterrific (12y, 4 points):
    So, let's say this hypothetical situation was put to you several times in sequence. The first time you decline on the basis of fairness, and the guy turns out to be innocent. Yay! The second time he walks out and murders three random people. Oops. After the hundredth time, you've saved fifty lives (because if the guy turns out to be a murderer you end up executing him anyway) and caused a hundred and thirty-five random people to be killed. Success?
    gRR (12y, 5 points):
    No :( Not when you put it like that... Do you conclude then that fairness is worth zero human lives? Not even a 0.0000000001% probability of saving a life should be sacrificed for its sake? Maybe it's my example that was stupid and better ones exist.
    orthonormal (12y, 1 point):
    Upvoted for gracefully conceding a point. (EDIT: I mean, conceding the specific example, not necessarily the argument.) I think that fairness matters a lot, but a big chunk of the reason for that can be expressed in terms of further consequences: if the connection between crime and punishment becomes more random, then punishment stops working so well as a deterrent, and more people will commit murder. Being fair even when it's costly affects other people's decisions, not just the current case, and so a good consequentialist is very careful about fairness.
    gRR (12y, 0 points):
    I thought of trying to assume that fairness only matters when other people are watching. But then, in my (admittedly already discredited) example, wouldn't the solution be "release the man in front of everybody, but later kill him quietly. Or, even better, quietly administer a slow fatal poison before releasing?" Somehow, this is still unfair.
    orthonormal (12y, 2 points):
    Well, that gets into issues of decision theory, and my intuition is that if you're playing non-zero-sum games with other agents smart enough to deduce what you might think, it's often wise to be predictably fair/honest. (The idea you mention seems like "convince your partner to cooperate, then secretly defect", which only works if you're sure you can truly predict them and that they will falsely predict you. More often, it winds up as defect-defect.)
    gRR (12y, 3 points):
    Hmm. Decision theory and corresponding evolutionary advantages explain how the feelings and concepts of fairness/honesty first appeared. But now that they are already here, do we have to assume that these values are purely instrumental? Well, maybe. I'm less sure than before. But I'm still miles from relinquishing SPECKS :) EDIT: Understood your comment better after reading the articles. Love the PD-3 and rationalist ethical inequality, thanks!
    [anonymous] (12y, -1 point):
    Instrumental to what? To providing "utility"? Concepts of fairness arose to enhance inclusive fitness, not utility. If these norms are only instrumental, then so are the norms of harm-avoidance that we're focused on. Since these norms often (but not always) "over-determine" action, it's easy to conceive of one of them explaining the other--so that, for example, fairness norms are seen as reifications of tactics for maximizing utility. But the empirical research indicates that people use at least five independent dimensions to make moral judgments: harm-avoidance, fairness, loyalty, respect, and purity. EY's program to "renormalize" morality assumes that our moral intuitions evolved to solve a function, but fall short because of design defects (relative to present needs). But it's more likely that they evolved to solve different problems of social living.
    gRR (12y, 1 point):
    I meant "instrumental values" as opposed to "terminal values", something valued as means to an end vs. something valued for its own sake. It is universally acknowledged that human life is a terminal value. Also, the "happiness" of said life, whatever that means. In your terms, these two would be the harm-avoidance dimension, I suppose. (Is it a good name?) Then, there are loyalty, respect, and purity, which I, for one, immediately reject as terminal values. And then, there is fairness, which is difficult. Intuitively, I would prefer to live in a universe which is more fair than in one which is less fair. But, if it would cost lives, quality and happiness of these lives, etc, then... unclear. Fortunately, orthonormal's article shows that if you take the long view, fairness doesn't really oppose the principal terminal value in the standard moral "examples", which (like mine) usually only look one short step ahead.
    [anonymous] (12y, 1 point):
    On the web site I linked to, the research suggests that for many people in our culture loyalty, purity, and respect are terminal values. Whether they're regarded as such or not seems a function of ideology, with liberals restricting morality to harm-avoidance and fairness. For myself, I have a hard time thinking of purity as a terminal value, but I definitely credit loyalty. I think it's worse to secretly wrong a friend who trusts you than a stranger. I suppose that's the sort of stance a utilitarian would want to talk me out of, but this seems a function of their societal vision rather than of moral intuition. Utilitarianism seems to me a bureaucrat's disease. The utilitarian asks what morality would make for the best society if everyone internalized it. From this perspective, the status of the fairness value is a hard problem: are you just concerned with total utility or does distribution matter--but my "intuition" is that fairness does matter because the guy at the bottom reaps no necessary benefit from increasing total utility (like the tortured guy in the SPECKS question). But again, this seems an ideological matter. But the question of which moral systematization would produce the best society is an interesting question only for utopians. The "official" operative morality is a compromise between ideological pressures and basic moral intuitions. Truly "adopting" utilitarianism as a society isn't an option: the further you deviate from moral intuition, the harder it is to get compliance. And what morality an individual person ought to adopt--that can't be a decision based on morality; rather, it should respond to prudential considerations.
    gRR (12y, 0 points):
    No, I don't think a consequentialist would want to talk you out of it. After all, the point is that loyalty is not a terminal value, not that it's not a value at all. Wronging a friend would immediately lead to much more unhappiness than wronging a stranger. And the long-term consequences of an unloyal-to-friends policy would be a much lower quality of life.
    gRR (12y, 0 points):
    Right. Changed to "three random people".
    Vladimir_Nesov (12y, 1 point):
    It's not any computation. It's certainly not just what your brain does. What you actually observe is that your brain thinks certain thoughts, not that morality makes certain judgments. (I don't agree it's a "computation", but that is unimportant for this thread.)
    gRR (12y, 0 points):
    I understood the "computation" theory as: there's this abstract algorithm, approximately embedded in the unreliable hardware of my brain, and the morality judgments are its results, which are normally produced in the form of quick intuitions. But the algorithm is able to flexibly respond to arguments, etc. Then the observation of my brain thinking certain thoughts is how the algorithm feels from the inside. I think it is at least a useful metaphor. You disagree? Do you have an exposition of your views on this?
    Vladimir_Nesov (12y, 0 points):
    It's some evidence about what the algorithm judges, but not the algorithm itself. Humans make errors, while morality is the criterion of correctness of judgment, which can't be reliably observed by unaided eye, even if that's the best we have.

    Gowder did not say what he meant by "utilitarianism". Does utilitarianism say...

    1. That right actions are strictly determined by good consequences?
    2. That praiseworthy actions depend on justifiable expectations of good consequences?
    3. That probabilities of consequences should normatively be discounted by their probability, so that a 50% probability of something bad should weigh exactly half as much in our tradeoffs?
    4. That virtuous actions always correspond to maximizing expected utility under some utility function?
    5. That two harmful events are worse t
    ... (read more)

    A link to Gowder's argument would be a good thing to have here. Never mind, I found it.

    Some of what you're saying here makes me think that the post about Nature vs. Nature (that might not be the exact title but it was something similar) would be more relevant to his argument. He might be contending that you're trying to use intuitions which presume utilitarianism to justify utilitarianism, but you're ignoring other intuitions such as scope insensitivity. Scope insensitivity is only a problem if we presume utilitarianism correct. If we presume scope insensi... (read more)

    I believe that the vast majority of people in the dust speck thought experiment would be very willing to endure the collision of the dust speck, if only to play a small role in saving a man from 50 years of torture. I would choose the dust specks on the behalf of those hurt by the dust specks, as I can be very close to certain that most of them would consent to it.

    A counterargument might be that, since 3^^^3 is such a vast number, the collective pain of the small fraction of people who would not consent to the dust speck still multiplies to be far larger t... (read more)

    Dacyn (6y, 0 points):
    The only people who would consent to the dust speck are people who would choose SPECKS over TORTURE in the first place. Are you really saying that you "do not value the comfort of" Eliezer, Robin, and others? However, your argument raises another interesting point, which is that the existence of people who would prefer that SPECKS was chosen over TORTURE, even if their preference is irrational, might change the outcome of the computation because it means that a choice of TORTURE amounts to violating their preferences. If TORTURE violates ~3^^^3 people's preferences, then perhaps it is after all a harm comparable to SPECKS. This would certainly be true if everyone finds out about whether SPECKS or TORTURE was chosen, in which case TORTURE makes it harder for a lot of people to sleep at night. On the other hand, maybe you should force them to endure the guilt, because maybe then they will be motivated to research why the agent who made the decision chose TORTURE, and so the end result will be some people learning some decision theory / critical thinking... Also, if SPECKS vs TORTURE decisions come up a lot in this hypothetical universe, then realistically people will only feel guilty over the first one.
    g_pepper (6y, 0 points):
    The argument that 50 years of torture of one person is preferable to 3^^^3 people suffering dust specks presumes utilitarianism. A non-utilitarian will not necessarily prefer torture to dust specks even if his/her critical thinking skills are up to par.
    Dacyn (6y, 0 points):
    I'm not a utilitarian. The argument that 50 years of torture is preferable to 3^^^3 people suffering dust specks only presumes that preferences are transitive, and that there exists a sequence of gradations between torture and dust specks with the properties that (A) N people suffering one level of the spectrum is always preferable to N*(a googol) people suffering the next level, and (B) the spectrum has at most a googol levels. I think it's pretty hard to consistently deny these assumptions, and I'm not aware of any serious argument put forth to deny them. It's true that a deontologist might refrain from torturing someone even if he believes it would result in the better outcome. I was assuming a scenario where either way you are not torturing someone, just refraining from preventing them from being tortured by someone else.
    entirelyuseless (6y, 0 points):
    Right. Utilitarianism is false, but Eliezer was still right about torture and dust specks.
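
    A rough size check on Dacyn's two properties (my own worked numbers, taking the worst case of exactly a googol gradations): chaining (A) across the whole spectrum multiplies the headcount by at most googol^googol ≈ 10^(10^102), and 3^^^3 utterly dwarfs that, so the chain plus transitivity (plus the premise that more people suffering the same harm is no better) reaches all the way from the single torture victim to the 3^^^3 dust specks.

        googol = 10 ** 100
        # Headcount needed at the final (dust-speck) level after a googol steps:
        # googol**googol = 10**(100 * googol) -- far too large to evaluate directly,
        # so just look at the size of its exponent instead.
        exponent = 100 * googol
        print(len(str(exponent)) - 1)   # 102, i.e. the headcount is about 10^(10^102)
        # 3^^^3 is a power tower of 3s of height 3^^3 = 7,625,597,484,987,
        # which is incomparably larger than 10^(10^102).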

    "It is more important that lives be saved, than that we conform to any particular ritual in saving them" is a major moral rule by itself, directly contradicted by, I believe, many if not most religions claiming to be sources of morality. "It does not matter that you saved more lives if you prayed to different gods/did not pray enough to ours" seems to be quite a repeating idea (also with gods replaced by political systems - advocates of Leninism tend to claim that capitalism is immoral despite having no Golodomor in its actual history).

    Among other things, if you try to violate “utilitarianism”, you run into paradoxes, contradictions, circular preferences, and other things that aren’t symptoms of moral wrongness so much as moral incoherence.

    Nobody seems to have problems with circular preferences in practice, probably because people's preferences aren't precise enough. So we don't have to adopt utilitarianism to fix this non-problem.

    But you don’t conclude that there are actually two tiers of utility with lexical ordering. You don’t conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from 0 to infinity. You don’t conclude that utilities must be expressed using hyper-real numbers. Because the lower tier would simply vanish in any equation. It would never be worth the tiniest effort to recalculate for it. All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority

    People aren't going to be doing ethical calculations using hyperreal numbers, and they aren't going to be doing them with real numbers eithe... (read more)