Eliezer, to be clear, do you still think that 3^^^3 people having momentary eye irritations--from dust specks--is worth torturing a single person for 50 years, or is there a possibility that you did the math incorrectly for that example? A proper utilitarian needs to consider the full range of outcomes--and their probabilities--associated with different alternatives. If the momentary eye irritation leads to a greater than 1/3^^^3 probability that someone will have an accident that leads to an outcome worse than 50 years of torture, then the dust specks are...
Eliezer, to be clear, do you still think that 3^^^3 people having momentary eye irritations--from dust specks--is worth torturing a single person for 50 years, or is there a possibility that you did the math incorrectly for that example?
No. I used a number large enough to make math unnecessary.
I specified the dust specks had no distant consequences (no car crashes etc.) in the original puzzle.
Unless the torture somehow causes Vast consequences larger than the observable universe, or the suicide of someone who otherwise would have been literally immortal, it doesn't matter whether the torture has distant consequences or not.
I confess I didn't think of the suicide one, but I was very careful to choose an example that didn't involve actually killing anyone, because there someone was bound to point out that there was a greater-than-tiny probability that literal immortality is possible and would otherwise be available to that person.
So I will specify only that the torture does not have any lasting consequences larger than a moderately sized galaxy, and then I'm done. Nothing bound by lightspeed limits in our material universe can morally outweigh 3^^^3 of anything noticeable. You'd ha...
I really don't see why I can't say "the negative utility of a dust speck is 1 over Graham's Number."
You can say anything, but Graham's number is very large; if the disutility of an air molecule slamming into your eye were 1 over Graham's number, enough air pressure to kill you would have negligible disutility.
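A rough order-of-magnitude sketch of that point, in Python. The collision-rate figure is a standard kinetic-theory estimate at roughly atmospheric pressure, and every number here is an assumption chosen only for scale, not a measurement:
# Rough count of air-molecule impacts on an exposed eye over a lifetime.
impacts_per_cm2_per_s = 3e23       # assumed kinetic-theory flux at ~1 atm
exposed_area_cm2 = 1.0             # assumed exposed corneal area
lifetime_s = 80 * 365 * 24 * 3600  # ~2.5e9 seconds
total_impacts = impacts_per_cm2_per_s * exposed_area_cm2 * lifetime_s
print(f"{total_impacts:.1e}")      # about 8e32 impacts
# At 1/Graham's number disutility per impact, the lifetime total is still
# indistinguishable from zero, so lethal air pressure would come out as
# nearly harmless -- which is the reductio being pointed at.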
or "I am not obligated to have my utility function make sense in contexts like those involving 3^^^^3 participants, because my utility function is intended to be used in This World, and that number is a physical impossibility in This World."
If your utility function ceases to correspond to utility at extreme values, isn't it more of an approximation of utility than actual utility? Sure, you don't need a model that works at the extremes - but when a model does hold for extreme values, that's generally a good sign for the accuracy of the model.
An addendum: two more things. The difference between a life with n dust specks hitting your eye and one with n+1 is not worth considering, given how large n is in any real life. Furthermore, if we allow for possible immortality, n could literally be infinite, so the difference would be literally 0.
If utility ...
If such actions get officially endorsed as being moral, isn't that going to have consequences which mean the torture won't be a one-off event?
Why would it?
And I don't think LeGuin's story is good - it's classic LeGuin, by which I mean enthymematic, question-begging, emotive substitution for thought, which annoyed me so much that I wrote my own reply.
Everything you'd want to know about assassination markets.
but how is this helping society be pretty swell for most people, and what is the one guy's job exactly?
Incentive to cooperate? A reduction in the necessity of war, which is by nature an inefficient use of resources? From the story:
The wise men of that city had devised the practice when it became apparent to them that the endless clashes of armies on battlefields led to no lasting conclusion, nor did they extirpate the roots of the conflicts. Rather, they merely wasted the blood and treasure of the people. It was clear to them that those rulers led their people into death and iniquity, while remaining untouched themselves, lounging in comfort and luxury amidst the most crushing defeat.
It was better that a few die before their time than the many. It was better that a little wealth go to the evil than much; better that conflicts be ended dishonorably once and for all, than fought honorably time and again; and better that peace be ill-bought than bought honestly at too high a price to be borne. So they thought.
Moving on.
(Can you not bet on the deaths of arbitrary people, only people it is bad to have around?)
Nope, "...
"Omelas" contrasts the happiness of the citizens with the misery of the child. I couldn't tell from your story that the tradesman felt unusually miserable, nor that the other people of his city felt unusually happy. Nor do I know how this affects your reply to LeGuin, since I can't detect the reply.
Favoring an unconditional social injunction against valuing money over lives is consistent with risking one's own life for money; you could reason that if trading off money and other people's lives is permitted at all, this power will be abused so badly that an unconditional injunction has the best expected consequences. I don't think this is true (because I don't think such an injunction is practical), but it's at least plausible.
So it seems you have two intuitions. One is that you like certain kinds of "feel good" feedback that aren't necessarily mathematically proportional to the quantifiable consequences. Another is that you like mathematical proportionality. The "Shut up and multiply" mantra is simply a statement that your second preference is stronger than the first.
In some ways it seems reasonable to define morality in a way that treats all people equally. If we do so, then our preference for multiplying can be more moral, by definition, than our less ...
"Well, when you're dealing with a number like 3^^^3 in a thought experiment, you can toss out the event descriptions. If the thing being multiplied by 3^^^3 is good, it wins. If the thing being multiplied by 3^^^3 is bad, it loses. Period. End of discussion. There are no natural utility differences that large."
Let's assume the eye-irritation lasts 1-second (with no further negative consequences). I would agree that 3^^^3 people suffering this 1-second irritation is 3^^^3-times worse than 1 person suffering thusly. But this irritation should not...
3^^^3?
http://www.overcomingbias.com/2008/01/protecting-acro.html#comment-97982570
A 2% annual return adds up to a googol (10^100) return over 12,000 years
Well, just to point out the obvious, there aren't nearly that many atoms in a 12,000 lightyear radius.
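For what it's worth, the compounding arithmetic in the quoted claim checks out; here is a minimal sketch using only the numbers quoted above:
import math

# Base-10 logarithm of the growth factor from a 2% annual return over 12,000 years.
log10_growth = 12000 * math.log10(1.02)
print(log10_growth)  # about 103.2, i.e. slightly more than a googol (10^100)
So the bottleneck is the physical substrate, as the reply above notes, not the arithmetic.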
Robin Hanson didn't get very close to 3^^^3 before you set limits on his use of "very very large numbers".
Secondly, you refuse to put "death" on the same continuum as "mote in the eye", but behave sanctimoniously (example below) when people refuse to put "50 years ...
The only reason Eliezer didn't put death on the same scale as the dust mote was on account of his condition that the dust specks have no further consequences. In real life, everything has consequences, and so in real life, death is on the same scale with everything else, including dust motes. Eliezer expressed this extremely well: "Whatever value is worth thinking about at all, must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off."
So yes, in real life there is some number of dust motes such that it would be better to prevent the dust storm than to save a life.
A dust speck in the eye with no external ill effects was chosen as the largest non-zero negative utility. Torture, absent external effects (e.g., suicide), for any finite time, is a finite amount of negative utility. Death in a world of literal immortality cuts off an infinite amount of utility. There is a break in the continuum here.
If you don't accept that dust specks are negative utility, you didn't follow the rules. Pick a new tiny ill effect (like a stubbed toe) and rethink the problem.
If you still don't like it because for a given utility n, n + n !=...
I assert the use of 3^^^3 in a moral argument is to avoid the effort of multiplying.
Yes, that's what I said. If the quantities were close enough to have to multiply, the case would be open for debate even to utilitarians.
Demonstration: what is 3^^^3 times 6?
3^^^3, or as close as makes no difference.
What is 3^^^3 times a trillion to the trillionth power?
3^^^3, or as close as makes no difference.
...that's kinda the point.
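To make the quoted exchange concrete (a sketch using only the quoted numbers, with 3^^^3 written in Knuth's arrow notation): a trillion to the trillionth power is $(10^{12})^{10^{12}} = 10^{1.2\times 10^{13}}$, so multiplying by it only adds $1.2\times 10^{13}$ to the base-10 logarithm:
\[ \log_{10}\!\left(3\uparrow\uparrow\uparrow 3 \cdot 10^{1.2\times 10^{13}}\right) = \log_{10}\!\left(3\uparrow\uparrow\uparrow 3\right) + 1.2\times 10^{13}, \]
and $\log_{10}(3\uparrow\uparrow\uparrow 3)$ is itself a power tower of threes roughly $7.6\times 10^{12}$ levels tall, so the added term is negligible beyond description.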
So it seems you have two intuitions. One is that you like certain kinds of "feel good" feedback that aren't necessarily mathematically proportional to the quantifiable consequences. Another is that you like mathematical proportionality.
Er, no. One intuition is that I like to save lives - in fact, as many lives as possible, as reflected by my always preferring a larger number of lives saved to a smaller number. The other "intuition" is actually a complex compound of intuitions, that is, a rational verbal judgment, which enables me to appreciate that any non-aggregative decision-making will fail to lead to the consequence of saving as many lives as possible given bounded resources to save them.
I'm feeling a bit of despair here... it seems that no...
Correction: What I said: "one-second of irritation is less than 3^^^3-times as bad as the 50 years of torture." What I meant: "50 years of torture is more than 3^^^3-times as bad as 1-second of eye-irritation." Apologies for the mis-type (as well as for saying "you're" when I meant "your").
But the point is, if there are no additional consequences to the suffering, then it's irrelevant. I don't care how many people experience the 1-second of suffering. There is no number large enough to make it matter.
Eliezer had a...
It's because something that's non-consequential is non-consequential
The dust specks are consequential; people suffer because of them. The further negative consequences of torture are only finitely bad.
Eliezer: would you torture a person for fifty years, if you lived in a large enough universe to contain 3^^^3 people, and if the omnipotent and omniscient ruler of that universe informed you that if you did not do so, he would carry out the dust-speck operation?
Seriously, would you pick up the blow torch and use it for the rest of your life, for the sake of the dust-specks?
Eliezer: it doesn't matter how big of a number you can write down. You are dealing with an asymptote. There is a limit to how bad momentary eye-irritation can be, no matter how many people it happens to. No matter how many people. That limit is far less than how bad 50 years of torture is.
Let f(x) = (5x - 1)/x. What is f(3^^^3)? It's 5, or close enough that it doesn't matter.
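Spelling out the arithmetic behind that one-liner (writing 3^^^3 in arrow notation):
\[ f(x) = \frac{5x - 1}{x} = 5 - \frac{1}{x}, \qquad f(3\uparrow\uparrow\uparrow 3) = 5 - \frac{1}{3\uparrow\uparrow\uparrow 3} \approx 5. \]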
Eliezer: after wrestling with this for a while, I think I've identified at least one of the reasons for all the fighting. First of all, I agree with you that the people who say, "3^^^3 isn't large enough" are off-base. If there's some N that justifies the tradeoff, 3^^^3 is almost certainly big enough; and even if it isn't, we can change the number to 4^^^4, or 3^^^^3, or Busy Beaver (Busy Beaver (3^^^3)), or something, and we're back to the original problem.
For me, at least, the problem comes down to what 'preference' means. I don't think I h...
I share El's despair. Look at the forest, folks. The point is that you have to recognize that harm aggregates (and not to an asymptote) or you are willing to do terrible things. The idea of torture is introduced precisely to make it hard to see. But it is important, particularly in light of how easily our brains fail to scale harm and benefit. Geez, I don't even have to look at the research El cites - the comments are enough.
Stop saying the specks are "zero harm." This is a thought experiment and they are defined as positive harm.
Stop saying that torture is different. This is a thought experiment and torture is defined to be absolutely terrible, but finite, harm.
Stop saying that torture is infinite harm. That's just silly.
Stop proving the point over and over in the comments!
/rant/
Not all harms aggregate, and in particular lots of nano-pains experienced by lots of sufferers aren't ontologically equivalent to a single agony experienced by a single subject. Utilitarianism isn't an objective fact about how the world works. There's an element of make-believe in treating all harms as aggregating. You can treat things that way if your intuitions tell you to, but the world doesn't force you to.
This whole dust vs. torture "dilemma" depends on a couple of assumptions: (1) That you can assign a cost to any event and that all such values lie within the same group (allowing multiples of one event to "add up" to another event) and (2) That the function that determines the cost of a certain number of a specific type of events does not have a hard upper limit (such as a logistic function). If either of these assumptions is wrong then the largeness of 3^^^3 or any other "large" number is totally irrelevant. One way to test (1) is to replace "torture" with "kill". If the answer is no then (1) is an invalid assumption.
Larry D'anna:
You are dealing with an asymptote. There is a limit to how bad momentary eye-irritation can be, no matter how many people it happens to.
By positing that dust-speck irritation aggregates non-linearly with respect to number of persons, and thereby precisely exemplifying the scope-insensitivity that Eliezer is talking about, you are not making an argument against his thesis; instead, you are merely providing an example of what he's warning against.
You are in effect saying that as the number of persons increases, the marginal badness of the suffering of each new victim decreases. But why is it more of an offense to put a speck in the eye of Person #1 than Person #6873?
Isn't there maybe a class of insignificant harms where net utility is neutral or even positive (learn to squint or wear goggles in a dust storm, learn that motes in one's eye are annoying but nothing really to worry about, increased understanding of Christian parables, e.g.; also consider schools of parenting that allow children to experiment with various behaviors that the parents would prefer they avoid, since directly experiencing the adverse event in a more controlled situation will prevent worse outcomes down the road)? I'm not sure you can trust most people's expressed preference on this.
That being said, I don't know where that class begins and ends.
Bob: Sure, if you specify a disutility function that mandates lots-o'-specks to be worse than torture, decision theory will prefer torture. But that is literally begging the question, since you can write down a utility function to come to any conclusion you like. On what basis are you choosing that functional form? That's where the actual moral reasoning goes. For instance, here's a disutility function, without any of your dreaded asymptotes, that strictly prefers specks to torture:
U(T,S) = ST + S
Freaking out about asymptotes reflects a basic misunderstan...
If harm aggregates less-than-linearly in general, then the difference between the harm caused by 6271 murders and that caused by 6270 is less than the difference between the harm caused by one murder and that caused by zero. That is, it is worse to put a dust mote in someone's eye if no one else has one, than it is if lots of other people have one.
If relative utility is as nonlocal as that, it's entirely incalculable anyway. No one has any idea of how many beings are in the universe. It may be that murdering a few thousand people barely registers as harm, ...
So what exactly do you multiply when you shut up and multiply? Can it be anything other than a function of the consequences? Because if it is a function of the consequences, you do believe, or at least act as if believing, your #4.
In which case I still want an answer to my previously raised and unanswered point: as Arrow demonstrated, a contradiction-free aggregate utility function derived from different individual utility functions is not possible. So either you need to impose uniform utility functions or your "normalization" of intuition leads to a logical contradiction - which is simple, because it is math.
Neel Krishnaswami: you reference an article called "An Airtight Dutch Book," but I can't find it online without a subscription. Can you summarize the argument?
Neel, I think you and I are looking at this as two different questions. I'm fine with bounded utility at the individual level, not so good with bounds on some aggregate utility measure across an unbounded population (but certainly willing to listen to a counter position), which is what we're talking about here. Now, what form an aggregate utility function should take is a legitimate question (although, as Salutator points out, unlikely to be a productive discussion), but I doubt that you would argue it should be bounded.
I have really enjoyed following th...
The issue with a utility function U(T,S) = ST + S is that there is no motivation to have torture's utility depend on dust's utility. They are distinct and independent events, and in no way will additional specks worsen torture. If it is posited that dust specks asymptotically approach a bound lower than torture's bound, order issues present themselves and there should be rational preferences that place certain evils at such order that people should be unable to do anything but act to prevent those evils.
There's additional problems here, like the idea that ...
Sean: why is that "what utils do"? To the extent that we view utils as the semi-scientific concept from economics, they don't "just sum linearly." To economists utils don't sum at all; you can't make interpersonal comparisons of utility. So if you claim that utils sum linearly, you're making a claim of moral philosophy, and haven't argued for it terribly strongly.
Sean, one problem is that people can't follow the arguments you suggest without these things being made explicit. So I'll try to do that:
Suppose the badness of distributed dust specks approaches a limit, say 10 disutility units.
On the other hand, let the badness of (a single case of ) 50 years of torture equal 10,000 disutility units. Then no number of dust specks will ever add up to the torture.
But what about 49 years of torture distributed among many? Presumably people will not be willing to say that this approaches a limit less than 10,000; otherwise we would torture a trillion people for 49 years rather than one person for 50.
So for the sake of definiteness, let 49 years of torture, repeatedly given to many, converge to a limit of 1,000,000 disutility units.
48 years of torture, let's say, might converge to 980,000 disutility units, or whatever.
Then since we can continuously decrease the pain until we reach the dust specks, there must be some pain that converges approximately to 10,000. Let's say that this is a stubbed toe.
Three possibilities: it converges exactly to 10,000, to less than 10,000, or to more than 10,000. If it converges to less, then if we choose another pain ever so...
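A minimal sketch of the scheme being described, using the illustrative limits from this comment (10 for specks, 10,000 for the 50-year torture) and an arbitrary exponential saturation model; nothing here is meant as anyone's actual utility function:
import math

def aggregate(per_instance, limit, n):
    """Total disutility of n instances of a harm whose aggregate badness
    saturates at `limit` (a simple exponential saturation model)."""
    return limit * (1 - math.exp(-per_instance * n / limit))

TORTURE_50Y = 10_000   # stipulated disutility of one 50-year torture
SPECK_LIMIT = 10       # stipulated ceiling for distributed dust specks

print(aggregate(1e-9, SPECK_LIMIT, 10**30))  # approaches 10; never reaches 10,000

# The awkward consequence the comment is driving at: some intermediate pain
# (the "stubbed toe") must have a ceiling sitting right around 10,000, so a
# barely perceptible change in the pain flips whether any number of instances
# can ever outweigh the single torture.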
It seems to me that the dust-specks example depends on the following being true: both dust-specks and 50 years of torture can be precisely quantified.
What is the justification for this belief? I find it hard to see any way of avoiding the conclusion that some harms may be compared, as in A < B (A=1 person/1 dustspeck, B=1 person/torture), but that does not imply that we can assign precise values to A and B and then determine how many A are equivalent to one B.
Why do some people believe that we can precisely say how much worse the torture of 1 individual...
Joseph,
The point of using 3^^^3 is to avoid the need to assign precise values, which I agree seems impossible to do with any confidence. Once you accept the premise that A is less than B (with both being finite and nonzero), you need to accept that there exists some number k where kA is greater than B. The objections have been that A=0, B is infinite, or the operation kA is not only nonlinear, but bounded. The first may be valid for specks but misses the point - just change it to "mild hangnail" or "banged funnybone." I cannot take ...
"Renormalizing intuition" - that sounds like making sure all the intuitions are consistent and proportional to each other. Which is analogous to a coherence theory of truth as against a correspondence one. But you can make something as internally consistent as you like and maybe it still bears no relation to reality. It is necessary to know where the intuitions came from and what they mean.
Ideas such as good and evil are abstract, and the mind of a newborn can't hold abstract ideas, only simple concretes. So those ideas can't have already been th...
Bob: "The point of using 3^^^3 is to avoid the need to assign precise values".
But then you are not facing up to the problems of your own ethical hypothesis. I insist that advocates of additive aggregation take seriously the problem of quantifying the exact ratio of badness between torture and speck-of-dust. The argument falls down if there is no such quantity, but how would you arrive at it, even in principle? I do not insist on an impersonally objective ratio of badness; we are talking about an idealized rational completion of one's personal pre...
Mitchell, if I say an average second of the torture is about equal to 10,000 distributed dust specks (notice I said "average second"; there is absolutely no claim that torture adds up linearly or anything like that), then something less than 20 trillion dust specks will be about equal to 50 years of torture. I would arrive at the ratio by some comparison of this sort, trying to guess how bad an average second is, and how many dust specks I would be willing to inflict to save a man from that amount of harm.
Notice that 3^^^3 is completely unnecessary here. That's why I said previously that N doesn't have to be particularly large.
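Making that arithmetic explicit (the 10,000-specks-per-average-second figure is the commenter's own illustrative guess; 365-day years assumed):
seconds_in_50_years = 50 * 365 * 24 * 3600    # 1,576,800,000
specks_per_torture_second = 10_000
print(seconds_in_50_years * specks_per_torture_second)  # ~1.6e13, i.e. under 20 trillion specks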
Thanks for the explanations, Bob.
Bob: The point of using 3^^^3 is to avoid the need to assign precise values... Once you accept the premise that A is less than B (with both being finite and nonzero), you need to accept that there exists some number k where kA is greater than B.
This still requires that they are commensurable, though, which is what I'm seeking a strong argument for. Saying that 3^^^3 dust specks in 3^^^3 eyes is a greater harm than 50 years of torture means that they are commensurable and that whatever the utilities are, 3^^^3 specks divided by 50 ...
Again, we return to the central issue:
Why must we accept an additive model of harm to be rational?
If you don't accept the additivity of harm, you accept that for any harm x, there is some number of people y where 2^y people suffering x harm is the same welfare wise as y people suffering x harm.
(Not to mention that when normalized across people, utils are meant to provide direct and simple mathematical comparisons. In this case, it doesn't really matter how the normalization occurs as the inequality stands for any epsilon of dust-speck harm greater than zero.)
Polling people to find if they will take a dust speck grants an external harm to the torture (e...
There are no natural utility differences that large. (Eliezer, re 3^^^3)
You've measured this with your utility meter, yes?
If you mean that it's not possible for there to be a utility difference that large, because the smallest possible utility shift is the size of a single particle moving a Planck distance, and the largest possible utility difference is the creation or destruction of the universe, and the scale between those two is smaller than 3^^^3 ... then you'll have to remind me again where all these 3^^^3 people that are getting dust specks in their ...
If you don't accept the additivity of harm, you accept that for any harm x, there is some number of people y where 2^y people suffering x harm is the same welfare wise as y people suffering x harm.
No. I can imagine non-additive harm evaluation systems where that is not the case.
Even in the limited subset of systems where that IS the case, so what?
Although the argument didn't depend on harm adding linearly in any case, it is true that two similar harms to two different people must be exactly twice as bad as one harm to one person.
Many people on this thread have already given the reason: how bad it is that someone is killed, or tortured, or dust specked, obviously does not depend on how many other people this has happened to. Otherwise death couldn't be such a bad thing, since it has already happened to almost everyone.
Sean, say you're one of the 3^^^3 voters. A vote one way potentially has torture on your conscience, as you say, but a vote the other way potentially has 3^^^3 dust specks on your conscience - by your definition a much greater sin. Square one - shut up and vote!
I am aware that a billion people getting punched in the face appears to aggregate to a greater harm than one person being tortured for 50 years. (I should say that when I ask my intuition what it thinks about the number 1,000,000,000, it says 'WTF?', so it's not coming from there....) However, if I ...
Ben, that's not about additivism, but indicates that you are a deontologist by nature, as everyone is. A better test: would you flip a lever which would stop the torture if everyone on earth would instantly be automatically punched in the face? I don't think I would.
Joseph et al, I appreciate your thoughts. I think, though, that your objections keep coming back to "it's more complicated." And in reality it would be. But the simple thought experiment suggests that any realistic derivative of the specks question would likely get answered wrong because our (OUR!) intuition is heavily biased toward large (in aggregate) distributed harm. It appears that we personalize individual harm but not group harm.
Ben, I assume that we would all vote that way, if only because the thought of having sentenced someone to to...
If politics is the mind killer, morality is at least the mind masher. We should probably only talk about morality in small doses, interspersed with many other topics our minds can more easily manage.
@Sean
If your utility function u were replaced by 3u, there would be no observable difference in your behavior. So which of these functions is declared real and goes on to the interpersonal summing? "The same factor for everyone" isn't an answer, because if u_you doesn't equal u_me, "the same factor" is simply meaningless.
@Tomhs2
A < B < C < D doesn't imply that there's some k such that kA > D
Yes it does.
I think you're letting the notation confuse you. It would imply that, if A, B, C, D were e.g. real numbers, and that is the...
Ben, that's not about additivism, but indicates that you are a deontologist by nature, as everyone is. A better test: would you flip a lever which would stop the torture if everyone on earth would instantly be automatically punched in the face? I don't think I would.
I'm fairly certain I would pull the lever. And I'm certain that if I had to watch the person be tortured (or do it myself!) I would happily pull the lever.
It was this sort of intuition that motivated my earlier question to Eliezer (which he still hasn't responded to). I'd be interested to hea...
@Unknown
So if everyone is a deontologist by nature, shouldn't a "normalization" of intuitions result in a deontological system of morals? If so, what makes you look for the right utilitarian system?
I think you're letting the notation confuse you. It would imply that, if A, B, C, D were e.g. real numbers, and that is the context the "<" sign is mostly used in. But orders can exist on sets other than sets of numbers. You can for example sort (order) the telephone book alphabetically, so that Cooper < Smith, and still there is no k so that k*Cooper > Smith.
This is fairly confusing...in the telephone book example, you haven't defined * as an operator. I frankly have no idea what you would mean by it. Using the notation kA > D implies a...
Eisegates, is there no limit to the number of people you would subject to a punch in the face (very painful but temporary with no risk of death) in order to avoid the torture of one person? What if you personally had to do (at least some of) the punching? I agree that I might not be willing to personally commit the torture despite the terrible (aggregate) harm my refusal would bring, but I'm not proud of that fact - it seems selfish to me. And extrapolating your position seems to justify pretty terrible acts. It seems to me that the punch is equivalent to ...
Please let me interrupt this discussion on utilitarianism/humanism with an alternative perspective.
I do not claim to know what the meaning of life is, but I can rule certain answers out. For example, I am highly certain that it is not to maximize the number of paperclips in my vicinity.
I also believe it has nothing to do with how much pain or pleasure the humans experience -- or in fact anything to do with the humans.
More broadly, I believe that although perhaps intelligent or ethical agents are somehow integral to the meaning of life, they are integral f...
I think claims like "exactly twice as bad" are ill-defined.
Suppose you have some preference relation on possible states R, so that X is preferred to Y if and only if R(X, Y) holds. Next, suppose we have a utility function U, such that if R(X, Y) holds, then U(X) > U(Y). Now, take any monotone transformation of this utility function. For example, we can take the exponential of U, and define U'(X) = 2^(U(X)). Now, note that U(X) > U(Y) if and only if that U'(X) > U'(Y). Now, even if U is additive along some dimension of X, U' won't be.
Bu...
Krishnaswami: I think claims like "exactly twice as bad" are ill-defined. Suppose you have some preference relation on possible states R, so that X is preferred to Y if and only if R(X, Y) holds. Next, suppose we have a utility function U, such that if R(X, Y) holds, then U(X) > U(Y). Now, take any monotone transformation of this utility function. For example, we can take the exponential of U, and define U'(X) = 2^(U(X)). Now, note that U(X) > U(Y) if and only if that U'(X) > U'(Y). Now, even if U is additive along some dimension of X,...
"Polling people to find if they will take a dust speck grants an external harm to the torture (e.g., mental distress at the thought of someone being tortured)."
This is what remains invariant under a positive affine transformation.
(I haven't heard this pointed out anywhere, come to think, but surely it must have been observed before.)
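A quick check of that claim, writing the transformed function as $U' = aU + b$ with $a > 0$ (a sketch of standard expected-utility bookkeeping, not anything specific to this thread):
\[ \frac{U'(w) - U'(x)}{U'(y) - U'(z)} = \frac{a\,U(w) - a\,U(x)}{a\,U(y) - a\,U(z)} = \frac{U(w) - U(x)}{U(y) - U(z)}, \]
so ratios of utility intervals survive any positive affine transformation, which is what makes "twice as bad, relative to the status quo" meaningful even though the zero point and the scale are not.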
Didn't Marcello point that out to you a couple years ago?
I think I've found one of the factors (besides scope insensitivity) involved in the intuitive choice: in real life, a small amount of harm inflicted n times on one person has negative side-effects which don't happen when you inflict it once on n persons. Even though there aren't any in this thought experiment, we are so used to it we probably take it into account (at least I did).
Peter, I'm not sure what the chain of causality was there. (Let me know if I've previously written it down.) I think you or Nick Hay said that utility functions obey positive affine transformations, Marcello said that preserved the ratios of intervals, and I sketched out the interpretation for optimization problems.
I just meant that I haven't seen it elsewhere in the Literature. You're right, I should have credited the Summer of AI group.
Eisegetes, would you pull the lever if it would stop someone from being tortured for 50 years, but inflict one day of torture on each human being in the world? And if so, how about one year? or 10 years, or 25? In other words, the same problem arises as with the specks. Perhaps you can defend one punch per human being, but there must be some number of human beings for whom one punch each would outweigh torture.
Salutator, I never said utilitarianism is completely true.
Also: I wonder if Robin Hanson's comment shows concern about the lack of comments on his posts?
Hmmm... What can we actually agree on?
The disutility of a pain is a function of the Number of people who experience the pain, the Intensity of the pain, and the Time the pain lasts. It is also an increasing function of all three: all else being equal, a pain experienced by more people is worse than one experienced by fewer people, a more intense pain is worse than a less intense pain, and a longer pain is worse than a shorter one. Or, more formally,
U = f(N,I,T)
∂U/∂N > 0 (for I, T > 0)
∂U/∂I > 0 (for N, T > 0)
∂U/∂T > 0 (for N, I > 0)...
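For concreteness only (an illustration, not a function Doug endorses): the simplest function satisfying all three conditions is the fully linear one, which is exactly the additivity the later comments dispute:
\[ U = k\,N\,I\,T \quad (k > 0), \qquad \frac{\partial U}{\partial N} = kIT > 0,\quad \frac{\partial U}{\partial I} = kNT > 0,\quad \frac{\partial U}{\partial T} = kNI > 0. \]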
Doug, I do not agree because my utility function depends on the identity of the people involved, not simply on N. Specifically, it might be possible for an agent to become confident that Bob is much more useful to whatever is the real meaning of life than Charlie is, in which case a harm to Bob has greater disutility in my system than a harm to Charlie. In other words, I do not consider egalitarianism to be a moral principle that applies to every situation without exception. So, for me U is not a function of (N,I,T)
There seems to be an unexamined assumption here.
Why should the moral weight of applying a specified harm to someone be independent of who it is?
When making moral decisions, I tend to weight effects on my friends and family most heavily, then acquaintances, then fellow Americans, and so on. I value random strangers to some extent, but this is based more on arguments about the small size of the planet than true concern for their welfare.
I claim that moral obligations must be reciprocal in order to exist. Altruism is never mandatory.
None of Eliezer's 3^^...
1: First of all, I want to acknowledge my belief that Eliezer's thought experiment is indeed useful, although it is "worse" than hypothetical. This is because it forces us to either face our psychological limitations when it comes to moral intuitions, or succumb to them (by arguing that the thought experiment is fundamentally unsound, in order to preserve harmony among our contradictory intuitions).
2: Once we admit that our patchwork'o'rules'o'thumb moral intuitions are indeed contradictory, the question remains whether he is actually right. In anot...
Frank, re: #2: One can also believe option 4: that pleasure and pain have some moral significance, but do not perfectly determine moral outcomes. That is not necessarily irrational, it is not amoral, and it is not utilitarian. Indeed, I would posit that it represents the primary strand of all moral thinking and intuitions, so it is strange that it wasn't on your list.
Unknown: 10 years and I would leave the lever alone, no doubt. 1 day is a very hard call; probably I would pull the lever. Most of us could get over 1 day of torture in a way that is fundamentally different from years of torture, after all.
Perhaps you can defend one punch per human being, but there must be some number of human beings for whom one punch each would outweigh torture.
As I said, I don't have that intuition. A punch is a fairly trivial harm. I doubt I would ever feel worse about a lot of people (even 5^^^^^^5) getting punched than about a ...
Eisegetes: I admit your fourth option did not even enter my mind. I'll try (in a rather ad-hoc way) to dispute this on the grounds of computationalism. To be able to impose an order on conflicting options, it must be possible to reduce the combined expected outcomes (pleasure, displeasure, whatever else) to a single scalar value. Even if they are in some way lexically ordered, we can do this by projecting the lexical options onto non-intersecting intervals. Everything that is morally significant does, by virtue of the definition, enter into this calculus. Everything that doesn't, isn't.
If you feel this does not apply, please help me by elaborating your objection.
@Eisegates
Yes, I was operating on the implicit convention that true statements must be meaningful, so I could also say there is no k such that I have exactly k quobbelwocks.
The nonexistence of a *-operator (and of a +-operator) is actually the point. I don't think preferences of different persons can be meaningfully combined, and that includes that {possible world-states} or {possible actions} don't, in your formulation, contain the sort of objects to which our everyday understanding of multiplication normally applies. Now if you insist on an intuitiv...
Eliezer:
No, Nick Hay and I were not involved at all. You mentioned this to us as something you and Marcello had discussed before the Summer of AI.
Frank, I think a utility function like that is a mathematical abstraction, and nothing more. People do not, in fact, have scalar-ordered ranked preferences across every possible hypothetical outcome. They are essentially indifferent between a wide range of choices. And anyway, I'm not sure that there is sufficient agreement among moral agents to permit the useful aggregation of their varied, and sometimes conflicting, notions of what is preferable into a single useful metric. And even if we could do that, I'm not sure that such a function would correspond ...
Salutator: thanks for clarifying. I would tend to think that physical facts like neural firings can be quite easily multiplied. I think the problem has less to do with the multiplying, than with the assumption that the number of neural firings is constitutive of wrongness.
Eliezer: So when I say that two punches to two faces are twice as bad as one punch, I mean that if I would be willing to trade off the distance from the status quo to one punch in the face against a billionth (probability) of the distance between the status quo and one person being tortured for one week, then I would be willing to trade off the distance from the status quo to two people being punched in the face against a two-billionths probability of one person being tortured for one week.
So alternatives that have twice the probability of some good thing ...
Eisegetes: This is my third posting now, and I hope I will be forgiven by the powers that be...
Your (a): I was not talking about a universal, but of a personal scalar ordering. Somewhere inside everybody's brain there must be a mechanism that decides which of the considered options wins the competition for "most moral option of the moment". Once the existence of this (personal) ordering is acknowledged (rationality), we can either disavow it (amorality) or try our best with what we have [always keeping in mind that the mechanisms at work are imp...
me: A < B < C < D doesn't imply that there's some k such that kA>D
Tomh: Yes it does.
As Salutator stated, perhaps I should not have used the notation I did in my example. What I mean by '<' in the context of harms is "is preferred to". What I meant when I said that there was no k such that kA > D is that the notion of multiplication does not make sense when applied to "is preferred to". Apologies for the confusion.
It looks to me like Eliezer plans to put humanism at the center of the intelligence explosion. I think that is a bad idea. I am horrified. I am appalled.
I wouldn't worry about it if I were you. One of the worst cases of yang excess I've ever seen.
Are you familiar with the concept of a Monkey Trap?
When I wrote U(N,I,T), I was trying to refer to the preferences of the person being presented with the scenario; if the person being asked the question were a wicked sadist, he might prefer more suffering to less suffering. Specifically, I was trying to come up with a "least common denominator" list of relevant factors that can matter in this kind of scenario. Apparently "how close I am to the person who suffers the pain" is another significant factor in the preferences, at least for Richard.
If we stipulate that, say, the pain is to be e...
The answer is simple. If you accept the bounds of the dust-speck argument where there is no further consequence of the dust-speck beyond the moment of irritation, then the cost of the irritation cannot be distinguished from 0 cost. If I can be assured that an event will have no negative consequences in my life beyond the quality of a moment of experience, then I wouldn't even think that the event is worth my consideration. Utility = 0. Multiply any number by 0, and you get 0. The only way for the dust-speck to have negative utility is if it has some sort of impact on my life beyond that moment. The dust-speck argument can't work without violating its own assumptions. Torture is worse. Case closed.
Adam, by that argument the torture is worth 0 as well, since after 1,000,000 years, no one will remember the torture or any of its consequences. So you should be entirely indifferent between the two, since each is worth zero.
But I guess the utility could be considered to be non-0 and without further impact if some individual would choose for it not to happen to them. All else being equal, I would rather not have my eye irritated (even if I had no further consequences). And even if cost is super-astronomically small, Eliezer could think up a super-duper astronomically large number by which it could be multiplied. I guess he was right.
I'm confused.
I think I'm done.
Richard Hollerith: "It looks to me like Eliezer plans to put humanism at the center of the intelligence explosion."
"Renormalized" humanism, perhaps; the outcome of which need not be anthropocentric in any way. You are a human being, and you have come up with some non-anthropocentric value system for yourself. This more or less demonstrates that you can start with a human utility function and still produce such an outcome. But there is no point in trying to completely ditch human-specific preferences before doing anything else; if you did that, you wouldn't even be able to reject paperclip maximization.
But you've changed the question.
I've added a wildcard, certainly, but I haven't changed the game. Say I'm standing there, lever in hand. While I can't be certain, I can fairly safely assume that if I went person to person and asked, the vast majority of those 3^^^3 would be personally willing to suffer a dust speck to save one person's torture. So I'm not necessarily polling, I'm just conjecturing. With this in mind, I choose specks.
[If I were to poll people, every now and then I would probably come across a Cold Hard Rationalist who said, "well, I'm ...
Ben: suppose the lever has a continuous scale of values between 1 and 3^^^3. When the lever is set to 1, 1 person is being tortured (and the torture will last for 50 years). If you set it to 2, two people will be tortured by an amount less than the first person by 1/3^^^3 of the difference between the 50 years and a dust speck. If you set it to 3, three people will be tortured by an amount less than the first person by 2/3^^^3 of the difference between the 50 years and the dust speck. Naturally, if you pull the lever all the way to 3^^^3, that number of people...
Unknown, that's a very interesting take indeed, and a good argument for Eliezer's proposition, but it doesn't say much about what to do if you can assume most of the 3^^^3 would ask for dust. Can you tell me what you would do purely in the context of my previous post?
If you set it to 2, two people will be tortured by an amount less than the first person by 1/3^^^3 of the difference between the 50 years and a dust speck.
Of course not, this would be a no-brainer ratio for the lever to operate with. You should have said that position 2 on the lever tortures 2 peo...
To your voting scenario: I vote to torture the terrorist who proposes this choice to everyone. In other words, asking each one personally, "Would you rather be dust specked or have someone randomly tortured?" would be much like a terrorist demanding $1 per person (from the whole world), otherwise he will kill someone. In this case, of course, one would kill the terrorist.
I'm still thinking about the best way to set up the lever to make the point the most obvious.
What if everyone would be willing to individually suffer 10 years of torture to spare the one person? Obviously it's not better to torture 3^^^3 people for 10 years than one person for 50 years.
Obviously it's not better to torture 3^^^3 people for 10 years than one person for 50 years.
Obviously? There's that word again.
If it's really so obvious, please explain and elaborate on why it's not better.
Ben, here's my new and improved lever. It has 7,625,597,484,987 settings. On setting 1, 1 person is tortured for 50 years plus the pain of one dust speck. On setting 2, 3 persons are tortured for 50 years minus the pain of (50-year torture/7,625,597,484,987), i.e. they are tortured for a minute fraction of a second less than 50 years, again plus the pain of one dust speck. On setting 3, 3^3 persons, i.e. 27 persons, are tortured for 50 years minus two such fractions of a second, plus the pain of one dust speck. On setting 4, 3^27, i.e. 7,625,597,484,987 pe...
Your (a): I was not talking about a universal, but of a personal scalar ordering. Somewhere inside everybody's brain there must be a mechanism that decides which of the considered options wins the competition for "most moral option of the moment".
That's a common utilitarian assumption/axiom, but I'm not sure it's true. I think for most people, analysis stops at "this action is not wrong," and potential actions are not ranked much beyond that. Thus, most people would not say that one is behaving immorally by volunteering at a soup kitc...
It has 7,625,597,484,987 settings. On setting 1, 1 person is tortured for 50 years plus the pain of one dust speck. On setting 2, 3 persons are tortured for 50 years minus the pain of (50-year torture/7,625,597,484,987), i.e. they are tortured for a minute fraction of a second less than 50 years, again plus the pain of one dust speck. On setting 3, 3^3 persons, i.e. 27 persons, are tortured for 50 years minus two such fractions of a second, plus the pain of one dust speck. On setting 4, 3^27, i.e. 7,625,597,484,987 persons are tortured for 50 years minus 3...
Btw, I got the 0.0002 constant by finding the number of seconds in 50 years and dividing by 7,625,597,484,987 (assuming 365 days per year). It's rounded. The actual number is around 0.00020678.
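Recomputing the constant he describes (365-day years, as stated; the setting count is 3^27):
seconds_in_50_years = 50 * 365 * 24 * 3600   # 1,576,800,000
settings = 7_625_597_484_987                 # 3^27
print(seconds_in_50_years / settings)        # ~0.00020678 seconds shaved off per step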
Ben:
"but a vote the other way potentially has 3^^^3 dust specks on your conscience - by your definition a much greater sin. Square one - shut up and vote!"
When presented with voting, each of the 3^^^3-1 people favored the dust specks (and their larger natural harm) to the torture (and its larger aggregated "mental distress"). The mental distress exists only on the basis of "sacred values". To say that in the face of 3^^^3-1 people preferring specks to torture, you should vote torture on the naive utility construction (no exte...
Ben P: the arrangement of the scale is meant to show that the further you move the lever toward 3^^^3 dust specks, the worse things get. The torture decreases linearly simply because there's no reason to decrease it by more; the number of people increases in the way that it does because of the nature of 3^^^3 (i.e. the number is large enough to allow for this). The more we can increase it at each stop, the more obvious it is that we shouldn't move the lever at all, but rather we should leave it at torturing 1 person 50 years.
The torture decreases linearly simply because there's no reason to decrease it by more; the number of people increases in the way that it does because of the nature of 3^^^3 (i.e. the number is large enough to allow for this)
I don't see how that follows. Even the progression from the first setting to the second setting seems arbitrary. You've established a progression from one scenario (torturing a person for 50 years) to another (3^^^3 dust specks) but to me it just seems like one possible progression. I see no reason to set up the intermediate stages lik...
My own anti-preference function seems to have a form something like this:
U(N,I,T) = kI(1 - e^(-NT/a))
where a and k are constants with appropriate units.
Relevant "intuitions" not listed before:
1) For the purposes of this thought experiment, who suffers a pain doesn't matter. Therefore:
1a) Transferring an instant of pain from one person to another, without changing the (subjective) intensity of the pain, doesn't change the "badness" of the situation. Two people suffering torture for 25 years simultaneously equals one person suffering 2...
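A minimal sketch of the anti-preference function above, with illustrative constants (a = 100 person-years and k = 1 are assumptions picked to match the numbers that appear in the following comments, not values Doug states):
import math

A = 100.0  # assumed constant a, in person-years
K = 1.0    # assumed constant k

def disutility(n_people, intensity, years_each):
    # U(N, I, T) = k * I * (1 - e^(-N*T/a)): bounded above by k * I.
    return K * intensity * (1 - math.exp(-n_people * years_each / A))

print(disutility(1, 1.0, 50))           # one person, intensity 1, 50 years -> ~0.393
print(disutility(10**12, 0.2, 0.0001))  # huge N at low intensity -> capped near 0.2
With these constants, a single 50-year torture at intensity 1 comes out to about 0.393, which appears to be where the 0.393/0.394 figures in the next few comments come from.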
Naturally the T(s) function I posted earlier was wrong. It should have been T(s) = 1,576,800,000 - 0.0002(s-1). However, that doesn't change my question.
There is yet another angle on this dilemma which hasn't been raised yet. How bad is the outcome you are willing to prefer, in order to avoid those 3^^^3 dust specks? Are you willing to have the torture victim killed after the 50 years? How about all life on Earth? How about all life in the visible universe? I presume that truly convinced additivists will say yes in every case, because they "know" that 3^^^3 dust specks would still be incomprehensibly worse.
Notice that in Doug's function, suffering with intensity less than 0.393 can never add up to 50 years of torture, even when multiplied infinitely, while suffering of 0.394 will be worse than torture if it is sufficiently multiplied. So there is some number of 0.394 intensity pains such that no number of 0.393 intensity pains can ever be worse, despite the fact that these pains differ by 0.001, stipulated by Doug to be the pain of a dust speck. This is the conclusion that I pointed out follows with mathematical necessity from the position of those who prefer the specks.
Doug, do you actually accept this conclusion (about the 0.393 and 0.394 pains), or you just trying to show that the position is not logically impossible?
Yes, mitchell porter, of course there is no method (so far) (that we know of) for moral perception or moral action that does not rely on the human mind. But that does not refute my point, which again is as follows: most of the readers of these words seem to believe that the maximization of happiness or pleasure and the minimization of pain is the ultimate good. Now when you combine that belief with egalitarianism, which can be described as the belief that you yourself have no special moral value relative to any other human, and neither do kings or movie ...
Unknown, I'll bite. While you do point out some extremely counterintuitive consequences of positing that harms aggregate to an asymptote, accepting the dust specks as being worse than the torture is also extremely counterintuitive to most people.
For the moment, I accept the asymptote position, including the mathematical necessity you've pointed out.
So far this discussion has focused on harm to persons. But there are other forms of utility and disutility. Here's the intuition pump I used on myself: the person concept is not so atomic as to resist quantificati...
So there is some number of 0.394 intensity pains such that no number of 0.393 intensity pains can ever be worse, despite the fact that these pains differ by 0.001, stipulated by Doug to be the pain of a dust speck.
Let's see just what that number is...
0.394(1 - e^(-NT/100 person-years)) > 0.393
1 - e^(-NT/100 person-years) > 0.9975
e^(-NT/100 person-years) < 0.002538
-NT/100 person-years < -5.976
NT > 597.6 person-years
In terms of the constants, it comes out to NT > -a*ln(1-I1/I2), where I1 is the lesser pain and I2 is the greater pain. This does st...
Richard, my understanding is that CEV is not democracy, not by design anyway. Think of any individual human being as a combination of some species-universal traits and some contingent individual properties. CEV, I would think, is about taking the preference-relevant cognitive universals and extrapolating an ideal moral agent relative to those. The contingent idiosyncrasies or limitations of particular human beings should not be a factor.
At your website, you propose that "objective reality" is the locus of intrinsic value, sentient beings have on...
Z.M. Davis, that's an interesting point about the slugs, I might get to it later. However, I suspect it has little to do with the torture and dust specks.
Doug, here's another problem for your proposed function: according to your function, it doesn't matter whether a single person takes all the pain or if it is distributed, as long as it sums to the same amount according to your function.
So let's suppose that the pain of solitary confinement without anything interesting to do can never add up to the pain of 50 years torture. According to this, would you hon...
Utility doesn't aggregate. Neither do human lives. You don't use 4, you have to use 1+1+1+1. If you aggregate human lives, you get diminishing marginal value for human life. Government does it. The military does it. You send a squad on a suicide mission to save the division. À la guerre comme à la guerre. So I agree with Jadagul. Preference is a tricky subject, in which there is always marginal utility.
But since you used the economic term of utility, here is a simple economic question about aggregate utility:
You are the Government. You need to raise $1 million for, le...
Unknown, I think the slugs are relevant. I should think most of us would agree that all other things being equal, a world with less pain is better than one with more, and a world with more intelligent life is better than one with less.
Defenders of SPECKS argue that the quality of pain absolutely matters: that the pain of no amount of dust specks could add up to that of torture. To do this, they must accept the awkward position that the badness of an experience partially depends on how many other people have suffered it. Defenders of TORTURE say, "Shut...
Still haven't heard from even one proponent of TORTURE who would be willing to pick up the blowtorch themselves. Kind of casts doubt on the degree to which you really believe what you are asserting.
I mean, perhaps it is the case that although picking up the blowtorch is ethically obligatory, you are too squeamish to do what is required. But that should be overrideable by a strong enough ethical imperative. (I don't know if I would pick up the blowtorch to save the life of one stranger, for instance, but I would feel compelled to do it to save the popu...
About the slugs, there is nothing strange in asserting that the utility of the existence of something depends partly on what else exists. Consider chapters in a book: one chapter might be useless without the others, and one chapter repeated several times would actually add disutility.
So I agree that a world with human beings in it is better than one with only slugs: but this says nothing about the torture and dust specks.
Eisegetes, we had that discussion previously in regard to the difference between comparing actions and comparing outcomes. I am fairly su...
Unknown,
"So given an asymptote utility function (which I don't accept), it shouldn't matter if one more person is tortured for 50 years."
With such an asymptotic utility function your calculations will be dominated by the possible worlds in which there are few other beings.
I also see no explanation as to why knowledge of objective reality is of any value, even derivative; objective reality is there, and is what it is, regardless of whether it's known or not.
You and I can influence the future course of objective reality, or at least that is what I want you to assume. Why should you assume it, you ask? For the same reason you should assume that reality has a compact algorithmic description (an assumption we might call Occam's Razor): no one knows how to be rational without assuming it: in other words, it is an inductive bias...
Unknown, it seems like what you are doing is making a distinction between a particular action being obligatory -- you do not feel like you "ought" to torture someone -- and its outcome being preferable -- you feel like it would be better, all other things being equal, if you did torture the person.
Is that correct? If it isn't, I have trouble seeing why the g64 variant of the problem wouldn't overcome your hesitation to torture. Or are you simply stating a deontological side-constraint -- I will never torture, period, not even to save the lives ...
So let's suppose that the pain of solitary confinement without anything interesting to do can never add up to the pain of 50 years torture. According to this, would you honestly choose to suffer the solitary confinement for 3^^^3 years, rather than the 50 years torture?
You've already defined the answer; "the pain of solitary confinement without anything interesting to do can never add up to the pain of 50 years torture." If that's so, then shouldn't I say yes?
To some extent, my preferences do tell me to work on a "minimize the worst pain ...
I understand that choosing specks theoretically leads to an overall decrease in happiness in the universe. One (irrational, given my previous conclusion) thought, however, always seems to dominate my interior monologue about specks vs. torture - if someone were to ask me whether or not I would take a dust speck in the eye to save someone from 50 years of torture, I would do it (as I would expect most people to). I realize that I would have to take 3^^^3 dust specks for the problem to match the original question (and I wouldn't be willing to get 3^^^3 du...
Phil, a sufficiently altruistic person would accept 25 years of torture to spare someone else 50, but that doesn't mean it's better to torture 3^^^3 people for 25 years (even if they're all willing) than one person for 50 years.
If you call a utilitarian's utility function T, then you can pick the dust specks over torture if your utility function is -T.
I'm taking the discussion with Richard to email; if it issues in anything I suppose it will end up on his website.
Eisegetes (please excuse the delay):
That's a common utilitarian assumption/axiom, but I'm not sure it's true. I think for most people, analysis stops at "this action is not wrong," and potential actions are not ranked much beyond that. [...] Thus, it is simply wrong to say that we have ordered preferences over all of those possible actions -- in fact, it would be impossible to have a unique brain state correspond to all possibilities. And remember -- we are dealing here not with all possible brain states, but with all possible states of the porti...
That's a confusion. I was explicitly talking of "moral" circuits.
Well, that presupposes that we have some ability to distinguish between moral circuits and other circuits. To do that, you need some criterion for what morality consists in other than evolutionary imperatives, because all brain connections are at least partially caused by evolution. Ask yourself: what decision procedure would I articulate to justify to Eisegetes that the circuits responsible for regulating blinking, for creating feelings of hunger, or for giving rise to sexual desire are,...
Eisegetes:
Well I (or you?) really maneuvered me into a tight spot here.
About those options, you made a good point.
To the question "Which circuits are moral?", I kind of saw that one coming. If you allow me to mirror it: How do you know which decisions involve moral judgements?
I don't know of any satisfying definition of morality. It probably must involve actions that are tailored neither to personal nor to inclusive fitness. I suppose the best I can come up with is "A moral action is one which you choose (== that makes you feel good) with...
"'A moral action is one which you choose (== that makes you feel good) without being likely to benefit your genes.'"
So using birth control is an inherently moral act? Overeating sweet and fatty foods to the point of damaging your health is an inherently moral act? Please. "Adaptation-executers," &c.
ZMD:
C'mon gimme a break, I said it's not satisfying!
I get your point, but I dare you to come up with a meaningful but unassailable one-line definition of morality yourself!
BTW birth control certainly IS moral, and overeating is just overdoing a beneficial adaptation (i.e. eating).
To the question "Which circuits are moral?", I kind of saw that one coming. If you allow me to mirror it: How do you know which decisions involve moral judgements?
Well, I would ask whether the decision in question is one that people (including me) normally refer to as a moral decision. "Moral" is a category of meaning whose content we determine through social negotiations, produced by some combination of each person's inner shame/disgust/disapproval registers, and the views and attitudes expressed more generally throughout their societ...
Eisegetes:
"Moral" is a category of meaning whose content we determine through social negotiations, produced by some combination of each person's inner shame/disgust/disapproval registers, and the views and attitudes expressed more generally throughout their society.
From a practical POV, without any ambitions to look under the hood, we can just draw this "ordinary language defense line", as I'd call it. Where it gets interesting from an Evolutionary Psychology POV is exactly those "inner shame/disgust/disapproval registers". Th...
Using a number big enough not to do the math is just a way of assigning 1 under any other name.
Among other things, if you try to violate "utilitarianism", you run into paradoxes, contradictions, circular preferences, and other things that aren't symptoms of moral wrongness so much as moral incoherence.
It seems to be an unsubstantiated slur on other moral systems :-(
I notice I'm confused here. The morality is a computation. And my computation, when given the TORTURE vs SPECKS problem as input, unambiguously computes SPECKS. If probed about reasons and justifications, it mentions things like "it's unfair to the tortured person", "specks are negligible", "the 3^^^3 people would prefer to get a SPECK than to let the person be tortured if I could ask them", etc.
There is an opposite voice in the mix, saying "but if you multiply, then...", but it is overwhelmingly weaker.
I assume, sin...
...Gowder did not say what he meant by "utilitarianism". Does utilitarianism say...
- That right actions are strictly determined by good consequences?
- That praiseworthy actions depend on justifiable expectations of good consequences?
- That probabilities of consequences should normatively be discounted by their probability, so that a 50% probability of something bad should weigh exactly half as much in our tradeoffs?
- That virtuous actions always correspond to maximizing expected utility under some utility function?
- That two harmful events are worse t
A link to Gowder's argument would be a good thing to have here. Never mind, I found it.
Some of what you're saying here makes me think that the post about Nature vs. Nature (that might not be the exact title but it was something similar) would be more relevant to his argument. He might be contending that you're trying to use intuitions which presume utilitarianism to justify utilitarianism, but you're ignoring other intuitions such as scope insensitivity. Scope insensitivity is only a problem if we presume utilitarianism correct. If we presume scope insensi...
I believe that the vast majority of people in the dust speck thought experiment would be very willing to endure the collision of the dust speck, if only to play a small role in saving a man from 50 years of torture. I would choose the dust specks on the behalf of those hurt by the dust specks, as I can be very close to certain that most of them would consent to it.
A counterargument might be that, since 3^^^3 is such a vast number, the collective pain of the small fraction of people who would not consent to the dust speck still multiplies to be far larger t...
"It is more important that lives be saved, than that we conform to any particular ritual in saving them" is a major moral rule by itself, directly contradicted by, I believe, many if not most religions claiming to be sources of morality. "It does not matter that you saved more lives if you prayed to different gods/did not pray enough to ours" seems to be quite a repeating idea (also with gods replaced by political systems - advocates of Leninism tend to claim that capitalism is immoral despite having no Golodomor in its actual history).
Among other things, if you try to violate “utilitarianism”, you run into paradoxes, contradictions, circular preferences, and other things that aren’t symptoms of moral wrongness so much as moral incoherence.
Nobody seems to have problems with circular preferences in practice, probably because people's preferences aren't precise enough. So we don't have to adopt utilitarianism to fix this non-problem.
But you don’t conclude that there are actually two tiers of utility with lexical ordering. You don’t conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from 0 to infinity. You don’t conclude that utilities must be expressed using hyper-real numbers. Because the lower tier would simply vanish in any equation. It would never be worth the tiniest effort to recalculate for it. All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority
People aren't going to be doing ethical calculations using hyperreal numbers, and they aren't going to be doing them with real numbers eithe...
...I don't say that morality should always be simple. I've already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up. I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination. And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize - that the valuation of this one event is more comple
(Still no Internet access. Hopefully they manage to repair the DSL today.)
I haven't said much about metaethics - the nature of morality - because that has a forward dependency on a discussion of the Mind Projection Fallacy that I haven't gotten to yet. I used to be very confused about metaethics. After my confusion finally cleared up, I did a postmortem on my previous thoughts. I found that my object-level moral reasoning had been valuable and my meta-level moral reasoning had been worse than useless. And this appears to be a general syndrome - people do much better when discussing whether torture is good or bad than when they discuss the meaning of "good" and "bad". Thus, I deem it prudent to keep moral discussions on the object level wherever I possibly can.
Occasionally people object to any discussion of morality on the grounds that morality doesn't exist, and in lieu of jumping over the forward dependency to explain that "exist" is not the right term to use here, I generally say, "But what do you do anyway?" and take the discussion back down to the object level.
Paul Gowder, though, has pointed out that both the idea of choosing a googolplex dust specks in a googolplex eyes over 50 years of torture for one person, and the idea of "utilitarianism", depend on "intuition". He says I've argued that the two are not compatible, but charges me with failing to argue for the utilitarian intuitions that I appeal to.
Now "intuition" is not how I would describe the computations that underlie human morality and distinguish us, as moralists, from an ideal philosopher of perfect emptiness and/or a rock. But I am okay with using the word "intuition" as a term of art, bearing in mind that "intuition" in this sense is not to be contrasted to reason, but is, rather, the cognitive building block out of which both long verbal arguments and fast perceptual arguments are constructed.
I see the project of morality as a project of renormalizing intuition. We have intuitions about things that seem desirable or undesirable, intuitions about actions that are right or wrong, intuitions about how to resolve conflicting intuitions, intuitions about how to systematize specific intuitions into general principles.
Delete all the intuitions, and you aren't left with an ideal philosopher of perfect emptiness, you're left with a rock.
Keep all your specific intuitions and refuse to build upon the reflective ones, and you aren't left with an ideal philosopher of perfect spontaneity and genuineness, you're left with a grunting caveperson running in circles, due to cyclical preferences and similar inconsistencies.
"Intuition", as a term of art, is not a curse word when it comes to morality - there is nothing else to argue from. Even modus ponens is an "intuition" in this sense - it's just that modus ponens still seems like a good idea after being formalized, reflected on, extrapolated out to see if it has sensible consequences, etcetera.
So that is "intuition".
However, Gowder did not say what he meant by "utilitarianism". Does utilitarianism say...
If you say that I advocate something, or that my argument depends on something, and that it is wrong, do please specify what this thingy is... anyway, I accept 3, 5, 6, and 7, but not 4; I am not sure about the phrasing of 1; and 2 is true, I guess, but phrased in a rather solipsistic and selfish fashion: you should not worry about being praiseworthy.
Now, what are the "intuitions" upon which my "utilitarianism" depends?
This is a deepish sort of topic, but I'll take a quick stab at it.
First of all, it's not just that someone presented me with a list of statements like those above, and I decided which ones sounded "intuitive". Among other things, if you try to violate "utilitarianism", you run into paradoxes, contradictions, circular preferences, and other things that aren't symptoms of moral wrongness so much as moral incoherence.
After you think about moral problems for a while, and also find new truths about the world, and even discover disturbing facts about how you yourself work, you often end up with different moral opinions than when you started out. This does not quite define moral progress, but it is how we experience moral progress.
As part of my experienced moral progress, I've drawn a conceptual separation between questions of type Where should we go? and questions of type How should we get there? (Could that be what Gowder means by saying I'm "utilitarian"?)
The question of where a road goes - where it leads - you can answer by traveling the road and finding out. If you have a false belief about where the road leads, this falsity can be destroyed by the truth in a very direct and straightforward manner.
When it comes to wanting to go to a particular place, this want is not entirely immune from the destructive powers of truth. You could go there and find that you regret it afterward (which does not define moral error, but is how we experience moral error).
But, even so, wanting to be in a particular place seems worth distinguishing from wanting to take a particular road to a particular place.
Our intuitions about where to go are arguable enough, but our intuitions about how to get there are frankly messed up. After the two hundred and eighty-seventh research study showing that people will chop their own feet off if you frame the problem the wrong way, you start to distrust first impressions.
When you've read enough research on scope insensitivity - people will pay only 28% more to protect all 57 wilderness areas in Ontario than one area, people will pay the same amount to save 50,000 lives as 5,000 lives... that sort of thing...
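(For calibration: a valuation that scaled linearly with scope - the normative baseline I'll defend in a moment - would price 57 areas at 57 times one area, not 1.28 times, and 50,000 lives at 10 times 5,000 lives, not the same amount.)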
Well, the worst case of scope insensitivity I've ever heard of was described here by Slovic:
There's other research along similar lines, but I'm just presenting one example, 'cause, y'know, eight examples would probably have less impact.
If you know the general experimental paradigm, then the reason for the above behavior is pretty obvious - focusing your attention on a single child creates more emotional arousal than trying to distribute attention around eight children simultaneously. So people are willing to pay more to help one child than to help eight.
Now, you could look at this intuition, and think it was revealing some kind of incredibly deep moral truth which shows that one child's good fortune is somehow devalued by the other children's good fortune.
But what about the billions of other children in the world? Why isn't it a bad idea to help this one child, when that causes the value of all the other children to go down? How can it be significantly better to have 1,329,342,410 happy children than 1,329,342,409, but then somewhat worse to have seven more at 1,329,342,417?
Or you could look at that and say: "The intuition is wrong: the brain can't successfully multiply by eight and get a larger quantity than it started with. But it ought to, normatively speaking."
And once you realize that the brain can't multiply by eight, then the other cases of scope neglect stop seeming to reveal some fundamental truth about 50,000 lives being worth just the same effort as 5,000 lives, or whatever. You don't get the impression you're looking at the revelation of a deep moral truth about nonagglomerative utilities. It's just that the brain doesn't goddamn multiply. Quantities get thrown out the window.
If you have $100 to spend, and you spend $20 on each of 5 efforts to save 5,000 lives, you will do worse than if you spend $100 on a single effort to save 50,000 lives. Likewise if such choices are made by 10 different people, rather than the same person. As soon as you start believing that it is better to save 50,000 lives than 25,000 lives, that simple preference of final destinations has implications for the choice of paths, when you consider five different events that save 5,000 lives.
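(The arithmetic, assuming each effort delivers exactly the lives named: the split budget buys 5 x 5,000 = 25,000 lives saved, while the concentrated budget buys 50,000.)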
(It is a general principle that Bayesians see no difference between the long-run answer and the short-run answer; you never get two different answers from computing the same question two different ways. But the long run is a helpful intuition pump, so I am talking about it anyway.)
The aggregative valuation strategy of "shut up and multiply" arises from the simple preference to have more of something - to save as many lives as possible - when you have to describe general principles for choosing more than once, acting more than once, planning at more than one time.
Aggregation also arises from claiming that the local choice to save one life doesn't depend on how many lives already exist, far away on the other side of the planet, or far away on the other side of the universe. Three lives are one and one and one. No matter how many billions are doing better, or doing worse. 3 = 1 + 1 + 1, no matter what other quantities you add to both sides of the equation. And if you add another life you get 4 = 1 + 1 + 1 + 1. That's aggregation.
When you've read enough heuristics and biases research, and enough coherence and uniqueness proofs for Bayesian probabilities and expected utility, and you've seen the "Dutch book" and "money pump" effects that penalize trying to handle uncertain outcomes any other way, then you don't see the preference reversals in the Allais Paradox as revealing some incredibly deep moral truth about the intrinsic value of certainty. It just goes to show that the brain doesn't goddamn multiply.
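The money pump is easy to exhibit concretely. A minimal sketch, with made-up items and a made-up per-trade fee, of an agent whose preferences run in a circle:

```python
# Circular preferences: the agent prefers A to B, B to C, and C to A, and will
# pay a small fee to trade whatever it holds for the item it prefers to it.
FEE = 0.01                                       # hypothetical per-trade fee
preferred_to = {"B": "A", "C": "B", "A": "C"}    # item -> what the agent prefers to it

held, total_paid = "B", 0.0
for _ in range(9):                   # nine offers = three laps around the cycle
    held = preferred_to[held]        # the agent always accepts the "upgrade"...
    total_paid += FEE                # ...and pays the fee each time
print(f"The agent holds {held} again, ${total_paid:.2f} poorer.")  # holds B, paid $0.09
```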
The primitive, perceptual intuitions that make a choice "feel good" don't handle probabilistic pathways through time very skillfully, especially when the probabilities have been expressed symbolically rather than experienced as a frequency. So you reflect, devise more trustworthy logics, and think it through in words.
When you see people insisting that no amount of money whatsoever is worth a single human life, and then driving an extra mile to save $10; or when you see people insisting that no amount of money is worth a decrement of health, and then choosing the cheapest health insurance available; then you don't think that their protestations reveal some deep truth about incommensurable utilities.
Part of it, clearly, is that primitive intuitions don't successfully diminish the emotional impact of symbols standing for small quantities - anything you talk about seems like "an amount worth considering".
And part of it has to do with preferring unconditional social rules to conditional social rules. Conditional rules seem weaker, seem more subject to manipulation. If there's any loophole that lets the government legally commit torture, then the government will drive a truck through that loophole.
So it seems like there should be an unconditional social injunction against preferring money to life, and no "but" following it. Not even "but a thousand dollars isn't worth a 0.0000000001% probability of saving a life". Though the latter choice, of course, is revealed every time we sneeze without calling a doctor.
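(To spell out the arithmetic behind that exception: 0.0000000001% is a probability of 10^-12, so insisting that a thousand dollars should still be paid at that margin prices a statistical life at $1,000 / 10^-12 = $10^15, a quadrillion dollars per life - which is precisely the valuation the sneeze reveals we don't actually hold.)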
The rhetoric of sacredness gets bonus points for seeming to express an unlimited commitment, an unconditional refusal that signals trustworthiness and refusal to compromise. So you conclude that moral rhetoric espouses qualitative distinctions, because espousing a quantitative tradeoff would sound like you were plotting to defect.
On such occasions, people vigorously want to throw quantities out the window, and they get upset if you try to bring quantities back in, because quantities sound like conditions that would weaken the rule.
But you don't conclude that there are actually two tiers of utility with lexical ordering. You don't conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from 0 to infinity. You don't conclude that utilities must be expressed using hyper-real numbers. Because the lower tier would simply vanish in any equation. It would never be worth the tiniest effort to recalculate for it. All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority.
As Peter Norvig once pointed out, if Asimov's robots had strict priority for the First Law of Robotics ("A robot shall not harm a human being, nor through inaction allow a human being to come to harm") then no robot's behavior would ever show any sign of the other two Laws; there would always be some tiny First Law factor that would be sufficient to determine the decision.
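To see concretely why the lower tier vanishes, here is a toy comparison, with numbers invented for illustration: treat each option as an (upper tier, lower tier) pair and compare lexicographically, which is what strict lexical priority demands.

```python
# A two-tier utility with lexical priority: compare (upper, lower) pairs, upper
# tier first, lower tier only on exact ties.  Python's tuple comparison is
# already lexicographic.
option_a = (0.300000001, -1_000_000.0)   # microscopically better on the upper tier
option_b = (0.300000000, +1_000_000.0)   # enormously better on the lower tier

print(max(option_a, option_b))           # -> (0.300000001, -1000000.0)
# The lower tier only matters when the upper tiers tie exactly, which for
# continuous real-valued utilities essentially never happens - so it never
# pays to spend any thought computing it.
```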
Whatever value is worth thinking about at all, must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off. When you reveal a value, you reveal a utility.
I don't say that morality should always be simple. I've already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up. I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination. And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize - that the valuation of this one event is more complex than I know.
But that's for one event. When it comes to multiplying by quantities and probabilities, complication is to be avoided - at least if you care more about the destination than the journey. When you've reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as "Shut up and multiply."
Where music is concerned, I care about the journey.
When lives are at stake, I shut up and multiply.
It is more important that lives be saved, than that we conform to any particular ritual in saving them. And the optimal path to that destination is governed by laws that are simple, because they are math.
And that's why I'm a utilitarian - at least when I am doing something that is overwhelmingly more important than my own feelings about it - which is most of the time, because there are not many utilitarians, and many things left undone.
</rant>