This is based on a discussion in #lesswrong a few months back, and I am not sure how to resolve it.

Setup: suppose the world is populated by two groups of people: one just wants to be left alone (labeled Jews), while the other hates the first with a passion and wants them dead (labeled Nazis). The second group is otherwise just as "good" as the first (they love their relatives and their country, and are known to be in general quite rational). They just can't help but hate the other guys (this condition is meant to forestall objections like "Nazis ought to change their terminal values"). Maybe the shape of Jewish noses just creeps the hell out of them, or something. Let's just assume, for the sake of argument, that there is no changing that hatred.

Is it rational to exterminate the Jews to improve the Nazis' quality of life? Well, this seems like a silly question. Of course not! Now, what if there are many more Nazis than Jews? Is there a number large enough that exterminating the Jews would be a net positive utility for the world? Umm... Not sure... I'd like to think that probably not; human life is sacred! What if some day their society invents immortality? Then every death is like an extremely large (infinite?) negative utility!

Fine then, no extermination. Just send them all to concentration camps, where they will suffer in misery and probably have a shorter lifespan than they would otherwise. This is not an ideal solution from the Nazi point of view, but it makes them feel a little bit better. And now the utilities are unquestionably comparable, so if there are billions of Nazis and only a handful of Jews, the overall suffering decreases when the Jews are sent to the camps.
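To make the arithmetic explicit (a back-of-the-envelope sketch; the symbols N, k, ε and L are just illustrative, and the assumptions doing all the work are that utilities are additive and comparable across people): if each of N Nazis gains a small utility ε from the camps while each of k Jews loses a large utility L, then the aggregate change is

$$\Delta U = N\epsilon - kL > 0 \quad \text{whenever} \quad N > \frac{kL}{\epsilon},$$

so for any finite L and any nonzero ε there is some population ratio beyond which the naive sum favors the camps.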

This logic is completely analogous to that in the dust specks vs. torture discussions, only my "little XML labels", to quote Eliezer, make it more emotionally charged. Thus, if you are a utilitarian anti-specker, you ought to decide that, barring changing the Nazis' terminal value of hating Jews, the rational behavior is to herd the Jews into concentration camps, or possibly even exterminate them, provided there are enough Nazis in the world who benefit from it.

This is quite a repugnant conclusion, and I don't see a way of fixing it the way the original Repugnant Conclusion is fixed (to paraphrase Eliezer, "only lives worth celebrating are worth creating").

EDIT: Thanks to CronoDAS for pointing out that this is known as the 1000 Sadists problem. Once I had this term, I found that lukeprog had mentioned it on his old blog.

 


What is sometimes called "the 1000 Sadists problem" is a classic "problem" in utilitarianism; this post is another version of it.

Here's another version, which apparently comes from this guy's homework:

Suppose that the International Society of Sadists is holding its convention in Philadelphia and in order to keep things from getting boring the entertainment committee is considering staging the event it knows would make the group the happiest, randomly selecting someone off the street and then torturing that person before the whole convention. One member of the group, however, is taking Phil. 203 this term and in order to make sure that such an act would be morally okay insists that the committee consult what a moral philosopher would say about it. In Smart's essay on utilitarianism they read that "the only reason for performing an action A rather than an alternative action B is that doing A will make mankind (or, perhaps, all sentient beings) happier than will doing B." (Smart, p. 30) This reassures them since they reason that the unhappiness which will be felt by the victim (and perhaps his or her friends and relatives) will be far outweighed by the

... (read more)
5V_V12y
That's a reverse version of the utility monster scenario. Act utilitarianism always leads to these kinds of paradoxes. I don't think it can be salvaged.

(shrug) Sure, I'll bite this bullet.

Yes, if enough people are made to suffer sufficiently by virtue of my existence, and there's no way to alleviate that suffering other than my extermination, then I endorse my extermination.
To do otherwise would be unjustifiably selfish.

Which is not to say I would necessarily exterminate myself, if I had sufficiently high confidence that this was the case... I don't always do what I endorse.

And if it's not me but some other individual or group X that has that property in that hypothetical scenario, I endorse X's extermination as well.

And, sure, if you label the group in an emotionally charged way (e.g., "Nazis exterminating Jews" as you do here), I'll feel a strong emotional aversion to that conclusion (as I do here).

8J_Taylor12y
Be careful, TheOtherDave! Utility Monsters are wily beasts.
1TheOtherDave12y
(nods) Yup. A lot of the difficulty here, of course, as in many such scenarios, is that I'm being asked to consider the sufferers in this scenario people, even though they don't behave like any people I've ever known. That said, I can imagine something that suffers the way they do and that I still care about alleviating the suffering of. The threshold between what I care about and what I don't is, as always, pretty friggin arbitrary.
4Dolores198412y
Really? Screw that. If my existence makes other people unhappy, I'm entirely fine with that. It's not any of their business anyway. We can resolve the ethical question the old-fashioned way. They can try to kill me, and I can try to kill them right back.

It's not any of their business anyway.

If things that make me unhappy aren't my business, what is my business?

But whether your existence makes me unhappy or not, you are, of course, free not to care.
And even if you do care, you're not obligated to alleviate my unhappiness. You might care, and decide to make me more unhappy, for whatever reasons.

And, sure, we can try to kill each other as a consequence of all that.
It's not clear to me what ethical question this resolves, though.

3Viliam_Bur12y
Here is a more difficult scenario: I am a mind uploaded to a computer and I hate everyone except me. Seeing people dead would make me happy; knowing they are alive makes me suffer. (The suffering is not big enough to make my life worse than death.) I also have another strong wish -- to have a trillion identical copies of myself. I enjoy my own company, and a trillion seems like a nice number. What is the Friendly AI, the ruler of this universe, supposed to do? My life is not worse than death, so there is nothing inherently unethical in me wanting to have a trillion copies of myself, if that is economically feasible. All those copies will be predictably happy to exist, and even happier to see their identical copies around them. However, at the moment when my trillion identical copies exist, their total desire to see everyone else dead will become greater than the total desire of all others to live. So it would be utility-maximizing to kill the others. Should the Friendly AI allow it or disallow it... and what exactly would be its true rejection?
7TheOtherDave12y
There are lots of hippo-fighting things I could say here, but handwaving a bit to accept the thrust of your hypothetical... a strictly utilitarian FAI of course agrees to kill everyone else (2) and replace them with copies of you (1). As J_Taylor said, utility monsters are wily beasts. I find this conclusion intuitively appalling. Repugnant, even. Which is no surprise; my ethical intuitions are not strictly utilitarian. (3) So one question becomes, are the non-utilitarian aspects of my ethical intuitions something that can be applied on these sorts of scales, and what does that look like, and is it somehow better than a world with a trillion hateful Viliam_Burs (1) and nobody else? I think it isn't. That is, given the conditions you've suggested, I think I endorse the end result of a trillion hateful Viliam_Burs (1) living their happy lives and the appalling reasoning that leads to it, and therefore the FAI should allow it. Indeed, should enforce it, even if no human is asking for it. But I'm not incredibly confident of that, because I'm not really sure I'm doing a good enough job of imagining that hypothetical world for the things I intuitively take into consideration to fully enter into those intuitive calculations. For example, one thing that clearly informs my intuitions is the idea that Viliam_Bur in that scenario is responsible (albeit indirectly) for countless deaths, and ought to be punished for that, and certainly ought not be rewarded for it by getting to inherit the universe. (4) But of course that intuition depends on all kinds of hardwired presumptions about moral hazard and your future likelihood to commit genocide if rewarded for your last genocide and so forth, and it's not clear that any such considerations actually apply in your hypothetical scenario... although it's not clear that they don't, either. There are a thousand other factors like that. Does that answer your question? === (1) Or, well, a trillion something. I really don't know wha
1Viliam_Bur12y
(1) and (3) -- Actually my original thought was "a trillion in-group individuals (not existing yet) who like each other and hate the out-groups", but then I replaced it with a trillion copies to avoid possible answers like: "if they succeed in killing all out-groups, they will probably split into subgroups and start hating out-subgroups". Let's suppose that the trillion copies, after exterminating the rest of the universe, will be happy. The original mind may even wish to have those individuals created hard-wired to feel like this. (2) -- What if someone else wants a trillion copies too, but expresses their wish later? Let's assume there are two such hateful entities, let's call them A and B. Their copies do not exist yet -- so it makes sense to create a trillion copies of A, and kill everyone else including (the single copy of) B; just as it makes sense to create a trillion copies of B and kill everyone else including (the single copy of) A. Maybe the first one who expresses their wishes wins. Or it may be decided by considering that a trillion As would be twice as happy as a trillion Bs, therefore A wins. Which could be fixed by B wishing for ten trillion copies instead. But generally the idea was that calculations about "happiness for most people" can be manipulated if some group of people desires great reproduction (assuming their children will mostly inherit their preferences), which gradually increases the importance of the wishes of the given group. Even a world ruled by a utilitarian Friendly AI would allow fights between groups, where the winning strategy is to "wish for a situation where it is utilitarian to help us and to destroy our enemies". In such a world, the outside-hateful, inside-loving, hugely reproducing groups with preserved preferences would have an "evolutionary advantage", so they would gradually destroy everyone else.
0TheOtherDave12y
(nods) I'm happy to posit that the trillion ViliamBur-clones, identical or not, genuinely are better off; otherwise of course the entire thing falls apart. (This isn't just "happy," and it's hard to say exactly what it is, but whatever it is I see no reason to believe it's logically incompatible with some people just being better at it than others. In LW parlance, we're positing that ViliamBur is much better at having Fun than everybody else. In traditional philosophical terms, we're positing that ViliamBur is a Utility Monster.) No. That the copies do not exist yet is irrelevant. The fact that you happened to express the wish is irrelevant, let alone when you did so. What matters is the expected results of various courses of action. In your original scenario, what was important was that the expected result of bulk-replicating you was that the residents of the universe are subsequently better off. (As I say, I reluctantly endorse the FAI doing this even against your stated wishes.) In the modified scenario where B is even more of a Utility Monster than you are, it bulk-replicates B instead. If the expected results of bulk-replicating A and B are equipotential, it picks one (possibly based on other unstated relevant factors, or at random if you really are equipotential). Incidentally, one of the things I had to ignore in order to accept your initial scenario was the FAI's estimated probability that, if it doesn't wipe everyone else out, sooner or later someone even more utility-monsterish than you (or B) will be born. Depending on that probability, it might not bulk-replicate either of you, but instead wait until a suitable candidate is born. (Indeed, a utilitarian FAI that values Fun presumably immediately gets busy constructing a species more capable of Fun than humans, with the intention of populating the universe with them instead of us.) Again, calculations about utility (which, again, isn't the same as happiness, though it's hard to say exactly what it is)
0Viliam_Bur12y
Oh. I would hope that the FAI would instead turn us into the species most capable of fun. But considering the remaining time of the universe and all the fun the new species will have there, the difference between (a) transforming us and (b) killing us and creating the other species de novo is negligible. The FAI would probably choose the faster solution, because it would allow more total fun-time for the superhappies. If there are multiple possible superhappy designs, equivalent in their fun-capacity, the FAI would choose the one that cares about us the least, to reduce their possible regret of our extinction. Probably something very dissimilar to us (as much as the definition of "fun" allows). They would care about us less than we care about the dinosaurs.
0TheOtherDave12y
Faster would presumably be an issue, yes. Minimizing expected energy input per unit Fun output would presumably also be an issue. Of course, all of this presumes that the FAI's definition of Fun doesn't definitionally restrict the experience of Fun to 21st-century humans (either as a species, or as a culture, or as individuals). Unrelatedly, I'm not sure I agree about regret. I can imagine definitions of Fun such that maximizing Fun requires the capacity for regret, for example.

Well, this can easily become a costly signalling issue when the obvious (from the torture-over-speck-supporter's perspective) comment would read "it is rational for the Nazis to exterminate the Jews". I would certainly not like to explain having written such a comment to most people. Claiming that torture is preferable to dust specks in some settings is comparatively harmless.

Given this, you probably shouldn't expect honest responses from a lot of commenters.

if you are a specker, you ought to decide that, barring changing Nazi's terminal value of hating Jews, the rational behavior is to [harm Jews]

The use of "specker" to denote people who prefer torture to specks can be confusing.

4Kindly12y
Let's call them "torturers" instead. Edit: or "Nazis".

Wait, are you calling me a Nazi?

0Luke_A_Somers12y
Speck-free?
0shminux12y
Edited, thanks.

This probably would have been better if you'd made it Venusians and Neptunians or something.

But wouldn't that defeat the purpose, or am I missing something? I understood the offensiveness of the specific example to be the point.

9palladias12y
Right, I thought the point was showing that people are viscerally uncomfortable with the result of this line of reasoning, and making them decide whether they reject (a) the reasoning, (b) the discomfort, or (c) the membership of this example in the torture vs. specks class.
-2fubarobfusco12y
That's called "trolling", yes?

Trolling usually means disrupting the flow of discussion by deliberate offensive behaviour towards other participants. It usually doesn't denote proposing a thought experiment with a possible solution that is likely to be rejected for its offensiveness. But this could perhaps be called "trolleying".

2shminux12y
One of the best ever puns I recall on this forum!

I've considered using neutral terms, but then it is just too easy to say "well, it just sucks to be you, Neptunian, my rational anti-dust-specker approach requires you to suffer!"

It's a bad sign if you feel your argument requires violating Godwin's Law in order to be effective, no?

Not strictly. It's still explicitly genocide with Venusians and Neptunians -- it's just easier to ignore that fact in the abstract. Connecting it to an actual genocide causes people to reference their existing thinking on the subject. Whether or not that existing thinking is applicable is open for debate, but the tactic's not invalid out of hand.

8prase12y
The supposed positive (making the genocide easier to imagine) is, however, outweighed by the big negative of the connotations brought in by the choice of terminology. It was certainly not true of the Nazis that their hatred towards the Jews was an immutable terminal value, and the "known to be in general quite rational" part is also problematic. Of course we shouldn't fight the hippo, but it is hard to separate the label "Nazi" from its real meaning. As a result, the replies to this post are going to be affected by three considerations: 1) the commenters' stance towards the speck/torture problem, 2) their ability to accept the terms of a hypothetical while ignoring most connotations of the terminology used, and 3) their courage to say something which may be interpreted as support for Nazism by casual readers. Which makes the post pretty bad as a thought experiment intended to probe only the first question.
4Dolores198412y
I suppose that's fair. I do think that trying to abstract away the horror of genocide is probably not conducive to a good analysis, either, but there may be an approach better suited to this that does not invoke as much baggage.
8DanArmak12y
It's a bad sign if you feel your ethics don't work (or shouldn't be talked about) in an important, and real, case like the Nazis vs. Jews.
6orthonormal12y
* Reversed Stupidity Is Not Intelligence
4DanArmak12y
I'm not saying genocide is bad because Hitler did it. I'm saying it's bad for other reasons, regardless of who does it, and Hitler should not be a special case either way. In your previous comment you seemed to be saying that a good argument should be able to work without invoking Hitler. I'm saying that a good argument should also be able to apply to Hitler just as well as to anyone else. Using Hitler as an example has downsides, but if someone claims the argument actually doesn't work for Hitler as well as for other cases, then by all means we should discuss Hitler.
-2shminux12y
It is also a bad sign if you invoke TWAITW. If you check the law, as stated on Wikipedia, it does not cover my post: You can sort of make your case that it is covered by one of the Corollaries: except for the proposed amendment: Which is exactly what I was doing (well, one out of three, so not exactly).

As discussed there, pointing out that it has this feature isn't always the worst argument in the world. If you have a coherent reason why this argument is different from other moral arguments that require Godwin's Law violations for their persuasiveness, then the conversation can go forward.

EDIT: (Parent was edited while I was replying.) If "using Jews and Nazis as your example because replacing them with Venusians and Neptunians would fail to be persuasive" isn't technically "Godwin's Law", then fine, but it's still a feature that correlates with really bad moral arguments, unless there's a relevant difference here.

5Raemon12y
This is a bit of a fair point. I guess I'd have written the hypothetical in a few stages to address the underlying issue, which is presumably either: 1) What happens if it turns out humans don't have compatible values? 2) How does our morality handle aliens or transhumans with unique moralities? What if they are almost identical to our own? I don't think the babyeater story provided an answer (and I don't have one now), but I felt like it addressed the issue in an emotionally salient way that wasn't deceptive.
3[anonymous]12y
But then we all know what people's answers would be. I think the point is that if you took a Martian or Neptunian who happens to really hate Venusians and like Neptunians in his native universe, and presented him with a universe similar to the OP's, he would most likely not behave like the utilitarian he claims to be or wants to be. That's not really much of a problem. The problem is that he is likely to come up with all sorts of silly rationalizations to cover up his true rejection.

It is just a logical conclusion from "dust specks". You can/must do horrible things to a small minority, if the members of a large majority benefit a little from it.

Another part of the Sequence I reject.

5SilasBarta12y
Wait, what was the conclusion of dust specks? I'm guessing "torture", but then, why is this conclusion so strong and obvious (after the fact)? I had always been on the dust specks side, for a few reasons, but I'd like to know why this position is so ridiculous, and I still don't know, even despite having participated in those threads.
8ShardPhoenix12y
The problem attempts to define the situation so that "torture" is utility maximizing. Therefore if you are a utility maximizer, "torture" is the implied choice. The problem is meant to illustrate that in extreme cases utility maximization can (rightly or wrongly) lead to decisions that are counter-intuitive to our limited human imaginations.
6Thomas12y
For me, the sum of all the pains isn't a good measure of the dreadfulness of a situation. The maximal pain is a better one. But I don't think it is more than a preference. It is my preference only. Like a strawberry is better than a blueberry. For my taste, the dust specks for everybody is better than a horrible torture for just one. Ask yourself: in which world would you want to be, in all the roles?
5ArisKatsaris12y
It's worse to break the two legs of a single man than to break one leg each of seven billion people? If a genie forced you to choose between the two options, would you really prefer the latter scenario? I'm sorry, but I really can't imagine the size of 3^^^3. So I really can't answer this question by trying to imagine myself filling all those roles. My imagination just fails at that point. And if anyone here thinks they can imagine it, I think they're deluding themselves. But if anyone wants to try, I'd like to remind them that in a random sample there'd probably be innumerable quintillions of people that would already be getting tortured for life one way or another. You're not removing all that torture if you vote against torturing a single person more.
0Thomas12y
First, I would eliminate two-leg breaking. Second, one-leg breaking. Of course, an epidemic of one-leg breaking would have other severe effects, like starvation to death and the like, which should come even before two broken legs. In a clean abstract world of just a broken leg or two per person, with no further implications, the maximal pain is still the first to be eliminated, if you ask me.
8CarlShulman12y
From behind the veil of ignorance, would you rather have a 100% chance of one broken leg, or a 1/7,000,000,000 chance of two broken legs and 6,999,999,999/7,000,000,000 chance of being unharmed?
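Spelling out the expected harms (a quick sketch, assuming disutility simply adds up across broken legs):

$$\mathbb{E}[\text{option 1}] = 1 \text{ broken leg}, \qquad \mathbb{E}[\text{option 2}] = \tfrac{2}{7{,}000{,}000{,}000} \approx 3\times 10^{-10} \text{ broken legs}.$$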
1Thomas12y
I would opt for two broken legs with a small probability, of course, in your scenario. But I would choose one broken leg if that would mean that the total number of two-broken-legs cases would then go to zero. In other words, I would vaccinate everybody (the vaccination causes discomfort) to eliminate a deadly disease like Ebola which kills few. What would you do?
2CarlShulman12y
Creatures somewhere in existence are going to face death and severe harm for the foreseeable future. This view then seems inert. There are enough minor threats with expensive countermeasures (more expensive as higher reliability is demanded) that this approach would devour all available wealth. It would bar us from, e.g. traveling for entertainment (risk of death exists whether we walk, drive, or fly). I wouldn't want that tradeoff for society or for myself.
1TheOtherDave12y
I would endorse choosing a broken leg for one person if that guaranteed that nobody in the world had two broken legs, certainly. This seems to have drifted rather far from the original problem statement. I would also vaccinate a few billion people to avoid a few hundred deaths/year, if the vaccination caused no negative consequences beyond mild discomfort (e.g., no chance of a fatal allergic reaction to the vaccine, no chance of someone starving to death for lack of the resources that went towards vaccination, etc). I'm not sure I would vaccinate a few billion people to avoid a dozen deaths though... maybe, maybe not. I suspect it depends on how much I value the people involved. I probably wouldn't vaccinate a few billion people to avoid a .000001 chance of someone dying. Though if I assume that people normally live a few million years instead of a few dozen, I might change my mind. I'm not sure though... it's hard to estimate with real numbers in such an implausible scenario; my intuitions about real scenarios (with opportunity costs, knock-on effects, etc.) keep interfering. Which doesn't change my belief that scale matters. Breaking one person's leg is preferable to breaking two people's legs. Breaking both of one person's legs is preferable to breaking one of a million people's legs.
-2ArisKatsaris12y
I don't think you understand the logic behind the anti-speckers' choice. It isn't that we always oppose the greater number of minor disutilities. It's that we believe that there's an actual judgment to be made given the specific disutilities and numbers involved -- you, on the other hand, just ignore the numbers involved altogether. I would vaccinate everyone to eradicate Ebola, which kills few. But I would not vaccinate everyone to eradicate a different disease that mildly discomforts a few people only slightly more than the vaccination process itself.
-2Thomas12y
The logic is: Integrate two evils through time and eliminate that which has a bigger integral! I just don't agree with it.
-1ArisKatsaris12y
May I ask if you consider yourself a deontologist, a consequentialist, or something else?
5TheOtherDave12y
Agreed that introducing knock-on effects (starvation and so forth) is significantly changing the scenario. I endorse ignoring that. Given seven billion one-legged people and one zero-legged person, and the ability to wave a magic wand and cure either the zero-legged person or the 6,999,999,999 one-legged people, I heal the one-legged people. That's true even if I have the two broken legs. That's true even if I will get to heal the other set later (as is implied by your use of the word "first"). If I've understood you correctly, you commit to using the wand to heal my legs instead of healing everyone else. If that's true, I will do my best to keep that wand out of your hands.
-1Thomas12y
So, you would do everything you can to prevent a small-probability but very bad scenario? Wouldn't you just neglect it?
0TheOtherDave12y
I would devote an amount of energy to avoiding that scenario that seemed commensurate with its expected value. Indeed, I'm doing so right now (EDIT: actually, on consideration, I'm devoting far more energy to it than it merits). If my estimate of the likelihood of you obtaining such a wand (and, presumably, finding the one person in the world who is suffering incrementally more than anyone else and alleviating his or her suffering with it) increases, the amount of energy I devote to avoiding it might also increase.
5ArisKatsaris12y
Different people had different answers. Eliezer was in favor of torture. I am likewise. Others were in favor of the dust specks. If you want to know why some particular person called your position ridiculous, perhaps you should ask whatever particular person so called it. My own argument/illustration is that for something to be called the ethically right choice, things should work out okay if more people chose it, the more the better. But in this case, if a billion people chose dust-specks or the equivalent thereof, then whole vast universes would be effectively tortured. A billion tortures would be tragic, but it pales in comparison to a whole universe getting tortured. Therefore dust-specks is not a universalizable choice, therefore it's not the ethically right choice.
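To put rough numbers on the universalizability point (a sketch, assuming each of the 10^9 choosers faces an independent specks-vs-torture choice over the same 3^^^3 people, and that accumulating ~10^9 specks is itself torture-grade for a person):

$$\text{Harm}_{\text{all choose specks}} \approx 3\uparrow\uparrow\uparrow 3 \text{ people} \times 10^9 \text{ specks each}, \qquad \text{Harm}_{\text{all choose torture}} \approx 10^9 \text{ people tortured},$$

and since 3^^^3 utterly dwarfs 10^9, universalizing the "specks" choice effectively tortures unimaginably more people than universalizing the "torture" choice does.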
1SilasBarta12y
Nobody did; I was replying to the insinuation that it must be ridiculous, regardless of the reasoning. That doesn't work if this is a one-off event, and equating "distributed" with "concentrated" torture requires resolving the multiperson utility aggregation problem, so it would be hard to consider either route ridiculous (as implied by the comment where I entered the thread).
2ArisKatsaris12y
The event doesn't need to be repeated; the type of event needs to be repeated (whether you'll choose a minor disutility spread to many, or a large disutility to one). And these types of choices do happen repeatedly, all the time, even though most of them aren't about absurdly large numbers like 3^^^3 or absurdly small disutilities like a dust speck. Things that our mind isn't made to handle. If someone asked you whether it'd be preferable to save a single person from a year's torture, but in return a billion people would have to get their legs broken -- I bet you'd choose to leave the person tortured; because the numbers are a bit more reasonable, and so the actual proper choice is returned by your brain's intuition...
2SilasBarta12y
But that's assuming they are indeed the same type (that the difference in magnitude does not become a difference in type); and if not, it would make a difference whether or not this choice would in fact generalize. No, I wouldn't, and for the same reason I wouldn't in the dust specks case: the 3^^^3 can collectively buy off the torturee (i.e. provide compensation enough to make the torture preferable given it) if that setup is Pareto-suboptimal, while the reverse is not true. [EDIT to clarify the above paragraph: if we go with the torture, and it turns out to be pareto-suboptimal, there's no way the torturee can buy off the 3^^^3 people -- it's a case where willingness to pay collides with the ability to pay (or perhaps, accept). If the torturee, in other words, were offered enough money to buy off the others (not part of the problem), he or she would use the money for such a payment. In contrast, if we went with the dust specks, and it turned out to be Pareto-suboptimal, then the 3^^^3 could -- perhaps by lottery -- come up with a way to buy off the torturee and make a Pareto-improvement. Since I would prefer we be in situations that we can Pareto-improve away from vs those that can't, I prefer the dust specks. Moreover, increasing the severity of the disutility that the 3^^^3 get -- say, to broken legs, random murder, etc -- does not change this conclusion; it just increases the consumer surplus (or decreases the consumer "deficit") from buying off the torturee. /end EDIT] Whatever error I've made here does not appear to stem from "poor handling of large numbers", the ostensible point of the example.

Imagine if humanity survives for the next billion years, expands to populate the entire galaxy, has a magnificent (peaceful, complex) civilization, and is almost uniformly miserable because it consists of multiple fundamentally incompatible subgroups. Nearly everyone is essentially undergoing constant torture, because of a strange, unfixable psychological quirk that creates a powerful aversion to certain other types of people (who are all around them).

If the only alternative to that dystopian future (besides human extinction) is to exterminate some subgroup of humanity, then that creates a dilemma: torture vs. genocide. My inclination is that near-universal misery is worse than extinction, and extinction is worse than genocide.

And that seems to be where this hypothetical is headed, if you keep applying "least convenient possible world" and ruling out all of the preferable potential alternatives (like separating the groups, or manipulating either group's genes/brains/noses to stop the aversive feelings). If you keep tailoring a hypothetical so that the only options are mass suffering, genocide, and human extinction, then the conclusion is bound to be pretty repugnant. None of those bullets are particularly appetizing but you'll have to chew on one of them. Which bullet to bite depends on the specifics; as the degree of misery among the aversion-sufferers gets reduced from torture-levels towards insignificance at some point my preference ordering will flip.

0Pentashagon12y
I noticed something similar in another comment. CEV must compare the opportunity cost of pursuing a particular terminal value at the expense of all other terminal values, at least in a universe with constrained resources. This leads me to believe that CEV will suggest that the most costly (in terms of utility opportunity lost by choosing to spend time fulfilling a particular terminal value instead of another) terminal value be abandoned until only one is left and we become X maximizers. This might be just fine if X is still humane, but it seems like any X will be expressible as a conjunction of disjunctions and any particular disjuctive clause will have the highest opportunity cost and could be removed to increase overall utility, again leading to maximizing the smallest expressible (or easiest to fulfill) goal.
0Bruno_Coelho12y
Classic failure scenarios. Great morphological/structural changes need legal constraints so they don't become very common, or so that people stay risk-averse, to prevent the creation of innumerable subgroups with alien values. But contra this, subgroups could go astray far enough not to be caught, and make whatever changes they want, even creating new subgroups to torture or kill. In this case specifically, I assume we have to deal with this problem before structural changes become common.

This looks like an extension of Yvain's post on offense vs. harm-minimization, with Jews replacing salmon and unchangeable Nazis replacing electrode-implanted Brits.

The consequentialist argument, in both cases, is that if a large group of people are suffering, even if that suffering is based on some weird and unreasonable-seeming aversion, then indefinitely maintaining the status quo in which that large group of people continues to suffer is not a good option. Depending how you construct your hypothetical scenario, and how eager your audience is to play along, you can rule out all of the alternative courses of action except for ones that seem wrong.

The assumption "their terminal values are fixed to hate group X" is something akin to "this group is not human, but aliens with an arbitrary set of values that happen to mostly coincide with traditional human values, but with one exception." Which is not terribly different from "This alien race enjoys creativity and cleverness and love and other human values... but also eats babies."

Discussion of human morality only makes sense when you're talking about humans. Yes, arbitrary groups X and Y may, left to their own devices, find it rational to do all kinds of things we find heinous, but then you're moving away from morality and into straight up game theory.

7TimS12y
Descriptively true, but some argument needs to be made to show that our terminal values never require us to consider any alien's preferences. Preferably, this argument would also address whether animal cruelty laws are justified by terminal values or instrumental values.
0Ghatanathoah12y
I don't think the argument is that. It's more like our terminal values never require us to consider a preference an alien has that is radically opposed to important human values. If we came across an alien race that, due to parallel evolution, has values that coincide with human values in all important ways, we would be just as obligated to respect their preferences as we would those of a human. If we ran across an alien race whose values were similar in most respects, but occasionally differed in a few important ways, we would be required to respect their preferences most of the time, but not when they were expressing one of those totally inhuman values. In regard to animal cruelty, "not being in pain" is a value both humans and animals have in common, so it seems like it would be a terminal value to respect it.
0TimS12y
That's certainly how we behave. But is it true? Why? Edit: If your answer is "Terminal value conflicts are intractable," I agree. But that answer suggests certain consequences in how society should be organized, and yet modern society does not really address actual value conflicts with "Purge it with fire." Also, the word values in the phrases "human values" and "animal values" does not mean the same thing in common usage. Conventional wisdom holds that terminal values are not something that non-human animals have - connotatively if not denotatively.
0Ghatanathoah12y
I think I might believe that such conflicts are intractable. The reason that society generally doesn't flat-out kill people with totally alien values is that such people are rare-to-nonexistent. Humans who are incurably sociopathic could be regarded as creatures with alien values, providing their sociopathy is egosyntonic. We do often permanently lock up or execute such people. You might be right, if you define "value" as "a terminal goal that a consequentialist creature has" and believe most animals do not have enough brainpower to be consequentialists. If this is the case, I think that animal cruelty laws are probably an expression of the human value that creatures not be in pain.
4shminux12y
Are you saying that immutable terminal values is a non-human trait?
7novalis12y
With respect to group-based hatred, it seems that there have been changes in both directions over the course of human history (and change not entirely caused by the folks with the old views dying off). So, yeah, I think your Nazis aren't entirely human.
2DanielLC12y
Those baby-eating aliens produce large net disutility, because the babies hate it. In that case, even without human involvement, it's a good idea to kill the aliens. To make it comparable, the aliens have to do something that wouldn't be bad if it didn't disgust the humans. For example, if they genetically modified themselves so that the babies they eat aren't sentient, but have the instincts necessary to scream for help.
1andrew sauer3y
This situation is more like "they eat babies, but they don't eat that many, to the extent that it produces net utility given their preferences for continuing to do it."

Isn't it ODD that in a world of Nazis and Jews, I, who am neither, am being asked to make this decision? If I were a Nazi, I'm sure what my decision would be. If I were a Jew, I'm sure what my decision would be.

Actually, now that I think about it, this will be a huge problem if and when humanity, in need of new persons to speak to, decides to uplift animals. It is an important question to ask.

4komponisto12y
Inspired by this comment, here's a question: what would the CEV of the inhabitants of shminux's hypothetical world look like?
8ArisKatsaris12y
There's obviously no coherence if the terminal values of space-Jews include their continuing existence, and the terminal values of space-Nazis include the space-Jews' eradication.
1komponisto12y
So what does the algorithm do when you run it?
0ArisKatsaris12y
Prints out "these species' values do not cohere"? Or perhaps "both species coherent-extrapolatedly appreciate pretty sunsets, therefore maximize prettiness of sunsets, but don't do anything that impacts the space-Jews' survival one way or another, or the space-Nazis' survival either, if that connects negatively to the former"?
0zerker200012y
Return a "divide by zero"-type error, or send your Turing machine up in smoke trying.
5shminux12y
Note that the CEV must necessarily address contradicting terminal values. Thus an FAI is assumed to be powerful enough to affect people's terminal values, at least over time. For example, (some of the) Nazis might be OK with not wanting Jews dead, they are just unable to change their innate Jewphobia. An analogy would be people who are afraid of snakes but would not mind living in a world where snakes are non-poisonous (and not dangerous in any other way) and they are not afraid of them.
0Pentashagon12y
It would probably least-destructively turn the jews into nazis or vice versa; e.g. alter one or the other's terminal values such that they were fully compatible. After all, if the only difference between jews and nazis is the nose, why not ask the jews to change the nose and gain an anti-former-nose preference (theoretically the jews would gain utility because they'd have a new terminal value they could satisfy). Of course this is a fine example of how meaningless terminal values can survive despite their innate meaninglessness; the nazis should realize the irrationality of their terminal value and simply drop it. But will CEV force them to drop it? Probably not. The practical effect is the dissolution of practical utility; utility earned from satisfying an anti-jew preference necessarily reduces the amount of utility attainable from other possible terminal values. That should be a strong argument CEV has to convince any group that one of their terminal values can be dropped, by comparing the opportunity cost of satisfying it to the benefit of satisfying other terminal values. This is even more of a digression from the original question, but I think this implies that CEV may eventually settle on a single, maximally effective terminal value.
0[anonymous]12y
I think CEV is supposed to execute a controlled shutdown in that kind of situation and helpfully inform the operators that they live in a horrible, horrible world.
0Bruno_Coelho12y
I suspect the names of the groups make the framing of the problem a bit misleading. Framing it in terms of groups A and B would probably make the evaluation clearer.
2blogospheroid12y
I just followed the naming convention of the post. There is already a thread where the naming is being disputed starting with Alicorn's comment on venusians and neptunians. As I understand, the naming is to bring near mode thinking right into the decision process and disrupt what would have otherwise been a straightforward utilitarian answer - if there are very few jews and billions of nazis, exterminate the jews.

It is always rational for the quasi-Nazis to kill the quasi-Jews, from the Nazi perspective. It's just not always rational for me to kill the Jews - just because someone else wants something, doesn't mean I care.

But if I care about other people in any concrete way, you could modify the problem only slightly in order to have the Nazis suffer in some way I care about because of their hatred of the Jews. In which case, unless my utility is bounded, there is indeed some very large number that corresponds to when it's higher-utility to kill the Jews than to do nothing.

Of course, there are third options that are better, and most of them are even easier than murder, meaning that any agent like me isn't actually going to kill any Jews, they'll have e.g. lied about doing so long before.

One of many utilitarian conundrums that are simply not my problem, not being a utilitarian.

If you do happen to think that there is a source of morality beyond human beings... and I hear from quite a lot of people who are happy to rhapsodize on how Their-Favorite-Morality is built into the very fabric of the universe... then what if that morality tells you to kill people?

If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic—anywhere you care to put it—then what if you get a chance to read that stone tablet, and it turns out to say "Pain Is Good"? What t

... (read more)

I suspect what you mean by desire utilitarianism is what Wikipedia calls preference utilitarianism, which I believe is the standard term.

0shminux12y
Possibly. I was using the term I found online in relation to the 1000 Sadists problem, and I did not find this or similar problem analyzed on Wikipedia. Maybe SEP has it?
0CronoDAS12y
I don't know if the "1000 Sadists problem" is the common term for this scenario, it's just one I've seen used in a couple of places.
0CronoDAS12y
"Desire utilitarianism" is a term invented by one Alonzo Fyfe and it isn't preference utilitarianism. It's much closer to "motive utilitarianism".
0shminux12y
Someone ought to add a few words about it to Wikipedia.
0Eneasz12y
It was tried a couple years back, Wikipedia shut down the attempt.

Of course I wouldn't exterminate the Jews! I'm a good human being, and good human beings would never endorse a heinous action like that. Those filthy Nazis can just suck it up, nobody cares about their suffering anyway.

The mistake here is in saying that satisfying the preferences of other agents is always good in proportion to the number of agents whose preference is satisfied. While there have been serious attempts to build moral theories with that as a premise, I consider them failures, and reject this premise. Satisfying the preferences of others is only usually good, with exceptions for preferences that I strongly disendorse, independent of the tradeoffs between the preferences of different people. Also, the value of satisfying the same preference in many people grows sub-linearly with the number of people.

Hm.

I suppose, if LW is to be consistent, comments on negatively voted posts should incur the same karma penalty that comments on negatively voted comments do.

0Oscar_Cunningham12y
shminux claims (in an edit to the post) that they do. Do they or not?
1shminux12y
I don't actually know, I simply assumed that this would be the case for posts as well as comments.

How important is the shape of the noses to the Jewish people?

Consider a Jew who is injured in an accident, and the best reconstruction available restores the nose to a Nazi shape and not a Jewish one. How would his family react? How different would his ability to achieve his life's goals, and his sense of himself, be?

How would a Nazi react to such a Jew?

If the aspect of the Jews that the Nazis have to change is something integral to their worldview, then a repugnant conclusion becomes sort of inevitable.

Till then, pull on the rope sideways. Try to save as many people as possible.

0NancyLebovitz10y
In the real world, Nazis believed that Jews were inimical to Aryans, and treacherous as well. Jews that didn't look like Jews were still considered to be threats.

First, I'm going to call them 'N' and 'J', because I just don't like the idea of this comment being taken out of context and appearing to refer to the real things.

Does there exist a relative proportion of N to J where extermination is superior to the status quo, under your assumptions? In theory yes. In reality, it's so big that you run into a number of practical problems first. I'm going to run through as many places where this falls down in practice as I can, even if others have mentioned some.

  • The assumption that if you leave J fixed and increase N, th
... (read more)
0DanArmak12y
How do you actually define the correct proportion, and measure the relevant parameters?
1Irgy12y
The funny thing is that the point of my post was the long explanation of practical problems, yet both replies have asked about the "in theory yes" part. The point of those three words was to point out that the statements that followed hold despite my own position on the torture/dust specks issue. As far as your questions go, I, along with, I expect, the rest of the population of planet Earth, have close to absolutely no idea. Logically deriving the theoretical existence of something does not automatically imbue you with the skills to calculate its precise location. My only opinion is that the number is significantly more than the "billions of N and handful of J" mentioned in the post, indeed more than will ever occur in practice, and substantially less than 3^^^^^3.
0DanArmak12y
How do you determine your likelihood that the number is significantly more than billions vs. a handful - say, today's population of Earth against one person? If you have "close to absolutely no idea" of the precise value, there must be something you do know to make you think it's more than a billion to one and less than 3^^^^^3 to one. This is a leading question: your position (that you don't know what the value is, but you believe there is a value) is dangerously close to moral realism...
0Irgy12y
So, I went and checked the definition of "moral realism" to understand why the term "dangerously" would be applied to the idea of being close to supporting it, and failed to find enlightenment. It seems to just mean that there's a correct answer to moral questions, and I can't understand why you would be here arguing about a moral question in the first place if you thought there was no answer. The sequence post The Meaning of Right seems to say "capable of being true" is a desirable and actual property of metaethics. So I'm no closer to understanding where you're going with this than before. As to how I determined that opinion, I imagined the overall negative effects of being exterminated or sent to a concentration camp, imagined the fleeting sense of happiness in knowing someone I hate is suffering pain, and then did the moral equivalent of estimating how many grains of rice one could pile up on a football field (i.e. made a guess). This is just my current best algorithm though, I make no claims of it being the ultimate moral test process. I hope you can understand that I don't claim to have no idea about morality in general, just about the exact number of grains of rice on a football field. Especially since I don't know the size of the grains of rice or the code of football either.
0DanArmak12y
Moral realism claims that: Moral realists have spilled oceans of ink justifying that claim. One common argument invents new meanings for the word "true" ("it's not true the way physical fact, or inductive physical law, or mathematical theorems are true, but it's still true! How do you know there aren't more kinds of truth-ness in the world?") They commit, in my experience, a multitude of sins - of epistemology, rationality, and discourse. I asked myself: why do some people even talk about moral realism? What brings this idea to their minds in the first place? As far as I can see, this is due to introspection (the way their moral intuitions feel to them), rather than inspection of the external world (in which the objective morals are alleged to exist). Materialistically, this approach is suspect. An alien philosopher with different, or no, moral intuitions would not come up with the idea of an objective ethics no matter how much they investigated physics or logic. (This is, of course, not conclusive evidence on its own that moral realism is wrong. The conclusive evidence is that there is no good argument for it. This merely explains why people spend time talking about it.) Apart from being wrong, I called moral realism dangerous because - in my personal experience - it is correlated with motivated, irrational arguments. And also because it is associated with multiple ways of using words contrary to their normal meaning, sometimes without making this clear to all participants in a conversation. As for Eliezer, his metaethics certainly doesn't support moral realism (under the above definition). A major point of that sequence is exactly that there is no purely objective ethics that is independent of the ethical actor. In his words, there is no universal argument that would convince "even a ghost of perfect emptiness". However, he apparently wishes to reclaim the word "right" or "true" and be able to say that his ethics are "right". So he presents an argument that t
0Irgy12y
Well, this seems to be a bigger debate than I thought I was getting into. It's tangential to any point I was actually trying to make, but it's interesting enough that I'll bite. I'll try and give you a description of my point of view so that you can target it directly, as nothing you've given me so far has really put much of a dent in it. So far I just feel like I'm suffering from guilt by association - there are people out there saying "morality is defined as God's will", and as soon as I suggest it's anything other than some correlated preferences I fall in their camp. Consider first the moral views that you have. Now imagine you had more information, and had heard some good arguments. In general your moral views would "improve" (give or take the chance of specifically misrepresentative information or persuasive false arguments, which in the long run should eventually be cancelled out by more information and arguments). Imagine also that you're smarter; again, in general your moral views should improve. You should prefer moral views that a smarter, better informed version of yourself would have to your current views. Now, imagine the limit of your moral views as the amount of information you have approaches perfect information, and also your intelligence approaches the perfect rational Bayesian. I contend that this limit exists, and this is what I would refer to as the ideal morality. This "existence" is not the same as being somehow "woven into the fabric of the universe". Aliens could not discover it by studying physics. It "exists", but only in the sense that Aleph 1 exists or "the largest number ever to be uniquely described by a non-potentially-self-referential statement" exists. If I don't like what it says, that's by definition either because I am misinformed or stupid, so I would not wish to ignore it and stick with my own views (I'm referring here to one of Eliezer's criticisms of moral realism). So, if I bravely assume you accept that this limit exists,
0DanArmak12y
I'll ignore several other things I disagree with, or that are wrong, and concentrate on what I view as the big issue, because it's really big. Note: this is the limit of my personal morals. My limit would not be the same as your limit, let alone a nonhuman's limit. So aliens could discover it by studying mathematics, like a logical truth? Would they have any reason to treat it as a moral imperative? How does a logical fact or mathematical theorem become a moral imperative? You gave that definition yourself. Then you assume without proof that those ideal morals exist and have the properties you describe. Then you claim, again without proof or even argument (beyond your definition), that they really are the best or idealized morals, for all humans at least, and describe universal moral obligations. You can't just give an arbitrary definition and transform it into a moral claim without any actual argument. How is that different from me saying: I define X-Morals as "the morals achieved by all sufficiently well informed and smart humans, which require they must greet each person they meet by hugging. If you don't like this requirement, it's by definition because you're misinformed or stupid." The same conclusion about facts they have information about: like physical facts, or logical theorems. But nobody has "information about morals". Morals are just a kind of preference. You can only have information about some particular person's morals, not morals in themselves. So perfect Bayesians will agree about what my morals are and about what your morals are, but that doesn't mean your and my morals are the same. Your argument is circular. Well, first of all, that's not how everyone else uses the word morals. Normally we would say that your morals are to do what's best for everyone; while my morals are something else. Calling your personal morals "simply morals" is equivalent to saying that my (different) morals shouldn't be called by the name morals or even "Daniel's m
0Irgy11y
We obviously have a different view on the subjectivity of morals, no doubt an argument that's been had many times before. The sequences claim to have resolved it or something, but in such a way that we both still seem to see our views as consistent with them. To me, subjective morals like you talk about clearly exist, but I don't see them as interesting in their own right. They're just preferences people have about other people's business. Interesting for the reasons any preference is interesting but no different. The fundamental requirement for objective morals is simply that one (potential future) state of the world can be objectively better or worse than another. What constitutes "better" and "worse" being an important and difficult question of course, but still an objective one. I would call the negation, the idea that every possible state of the world is equally as good as any other, moral nihilism. I accept that it's used for the subjective type as well, but personally I save the use of the word "moral" for the objective type. The actual pursuit of a better state of the world irrespective our own personal preferences. I see objectivity as what separates morals from preferences in the first place - the core of taking a moral action is that your purpose is the good of others, or more generally the world around you, rather than yourself. I don't agree that people having moral debates are simply comparing their subjective views (which sounds to me like "Gosh, you like fish? I like fish too!"), they're arguing because they think there is actually an objective answer to which of them is right and they want to find out who it is (well, actually usually they just want to show everyone that it's them, but you know what I mean). This whole argument is actually off topic though. I think the point where things went wrong is where I answered the wrong question (though in my defence it was the one you asked). You asked how I determine what the number N is, but I never r
0V_V12y
Real Ns would disagree. They did realize that killing Js wasn't exactly a nice thing to do. At first they considered relocating Js to some remote land (Madagascar, etc.). When it became apparent that relocating millions while fighting a world war wasn't feasible and they resolved to kill them instead, they had to invent death camps rather than just shooting them, because even the SS had problems doing that. Nevertheless, they had to free up the Lebensraum to build the Empire that would Last for a Thousand Years, and if these Js were in the way, well, too bad for them. Ends before the means: utilitarianism at work.
7Irgy12y
I don't see why utilitarianism should be held accountable for the actions of people who didn't even particularly subscribe to it. Also, why are you using N and J to talk about actual Nazis and Jews? That partly defeats the purpose of my making the distinction.
0V_V12y
They may not have framed the issue explicitly in terms of maximizing an aggregate utility function, but their behavior seems consistent with consequentialist moral reasoning.
2Irgy12y
Reversed stupidity is not intelligence. That utilitarianism is dangerous in the hands of someone with a poor value function is old news. The reasons why utilitarianism may or may not be correct exist in an entirely unrelated argument space.
4Nick_Tarleton12y
click the "Show help" button below the comment box
2Irgy12y
Ugh so obvious, except I only looked for the help in between making edits, looking for a global thing rather than the (more useful most of the time) local thing. Thanks!
2prase12y
Why is that relevant? Real Ns weren't good rationalists after all. If the existence of Js really made them suffer (which it most probably didn't, under any reasonable definition of "suffer") but they realised that killing Js has negative utility, there were still plenty of superior solutions, e.g.: (1) relocating the Js after the war (they really didn't stand in the way), (2) giving all or most Js a new identity (you don't recognise a J without digging into birth certificates or something; destroying these records and creating strong incentives for the Js to be silent about their origin would work fine), (3) simply stopping the anti-J propaganda, which was the leading cause of the hatred while often being pursued for reasons unrelated to Js, mostly to foster citizens' loyalty to the party by creating an image of an evil enemy.

Of course Ns could have had beliefs, and probably a lot of them did, which somehow excluded these solutions from consideration and therefore justified what they actually did on utilitarian grounds. (Although probably only a minority of Ns were utilitarians.) But the original post wasn't pointing out that utilitarianism could fail horribly when combined with false beliefs and biases. It was rather about the repugnant consequences of scope sensitivity and unbounded utility, even when no false beliefs are involved.
0DanArmak12y
What definition is that?
1prase12y
That clause was meant to exclude the possibility of claiming suffering whenever one's preferences aren't satisfied. Since I wrote 'any reasonable', I didn't have one specific definition in mind.

If the Nazis have some built-in value that determines that they hate something utterly arbitrary, then why don't we exterminate them?

7shminux12y
It is certainly an option, but if there are enough Nazis, this is a low-utility "final solution" compared to the alternatives.
-1Dallas12y
In a void where there are just these particular Nazis and Jews, sure, but in most contexts, you'll have a variety of intelligences with varying utility functions, and those with pro-arbitrary-genocide values are dangerous to have around. Of course, there is the simple alternative of putting the Nazis in an enclosed environment where they believe that Jews don't exist. Hypotheticals have to be really strongly defined in order to avoid lateral thinking solutions.
3[anonymous]12y
I am pretty sure that certain kinds of societies and minds are possible that, while utterly benign and quite happy, would cause 21st-century humans to want to exterminate them and to suffer greatly as long as it was known they existed.
0[anonymous]12y
This may come as news, but all kinds of hating or loving something are utterly arbitrary.
-2DanArmak12y
Some kinds are a lot less arbitrary than others: for instance, being strongly influenced by evolution, rather than by complex contingent history.
3[anonymous]12y
You do realize that modern Western societies' rejection of plenty of kinds of hating or loving that are strongly influenced by evolution is due to their complex contingent history, no?
0DanArmak12y
Yes. And other kinds of hating or loving or hating-of-loving are influenced more by evolution, e.g. the appearance of covert liaisons and jealousy in societies where such covertness is possible. Or the unsurprising fact that humans generally love their children and are protective of them. I never said no kinds of loving or hating are arbitrary (or at least determined by complex contingent history). I do say that many kinds are not arbitrary. (My previous comment seems to be incomplete. Some example is missing after "for instance"; I probably intended to add one and forgot. This comment provides the example.)

That can be interpreted a couple of ways.

What if I place zero value (or negative value, which is probably what I really do, though what I wish I did was place zero value on it) on the kind of satisfaction or peace of mind the Nazis get from knowing the Jews are suffering?

0shminux12y
Interesting. I am not sure one can have a consistent version of utilitarianism where one unpacks the reasons for one's satisfaction and weighs them separately.
0[anonymous]8y

Relevant: Could Nazi Germany's seeding of the first modern anti-tobacco movement have resulted in an overall net gain in public utility to date?

0gjm8y
Is there any reason to think that the Nazis' anti-smoking campaign actually influenced later ones in Germany or elsewhere very much? (I think there are much stronger candidates for ways in which the Nazis produced good as well as harm -- e.g., scientific progress motivated by WW2. But there's a lot of harm to weigh against.)

I'm a bit confused by this torture vs. dust specks problem. Is there an additive function for qualia, so that they can be added up and compared? It would be interesting to look at the definition of such a function.

Edit: removed a bad example of qualia comparison.

2Incorrect12y
They aren't adding qualia; they are adding the utility they associate with qualia.
0Wrongnesslessness12y
It is not a trivial task to define a utility function that could compare such incomparable qualia. Wikipedia: Has it been shown that this is not the case for dust specks and torture?
5benelliott12y
In the real world, if you had lexicographic preferences you effectively wouldn't care about the bottom level at all. You would always reject a chance to optimise for it, instead chasing the tiniest epsilon chance of affecting the top level. Lexicographic preferences are sometimes useful in abstract mathematical contexts where they can clean up technicalities, but would be meaningless in the fuzzy, messy actual world where there's always a chance of affecting something.
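As a concrete illustration of this point, here is a minimal sketch (my own, not part of the original comment), assuming lotteries are ranked by expected top-level value first, with the bottom level only breaking exact ties. It shows that a certain, arbitrarily large gain on the bottom level loses to a vanishingly small chance of any gain on the top level:

```python
# Toy illustration of lexicographic preferences under uncertainty (a sketch,
# not from the comment above). Outcomes are scored as (top, bottom) pairs;
# lotteries are ranked by expected top level first, expected bottom level
# breaking exact ties only.

def lex_expected_value(lottery):
    """lottery: list of (probability, (top, bottom)) pairs."""
    e_top = sum(p * top for p, (top, _) in lottery)
    e_bottom = sum(p * bottom for p, (_, bottom) in lottery)
    return (e_top, e_bottom)  # Python compares tuples lexicographically

# A certain, enormous gain on the bottom level only:
safe = [(1.0, (0.0, 10**9))]

# A vanishingly small chance of any gain at all on the top level:
eps = 1e-12
gamble = [(eps, (1.0, 0.0)), (1.0 - eps, (0.0, 0.0))]

# The gamble wins no matter how large the bottom-level payoff of `safe` is.
print(lex_expected_value(gamble) > lex_expected_value(safe))  # True
```

This is the sense in which, under uncertainty, lexicographic preferences amount to not caring about the bottom level at all.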
0Wrongnesslessness12y
I've always thought the problem with the real world is that we cannot really optimize for anything in it, exactly because it is so messy and entangled. I seem to have lexicographic preferences for quite a lot of things that cannot be sold, bought, or exchanged. For example, I would always prefer having one true friend to any number of moderately intelligent ardent followers. And I would always prefer an FAI to any number of human-level friends. It is not a difference in some abstract "quantity of happiness" that produces such preferences; those are qualitatively different life experiences. Since I do not really know how to optimize for any of this, I'm not willing to reject human-level friends and even moderately intelligent ardent followers that come my way. But if I'm given a choice, it's quite clear what my choice will be.
0benelliott12y
I don't want to be rude, but your first example in particular looks like somewhere where it's beneficial to signal lexicographic preferences. What do you mean you don't know how to optimise for this? If you want an FAI then donating to SIAI almost certainly does more good than nothing (even if they aren't as effective as they could be, they almost certainly don't have zero effectiveness; if you think they have negative effectiveness then you should be persuading others not to donate). Any time spent acquiring/spending time with true friends would be better spent on earning money to donate (or encouraging others not to) if your preferences are truly lexicographic. This is what I mean when I say that in the real world, lexicographic preferences just cash out as not caring about the bottom at all.

You've also confused the issue by talking about personal preferences, which tend to be non-linear, rather than interpersonal ones. It may well be that the value of both ardent followers and true friends suffers diminishing returns as you get more of them, and probably tends towards an asymptote. The real question is not "do I prefer an FAI to any number of true friends" but "do I prefer a single true friend to any chance of an FAI, however small", in which case the answer, for me at least, seems to be no.
1TheOtherDave12y
I'm not sure how one could show such a thing in a way that can plausibly be applied to the Vast scale differences posited in the DSvT thought experiment. When I try to come up with real-world examples of lexicographic preferences, it's pretty clear to me that I'm rounding... that is, X is so much more important than Y that I can in effect neglect Y in any decision that involves a difference in X, no matter how much Y there is relative to X, for any values of X and Y worth considering. But if someone seriously invites me to consider ludicrous values of Y (e.g., 3^^^3 dust specks), that strategy is no longer useful.
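To make the "rounding" distinction concrete, here is a small sketch (my own, with purely illustrative numbers): if the major good X is merely weighted enormously more than the minor good Y, rather than lexicographically preferred, there is always some quantity of Y that flips the comparison; only a genuinely lexicographic rule has no crossover point.

```python
# Toy contrast between "rounding" and genuinely lexicographic preferences
# (a sketch; the weight ratio and quantities are made up for illustration).

def rounded_value(x_units, y_units, weight_ratio=10**12):
    # X is treated as vastly more important than Y, but not infinitely so.
    return x_units * weight_ratio + y_units

one_big_x = rounded_value(x_units=1, y_units=0)

for y in (10**6, 10**13):
    print(y, rounded_value(0, y) > one_big_x)
# 1000000        False -> Y is negligible at every everyday scale...
# 10000000000000 True  -> ...but a ludicrously large Y (think 3^^^3) still
#                         wins, which a truly lexicographic rule never allows.
```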
0Wrongnesslessness12y
I'm quite sure I'm not rounding when I prefer hearing a Wagner opera to hearing any number of folk dance tunes, and when I prefer reading a Vernor Vinge novel to hearing any number of Wagner operas. See also this comment for another example. It seems lexicographic preferences arise when one has a choice between qualitatively different experiences. In such cases, any differences in quantity, however vast, are just irrelevant. An experience of long unbearable torture cannot be quantified in terms of minor discomforts.
0TheOtherDave12y
It seems our introspective accounts of our mental processes are qualitatively different, then. I'm willing to take your word for it that your experience of long unbearable torture cannot be "quantified" in terms of minor discomforts. If you wish to argue that mine can't either, I'm willing to listen.

If the Nazis are unable to change their terminal values, then Good|Nazi differs substantially from what we mean when we say Good. Nazis might use the same word, or it might translate as "the same." It might even be similar along many dimensions. Good|Jew might be the same as Good (they don't seem substantially different from other humans), although this isn't required by the problem, but Good|Nazi ends up being something that I just don't care about in the case where we are talking about exterminating Jews.

There might be other conditions w... (read more)

0prase12y
You indeed needn't care about "good|Nazi", but the important question in this hypothetical is whether you care about "happy|Nazi" or "suffer|Nazi". I don't care much whether the outcome is considered good by someone else, even less so if that person is evil, but it could still bother me if the outcome causes that person to suffer.
2asparisi12y
I don't particularly want "suffer|Nazi" at least in and of itself. But it works out the same way. A mosquito might suffer from not drinking my blood. That doesn't mean I will just let it. A paperclip maximizer might be said to suffer from not getting to turn the planet into paperclips, if it were restrained. If the only way to end suffer|Nazi is to violate what's Good, then I am actually pretty okay with suffer|Nazi as an outcome. I'd still prefer ((happy|Nazi) & Good) to ((suffer|Nazi) & Good), but I see no problem with ((suffer|Nazi) & Good) winning out over ((happy|Nazi) & Bad). My preference for things with differing value systems not to suffer does not override my value system in and of itself.

You know... purposely violating Godwin's Law seems to have become an applause light around here, as if we want to demonstrate how super rational we are that we don't succumb to obvious fallacies like Nazi analogies.

1drethelin12y
Godwin's law: Not an actual law
3anonymous25912y
Or actually: a "law" in the sense of "predictable regularity", not "rule that one will be punished for violating". In which case the post exemplifies it, rather than violating it.

One idea that I have been toying with since I read Eliezer's various posts on the complexity of value is that the best moral system might not turn out to be about maximizing satisfaction of any and all preferences, regardless of what those preferences are. Rather, it would be about increasing the satisfaction of various complex, positive human values, such as "Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc." If this is the case then it may well... (read more)

0shminux12y
Of course, if you have the option of lying, the problem becomes trivial and uninteresting, regardless of your model of the Nazi psyche. It's when your choice requires improving the life of one group at the expense of another group's suffering that you tend to face a repugnant conclusion.
0Ghatanathoah12y
In the original framing of the thought experiment, the reason lying wasn't an option was that the Nazis didn't want to believe that all the Jews were dead; they wanted the Jews to really be dead. So if you lied to them you wouldn't really be improving their lives, because they wouldn't really be getting what they wanted. By contrast, if the Nazis simply feel intense emotional pain at the knowledge that Jews exist, and killing Jews is an instrumental goal towards preventing that pain, then lying is the best option.

You're right that that makes the problem trivial. The reason I addressed it at all was that my original thesis was "satisfying malicious preferences is not moral." I was afraid someone might challenge this by emphasizing the psychological pain and distress the Nazis might feel. However, if that is the case then the problem changes from "Is it good to kill people to satisfy a malicious preference?" to "Is it good to kill people to prevent psychological pain and distress?"

I still think that "malicious preferences are morally worthless" is a good possible solution to this problem, provided one has a sufficiently rigorous definition of "malicious."
1shminux12y
Maybe you misunderstand the concept of lying. They would really believe that all Jews are dead if successfully lied to, so their stress would decrease just as much as if they all were indeed dead.

This is more interesting. Here we go, the definitions:

Assumption: we assume that it is possible to separate the overall personal happiness level into components (factors), which could be additive, multiplicative (or separable in some other way). This does not seem overly restrictive.

Definition 1: A component of personal happiness resulting from others being unhappy is called "malicious".

Definition 2: A component of personal happiness resulting from others being happy is called "virtuous".

Definition 3: A component of personal happiness that is neither malicious nor virtuous is called "neutral".

Now your suggestion is that malicious components do not count toward global decision making at all. (Virtuous components possibly count more than neutral ones, though this could already be accounted for.) Thus we ignore any suffering inflicted on Nazis due to Jews existing/prospering. Does this sound right?
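For concreteness, here is a minimal sketch of the proposed aggregation rule (mine, not shminux's), assuming the additive version of the separability assumption above and using made-up numbers: each person's satisfaction is split into labeled components, malicious components are dropped from the social total, and virtuous ones may optionally be up-weighted.

```python
# Sketch of the "malicious components don't count" aggregation rule,
# under the additive decomposition assumed above; numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Component:
    value: float  # contribution to this person's satisfaction
    kind: str     # "malicious", "virtuous", or "neutral"

def social_utility(people, virtuous_weight=1.0):
    """people: one list of Components per person; malicious ones are ignored."""
    total = 0.0
    for components in people:
        for c in components:
            if c.kind == "malicious":
                continue  # does not count toward global decision making
            weight = virtuous_weight if c.kind == "virtuous" else 1.0
            total += weight * c.value
    return total

# Two scenarios with 1000 Nazis and one Jew (illustrative numbers only):
n = 1000
no_camps = [[Component(2.0, "neutral")]] * n + [[Component(0.0, "neutral")]]
camps = [[Component(2.0, "neutral"), Component(5.0, "malicious")]] * n \
        + [[Component(-50.0, "neutral")]]

# The malicious gain is dropped, so the camps never come out ahead,
# however large n is made.
print(social_utility(no_camps) > social_utility(camps))  # True
```

Under this rule, no number of Nazis makes the camps come out ahead, since the only thing they gain from them is the malicious component, which never enters the total.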
2Ghatanathoah12y
If this is the case then the Nazis do not really want to kill the Jews. What they really want to do is decrease their stress; killing Jews is just an instrumental goal to achieve that end. My understanding of the original thought experiment was that killing Jews was a terminal value for the Nazis, something they valued for its own sake regardless of whether it helped them achieve any other goals. In other words, even if you were able to modify the Nazi brains so they didn't feel stress at the knowledge that Jews existed, they would still desire to kill them.

Yes, that's exactly the point I was trying to make, although I prefer the term "personal satisfaction" rather than "personal happiness" to reflect the possibility that there are other values than happiness.

What's more important to you, your desire to prevent genocide or your desire for a simple consistent utility function?

0shminux12y
I thought it was clear in my post that I have no position on the issue. I was simply illustrating that a "consistent utility function" leads to a repugnant conclusion.
0Incorrect12y
Sorry, generic you.

It is taking some effort not to make a sarcastic retort to this. Please refrain from using such absurdly politically loaded examples in the future. It damages the discussion.