shminux comments on [SEQ RERUN] Torture vs. Dust Specks - Less Wrong

Post author: MinibearRex 11 October 2011 03:58AM 4 points

Comment author: shminux 11 October 2011 04:36:28PM 5 points

Color me irrational, but in the problem as stated (a dust speck is a minor inconvenience, with zero chance of other consequences, unlike what some commenters suggest), there is no number of specks large enough to outweigh lasting torture (which ought to be properly defined, of course).

After digging through my inner utilities, the reason for my "obvious" choice is that everyone goes through minor annoyances all the time, and another speck of dust would be lost in the noise.

In a world where a speck of dust in the eye is a BIG DEAL because life is otherwise so PERFECT, even one speck is noticed and not quickly forgotten; such occurrences can accumulate and be compared with torture. However, this was not specified in the original problem, so I assume that people live through calamities of dust-speck magnitude all the time, and adding one more changes nothing.

Comment author: Jack 12 October 2011 03:15:42AM 3 points

Eliezer's question for you is "would you give one penny to prevent the 3^^^3 dust specks?"

Comment author: jhuffman 11 October 2011 07:05:55PM 0 points

I think the purpose of this article is to point to some intuitive failures of a simple linear utility function. In other words, probably everyone who reads it agrees with you. The real challenge is in creating a utility function that wouldn't output the wrong answer on corner cases like this.

Comment author: Jack 11 October 2011 09:23:05PM 7 points

No. No, that is not the purpose of the article.

Comment author: jhuffman 12 October 2011 01:52:01PM 0 points

Sorry, I've read that and still don't know what it is that I've got wrong. Does this article not indicate a problem with simple linear utility functions, or is that not its purpose?

Comment author: MinibearRex 11 October 2011 09:31:22PM 2 points

Comment author: shminux 11 October 2011 09:50:36PM * 1 point

His point of view is:

While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant.

whereas I and many others appeal to zero-aggregation, which indeed reduces any finite number (and hence the limit when this aggregation is taken to infinity) to zero.

The distinction is not that of rationality vs irrationality (e.g. scope insensitivity), but of the problem setup.
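A minimal sketch of the two aggregation rules being contrasted here. Every constant is an invented stand-in (the thread specifies no numbers), and 3^^^3 itself is far too large to represent, so a merely huge float plays its part:

```python
# Linear aggregation vs. the "zero-aggregation" described above.

SPECK_HARM = 1e-9     # assumed disutility of one dust speck
NOISE_FLOOR = 1e-6    # assumed "lost in the noise" threshold
PEOPLE = 1e100        # stand-in for 3^^^3

def linear(harm, n):
    """Total harm is harm * n: any nonzero harm eventually outweighs torture."""
    return harm * n

def zero_aggregation(harm, n):
    """Harms below the noise floor count as zero, so no n can recover them."""
    return harm * n if harm >= NOISE_FLOOR else 0.0

print(linear(SPECK_HARM, PEOPLE))            # 1e+91 -- dwarfs any torture term
print(zero_aggregation(SPECK_HARM, PEOPLE))  # 0.0, for any head count whatsoever
```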

Comment author: MinibearRex 12 October 2011 03:42:11AM 1 point

If you can explain zero aggregation in more detail, or point me to a reference, that would be appreciated, since I haven't seen any full discussion of it.

Comment author: ArisKatsaris 11 October 2011 08:09:54PM 0 points

The wrong answer is the one given by the people who prefer the specks, because it's the answer which, if a trillion people gave it, would condemn whole universes to blindness (instead of a mere trillion beings to torture).

Comment author: see 11 October 2011 11:51:19PM 3 points

Only if you assume that the dust speck decisions must be made in utter ignorance of the (trillion-1) other decisions. If the ignorance is less than utter, a nonlinear utility function that accepts the one dust speck will stop making the decision in favor of dust specks before universes go blind.

For example, since I know how Texas will vote for President next year (it will give its Electoral College votes to the Republican), I can instead use my vote to signal which minor-party candidate strikes me as the most attractive, thus promoting his party relative to the others, without having to worry whether my vote will elect him or cost my preferred candidate the election. Obviously, if everyone else in Texas did the same, some minor party candidate would win, but that doesn't matter, because it isn't going to happen.
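A toy model of the first paragraph's claim: if agents decide one after another and each can see how many specks the same victims have already received, a convex per-person harm curve makes some early agent's marginal speck rise above the noise floor, and the chain switches away from specks long before blindness. All constants are invented:

```python
NOISE_FLOOR = 1e-6     # harms below this are dismissed as noise
BLINDNESS_AT = 10**6   # hypothetical speck count that causes blindness

def marginal_harm(k):
    """Per-person harm of the k-th speck: negligible at first, convex in k."""
    return 1e-9 * k * k

k = 1
while marginal_harm(k) < NOISE_FLOOR:
    k += 1                                 # agents 1 .. k-1 all chose specks
print(f"Agent {k} is the first to refuse the specks.")   # k == 32 here
assert k < BLINDNESS_AT                    # the chain stops long before blindness
```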

Comment author: Jack 11 October 2011 09:29:07PM 6 points

Adding multiple dust specks to the same people definitely removes the linear character of the dust speck harm -- if you take the number of dust specks necessary to make someone blind and spread them out over a lot more people, you drastically reduce the total harm. So that is not an appropriate way of reformulating the question. You are correct that the specks are the "wrong answer" as far as the author is concerned.
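The asserted non-linearity can be made concrete with an invented quadratic harm curve: concentrating a blindness-inducing number of specks on one person harms vastly more than spreading the same specks over as many people:

```python
def person_harm(k):
    """Total harm to one person from k specks; superlinear (here quadratic):
    each additional speck in the same eye hurts more than the last."""
    return 1e-9 * k * k

BLINDNESS_SPECKS = 10**6                     # hypothetical count causing blindness

concentrated = person_harm(BLINDNESS_SPECKS)       # all specks in one eye
spread = BLINDNESS_SPECKS * person_harm(1)         # one speck each, many people
print(concentrated / spread)                 # ~1e6: concentration is far worse
```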

Comment author: ArisKatsaris 11 October 2011 10:06:26PM 0 points

Did the people choosing "specks" ask whether the persons in question might already have suffered other dust specks (or sneezes, hiccups, stubbed toes, etc.) immediately beforehand, inflicted by other agents deciding as they did, when they chose "specks"?

Comment author: Jack 11 October 2011 10:25:59PM * 3 points

Most people didn't, I suppose -- they were asked:

Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

Which isn't the same as asking what people would do if they were given the power to choose one or the other. And even if people were asked the latter, it is plausible that they would not assume the existence of a trillion other agents making the same decision over the same set of people. That's a rather non-obvious addition to a thought experiment which is already foreign to everyday experience.

In any case it's just not the point of the thought experiment. Take the least convenient possible world: do you still choose torture if you know for sure there are no other agents choosing as you are over the same set of people?

Comment author: ArisKatsaris 11 October 2011 10:37:04PM 0 points

do you still choose torture if you know for sure there are no other agents choosing as you are over the same set of people?

Yes. The consideration of what the world would look like if everyone chose the same as me is a useful intuition pump, but it just illustrates the ethics of the situation; it doesn't truly modify them.

Any choice isn't really just about that particular choice; it's about the mechanism you use to arrive at it. If people believe that it doesn't matter how many people they each inflict tiny disutilities on, the world ends up worse off.

Comment author: Jack 11 October 2011 10:48:33PM * 11 points

The point of the article is to illustrate scope insensitivity in the human utility function. Turning the problem into a collective action problem or an acausal decision theory problem by adding additional details to the hypothetical is not a useful intuition pump since it changes the entire character of the question.

For example, consider the following choice: You can give a gram of chocolate to 3^^^3 children who have never had chocolate before. Or you can torture someone for 50 years.

Easy. Everyone should have the same answer.

But wait! You forgot to consider that trillions of other people were being given the same choice! Now 3^^^3 children have diabetes.

This is exactly what you're doing with your intuition pump, except that the value of eating additional chocolate inverts at a certain point, whereas dust specks in your eye get exponentially worse at a certain point. In both cases the utility function is not linear, and thus the reformulation distorts the problem.
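The two curve shapes being contrasted can be sketched directly; the functional forms and every constant below are illustrative only:

```python
def chocolate_marginal(g):
    """Utility of the g-th gram: positive at first, then inverts
    (too much chocolate becomes a harm -- the diabetes case above)."""
    return 1.0 - 0.01 * g                    # crosses zero at g == 100

def speck_marginal(k):
    """Disutility of the k-th speck: noise-level at first, then sharply
    worse as specks pile up in the same eye; it never inverts back."""
    return -1e-9 * (1.1 ** k)

print(chocolate_marginal(50), chocolate_marginal(200))   # 0.5, -1.0
print(speck_marginal(1), speck_marginal(500))            # tiny, enormous (negative)
```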

Comment author: ArisKatsaris 11 October 2011 08:07:20PM * -1 points

And tell me, in a universe where a trillion agents individually decide that adding a speck of dust to the lives of 3^^^3 people is, in your words, "NOT A BIG DEAL", and the end result is that you personally end up with a trillion specks of dust (each of them individually NOT A BIG DEAL), which leave you (and entire multiverses of beings) effectively blind -- are they collectively still not a big deal then?

If it will be a big deal in such a scenario, then can you tell me which of the above trillion agents should have preferred to go with torturing a single person instead, and how they could modify their decision theory to serve that purpose, if they individually must choose the specks but collectively must choose the torture (lest they leave entire multiverses and omniverses entirely blind)?
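One hedged answer to that question, not given in the thread: agents could score each option as a collective policy -- roughly the everyone-like-me-chooses-alike reasoning that the "acausal decision theory" remark elsewhere in this thread gestures at. A sketch with invented constants:

```python
def policy_choice(n_agents, speck_harm, torture_harm,
                  blindness_harm, specks_to_blindness):
    """Return the option with the smaller *collective* harm."""
    if n_agents >= specks_to_blindness:      # specks pile up past blindness
        specks_total = blindness_harm
    else:
        specks_total = n_agents * speck_harm
    torture_total = n_agents * torture_harm  # one fresh victim per agent
    return "specks" if specks_total < torture_total else "torture"

# A lone agent picks specks; a trillion coordinated agents pick torture.
print(policy_choice(1, 1e-9, 1e12, 1e30, 10**6))        # 'specks'
print(policy_choice(10**12, 1e-9, 1e12, 1e30, 10**6))   # 'torture'
```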

Comment author: Jack 11 October 2011 09:35:43PM 7 points

If you have reason to suspect a trillion people are making the same decision over the same set of people the calculation changes since dust specks in the same eye do not scale linearly.

Comment author: shminux 11 October 2011 08:15:33PM * 5 points

which leave you (and entire multiverses of beings) effectively blind

I stipulated "noticed and not quickly forgotten" would be my condition for considering the other choice. Certainly being buried under a mountain of sand would qualify as noticeable by the unfortunate recipient.

Comment author: ArisKatsaris 11 October 2011 08:30:24PM 0 points

But each individual dust speck wouldn't be noticeable, and that's all each individual agent decides to add -- an individual dust speck to the life of each such victim.

So, again, what decision theory can somehow dismiss the individual effect as you would have it do, and yet take into account the collective effect?

Comment author: shminux 11 October 2011 09:18:36PM * 0 points

My personal decision theory has no problems dismissing noise-level influences, because they do not matter.

You keep trying to replace the original problem with your own: "how many sand specks constitute a heap?" This is not at issue here, as no heap is ever formed for any single one of the 3^^^3 people.

Comment author: ArisKatsaris 11 October 2011 09:25:31PM * 0 points

no heap is ever formed for anyone of the 3^^^3 people.

That's not one of the guarantees you're given -- that a trillion other agents won't be given similar choices. You're not given the guarantee that your dilemma between a minute disutility for astronomical numbers and a single huge disutility will be the only such dilemma anyone will ever have in the history of the universe, and you don't have the guarantee that the decisions of a trillion different agents won't pile up.

Comment author: shminux 11 October 2011 09:37:10PM * 2 points

Well, it looks like we found the root of our disagreement: I take the original problem literally, one blink and THAT'S IT, while you say "you don't have the guarantee that the decisions of a trillion different agents won't pile up".

My version has an obvious solution (no torture), while yours has to be analyzed in detail for every possible potential pile-up, and the impact has to be carefully calculated based on its probability, the number of people involved, and any other conceivable and inconceivable (i.e. at the probability level of 1/3^^^3) factors.

Until and unless there is compelling evidence of an inevitable pile-up, I pick the no-torture solution. Feel free to prove that the pile-up happens in a large chunk (>50%?) of all the impossible possible worlds, and I will be happy to reevaluate my answer.
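The calculation described above reduces to a probability-weighted comparison; a toy version, with every number invented:

```python
# Zero-aggregation scores the lone-speck world at zero harm, so only the
# pile-up branch contributes to the expectation.

def expected_speck_harm(p_pileup, pileup_harm, lone_harm=0.0):
    """Expected harm of choosing specks, given a possible pile-up."""
    return p_pileup * pileup_harm + (1.0 - p_pileup) * lone_harm

TORTURE_HARM = 1e12                          # stand-in for fifty years of torture

print(expected_speck_harm(1e-30, 1e30) < TORTURE_HARM)   # True: keep "no torture"
print(expected_speck_harm(0.5, 1e30) < TORTURE_HARM)     # False: reevaluate
```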