I am not wholly unsympathetic to the many commenters in Torture vs. Dust Specks who argued that it is preferable to inflict dust specks upon the eyes of 3^^^3 (an amazingly huge but finite number of) people, rather than torture one person for 50 years.  If you think that a dust speck is simply of no account unless it has other side effects - if you literally do not prefer zero dust specks to one dust speck - then your position is consistent.  (Though I suspect that many speckers would have expressed a preference if they hadn't known about the dilemma's sting.)

So I'm on board with the commenters who chose TORTURE, and I can understand the commenters who chose SPECKS.

But some of you said the question was meaningless; or that all morality was arbitrary and subjective; or that you needed more information before you could decide; or you talked about some other confusing aspect of the problem; and then you didn't go on to state a preference.

Sorry.  I can't back you on that one.

If you actually answer the dilemma, then no matter which option you choose, you're giving something up.  If you say SPECKS, you're giving up your claim on a certain kind of utilitarianism; you may worry that you're not being rational enough, or that others will accuse you of failing to comprehend large numbers.  If you say TORTURE, you're accepting an outcome that has torture in it.

I falsifiably predict that of the commenters who dodged, most of them saw some specific answer - either TORTURE or SPECKS - that they flinched away from giving.  Maybe for just a fraction of a second before the question-confusing operation took over, but I predict the flinch was there.  (To be specific:  I'm not predicting that you knew, and selected, and have in mind right now, some particular answer you're deliberately not giving.  I'm predicting that your thinking trended toward a particular uncomfortable answer, for at least one fraction of a second before you started finding reasons to question the dilemma itself.)

In "bioethics" debates, you very often see experts on bioethics discussing what they see as the pros and cons of, say, stem-cell research; and then, at the conclusion of their talk, they gravely declare that more debate is urgently needed, with participation from all stakeholders.  If you actually come to a conclusion, if you actually argue for banning stem cells, then people with relatives dying of Parkinson's will scream at you.  If you come to a conclusion and actually endorse stem cells, religious fundamentalists will scream at you.  But who can argue with a call to debate?

Uncomfortable with the way the evidence is trending on Darwinism versus creationism?  Consider the issue soberly, and decide that you need more evidence; you want paleontologists to dig up another billion fossils before you come to a conclusion.  That way you neither say something sacrilegious, nor relinquish your self-image as a rationalist.  Keep on doing this with all issues that look like they might be trending in an uncomfortable direction, and you can maintain a whole religion in your mind.

Real life is often confusing, and we have to choose anyway, because refusing to choose is also a choice.  The null plan is still a plan.  We always do something, even if it's nothing.  As Russell and Norvig put it, "Refusing to choose is like refusing to allow time to pass."

Ducking uncomfortable choices is a dangerous habit of mind.  There are certain times when it's wise to suspend judgment (for an hour, not a year).  Facing a dilemma all of whose answers seem uncomfortable is not one of those times!  Pick one of the uncomfortable answers as the best of an unsatisfactory lot.  If there's missing information, fill in the blanks with plausible assumptions or probability distributions.  Whatever it takes to overcome the basic flinch away from discomfort.  Then you can search for an escape route.

Until you pick one interim best guess, the discomfort will consume your attention, distract you from the search, tempt you to confuse the issue whenever your analysis seems to trend in a particular direction.

In real life, when people flinch away from uncomfortable choices, they often hurt others as well as themselves.  Refusing to choose is often one of the worst choices you can make.  Motivated continuation is not a habit of thought anyone can afford, egoist or altruist.  The cost of comfort is too high.  It's important to acquire that habit of gritting your teeth and choosing - just as important as looking for escape routes afterward.


I'm pretty sure I wasn't doing that. I.e., I did, given certain assumptions, commit to SPECKS in my reply.

For the record, my current view is that if the choice is between torture and a single speck event total per person for bignum people, I'd go with SPECKS.

I do not consider the situation to be linear, however. I.e., two dust specks for one person is not precisely twice as bad as a single dust speck for one person, nor is that exactly as bad as two people each experiencing a single dust speck. In fact, I suspect it would be reasonable to consider that a single dust speck per person has finite total disutility even in the limiting case of infinitely many people.

If the situation instead is "torture vs an additional dust speck per person for bignum people" then I'd want to know how many dust specks per person were already allocated, and as that number increased from 0, I'd probably lean a bit more toward TORTURE. But, of course, I know there'd have to be some value after which it'd really make no difference to add an additional dust speck or not, so back to SPECKS.

If I couldn't obtain that information, then I'd at least want to know how many others are going to be asked this. I.e., is this isolated, or are there going to be some number of people "tested" like this, such that if all answered SPECKS, the result would be effectively worse than the TORTURE option? In that case, if I knew how many would be asked, how many SPECKS answers it would take, and some statistical properties of their utility functions and so on, then effectively I'd choose randomly, setting the probability of each choice such that the expected utility of the outcome, under the assumption that everyone used that heuristic, would be maximized. (This is assuming direct communication between all the askees isn't an option and so on; if it is, then that random heuristic wouldn't be needed.)
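(For concreteness, here is a minimal sketch of that randomized heuristic under toy assumptions of my own, not from the original scenario: a fixed number of askees, each TORTURE answer producing one torture, and the specks harm landing only if every askee answers SPECKS. The numbers are made up; the point is only the shape of the calculation.)

```python
# Toy sketch: pick the SPECKS-probability p that maximizes expected utility,
# assuming every askee independently uses the same p.
from math import comb

def expected_utility(p, n_askees, u_torture_each, u_specks_total):
    # u_torture_each: (negative) utility of one TORTURE answer being carried out
    # u_specks_total: (negative) utility if the specks outcome happens, which in
    # this toy model requires *all* askees to answer SPECKS
    total = 0.0
    for k in range(n_askees + 1):                      # k = number of SPECKS answers
        prob_k = comb(n_askees, k) * p**k * (1 - p)**(n_askees - k)
        outcome = (n_askees - k) * u_torture_each      # tortures from TORTURE answers
        if k == n_askees:
            outcome += u_specks_total                  # everyone said SPECKS
        total += prob_k * outcome
    return total

# Grid search over p with made-up utility numbers.
best_p = max((i / 1000 for i in range(1001)),
             key=lambda p: expected_utility(p, n_askees=10,
                                            u_torture_each=-1.0,
                                            u_specks_total=-100.0))
print(best_p)
```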

If even that option were disallowed, well, I'd have to estimate based on whatever distribution of possibilities for each of those things represented my current (at the time) state of knowledge.

THIS is the point at which I get a bit stumped. If we say, though, "you have to make a decision, make it right now, even if it isn't that great," I'm still going to go with SPECKS, though, admittedly, with far less confidence that it's correct than what I said above.

Of course, now that I have a fallback last choice given no further knowledge/ability to consider, doing something about the whole situation that set up this issue would be something to investigate heavily. Also, I'd want to be developing a better model of exactly how to measure the amount of effective suffering per "unit" of suffering. I suspect it'd be some function of that plus how much it interferes with/overflows other possible states, etc etc etc.

As far as your overall point about people avoiding the decision goes: while it may be wise to avoid the habit of hiding from any uncomfortable decision, this is a bit different. I really can't see it as entirely unreasonable to ask for a bit more information about an edge case that was constructed to prod at our normal decision-making methods, that was asked as a hypothetical thought experiment, AND that is the type of situation I'd consider incredibly, insanely, mind-explodingly unlikely to pop up in Real Life(tm) any time soon.

(chuckles on a meta level though, I just noticed that I seem to have chosen all possible options: committed to a specific choice, blabbered about confusing aspects, asked for more information, and attempted to justify not committing to a specific choice. There must be some sort of prize for this. :D)

Eliezer, Thomas Scanlon discusses this issue in the 'Aggregation' section of Chapter 5 of his What We Owe To Each Other. Philosophers have been on it for a while.

I deny that I have any obligation to choose now based on the available information.

What happens if I choose torture and somebody gets tortured for 49 years and then dies of natural causes? Do we get all the dust specks too? Have we gotten rid of 98% of them?

Can I ask for a volunteer from among 3^^^3 people, or do I have to take pot luck?

Do I have to choose right now? Do all 3^^^3 people get their dust specks the moment I decline to choose, or do they get them after the 50 years are up? If the former, what happens if I agree to the torture and then change my mind right after the dust specks didn't happen?

How do I know any of this is true? What if I get somebody tortured for 50 years and then the dust specks happen anyway? What if I do it and it turns out there aren't 3^^^3 people in the universe and it was all for nothing? Why should I take somebody's word about this?

Maybe we could start small. I could volunteer to be tortured for 50 years * 7 billion / 3^^^3 and stop the dust specks for everybody in this one world. I'd volunteer for that in a New York second.

I'm continually faced with choices for myself where the background is quite unclear. Take the contract, and maybe things go bad and it hurts my professional reputation. Wait for a better one and the money is late. Etc. And I make choices where the results won't show up in my lifetime. Throw mercury batteries in the trash or keep them around and wait for a chance to dispose of them properly. Drive my car to the store or wait a day and combine it with several other trips. Beyond the inconvenience balanced against the money, extra gas burned will have a small effect on billions of people over the next four or so generations. Maybe more than an eyeblink. I don't know how to quantify those effects and I don't spend a lot of thought on them. The immediate effects are easier to find out about, so I put most of my thought into those.

Show me the 3^^^3 people and I'll give them due consideration. Until then it's a thought experiment and I'll enjoy some time thinking about it.

Crocker's rules -- feel free to use me as an example if you think I gave a non-answer.

(I argued that the aggregation of a sufficient number of specks inflicted on a single person is equivalent to some significant length torture anyway.)

But there's a difference between refusing to choose in a situation that by its nature is necessarily hypothetical and trying not to choose in a real situation.

Until you pick one interim best guess, the discomfort will consume your attention, distract you from the search, tempt you to confuse the issue whenever your analysis seems to trend in a particular direction.

Oh no. Eliezer, I have disagreed with you at times, but you have not actually disappointed me until this moment. As an avid reader of yours, I beseech you, please think through this again.

You simply have not presented a moral dilemma. You've presented a pantomime; shadows on a wall; an illusion of a dilemma. If there's any dilemma here at all, it was whether I should play pretend-philosopher by giving an eloquent and vacuous response or else take philosophy and morals seriously by suggesting that your question is not yet ready to be answered. I chose the latter, partly because I also have been taking seriously your other writings-- the ones where you chide people for substituting wishful thinking for self-critical sober rational analysis. I'm attracted to the mind of a man who tries to live by a difficult and worthy principle, because that's what I do, too; and what I am doing.

Real moral dilemmas have context, and the secret to solving them always involves that context. We frequently find them in literature, richly expressed. Instead, you are just asking us to play a game with unspecified rules and goals. You toss off a scenario in a few sentences. How is that interesting? I guess it's a bit interesting to see how some people commenting have made bold assumptions and foisted unspoken premises on your example. It's a window onto their biases, maybe. Is that really enough to satisfy you?

I could understand if you don't want to make the effort to create a fully realized philosophical problem for us to work through (putting together those problems is a challenge). But geez, I'm surprised you would criticize me for doing what a philosopher is supposed to do: study the situation to understand the question better, rather than make a definite answer to a question I don't understand.

Oh no. Eliezer, I have disagreed with you at times, but you have not actually disappointed me until this moment.

You should find that of all the people you know, none of them seem beyond criticism; they will always fail to live up to your ideal of perfection. That's because there's only one person whose job it is to live up to that ideal.

Not trying to evade your substantive criticism, just a side note.

But geez, I'm surprised you would criticize me for doing what a philosopher is supposed to do: study the situation to understand the question better, rather than make a definite answer to a question I don't understand.

I never thought of myself as a philosopher. I just set out to debug the universe. I often have to do so using incomplete information. My motor actions do not have the luxury of vagueness, however I caveat my "answers". If you think my philosophical dilemmas are vague, you should see the problem descriptions Nature hands me.

The people who filled in their own assumptions and stated a preference were acting courageously; they exposed themselves to criticism for the conditional, if not for the assumptions.

I guess if you really feel the question is so confused as to be answerless, I'll accept that. I would still challenge you to fill in plausible assumptions and state a preference.

The philosophy of refusing to come to a conclusion is called skepticism. The word skeptic comes from the Greek for "to examine." While I understand the need to make decisions, I'm not so sure that it should trump the desire to not accept answers (keep looking). As has been pointed out in earlier posts, once a decision is made it often is hard to dislodge. For example, many people today accept neo-Darwinism as an answer to evolution. Yet the evidence from biology would indicate that neo-Darwinism is either false or incomplete. (Try dislodging that one.) So while I agree that one often has to make decisions quickly based on incomplete and conflicting evidence, I don't think the question you posed in 'torture vs. dust specks' was framed in such a way as to demand that type of decision.

By the way, someone who has made up their mind about religion or the existence of para-psychological phenomena is not a skeptic in the historical meaning of the word.

Yet the evidence from biology would indicate that neo-Darwinism is either false or incomplete.

Of course it's incomplete. No neodarwinist would have claimed it was complete.

Now that we know so much more than the neodarwinists did it's mostly of historical interest. But what we have now is still quite incomplete, and it will stay that way for the foreseeable future.

"I guess if you really feel the question is so confused as to be answerless, I'll accept that. I would still challenge you to fill in plausible assumptions and state a preference."

Remember the story "The Lady, or the Tiger?"  The question was carefully formulated to be evenly balanced, to eliminate any reason to choose one over the other. Anything that got used to say one choice was better implied that the story wasn't balanced quite right.

We could do that with your story too. If 3^^^3 people is enough to say it's better to torture one person, we could replace it with a smaller number, perhaps a googolplex. And if that's still too many we could try just a googol. If people choose the specks we could increase the number of people, or maybe increase the number of specks.

At some point we get just the right number of specks to balance the torture for a modal number of people, and we're set. The maximum number of people will be unable to choose, because you designed it that way.

You did not say what happens if I don't choose. This is a glaring omission.

OK, let me tell one. You and your whole family have been captured by the Gestapo, and before they get down to the serious torture they decide to have some fun with you. They tell you that you have to choose, either they rape your daughter or your wife. If you don't choose which one then they'll rape them both. And you too.

Do you choose? If you refuse to choose then that's choosing for both of them to be raped. And you too.

But then, if you do choose, they rape them both anyway. And you too.

What should you do?

In the end, the crime is committed not by the person who has to choose between two presented evils, but by the person who sets up the choice. Choose the lesser of the evils, preferably with math, and then don't feel responsible.

[This comment is no longer endorsed by its author]

Okay, I'll take a position: a moral dilemma involving impossibly huge numbers, perfect certainty, no externalities, and no context is no more deserving of a clear-cut answer than the question of how I would explain waking up with a blue tentacle.

When people can't explain themselves they often make up answers.

"What the hell? A blue tentacle?"

"I must have gotten it from a toilet seat."

J Thomas, if you can't see a better option, you tell them to rape your wife. Duh.

(No, I'm not a sociopath, I've just trained myself not to whine about my options, just pick the obviously best of a bad lot quickly, and keep looking for an escape route. The scenario is legitimate, people in real life have faced worse.)

Nick Tarleton, the problem with explaining waking up with a blue tentacle is that it's so low-probability as to destroy the worldview you would use to explain it; by Bayes, you shouldn't be able to explain it post facto unless you anticipate it to some measurable degree ante-facto. But a blue tentacle doesn't destroy your utility function, so asking "What would you do if you woke up with a blue tentacle?" is a perfectly legitimate dilemma.

When I read Eliezer's original post, my moral intuition crashed. I was confused, and suspected something was wrong with either the question, or with me.

Are you really suggesting that choosing to not commit to an answer immediately but to instead think about it and explore the scenario for a while was the wrong answer? If the scenario were instead "choose TORTURE or SPECKS within the next N seconds or get one at random," and was real, not a thought experiment, then see Eliezer's point: inaction is an action.

I say all morality is meaningless/arbitrary AND I choose torture. How do I stand with you? If I had said I would flip a coin, would that be satisfactory?

J Thomas- I'm not sure what your expertise is- and this question is a little off-post, but important to me and my personal biases- would you say the evidence today seems to indicate that the 'watchmaker' isn't blind? (maybe myopic...)

Eliezer: I don't think you read J Thomas carefully. He was saying, as far as I can tell from the last three sentences of his post, that the scenario itself strongly implies that you don't actually have the choice that it is asserted that you do have. As a hypothetical it fails. A person being tortured by the Gestapo is making a mistake to seriously consider the possibility that a supposed "choice" he is offered is anything but mockery and a part of his torture. Any person is making a mistake to seriously consider the possibility that his actions have any predictable impact on 3^^^3 other people, because the chance of him being the one of those 3^^^3 people who was in the special position where he could affect the others, rather than one of the others who could only be affected, is simply too low.

"what would you do if your worldview had just been destroyed" is not, it seems to me, a legitimate question. The loss of your worldview implies the loss of any rational basis for inferring the consequences of your actions. It seems to me that you can ask, as a question about "you" the physical system "what would you do if you irrationally believed X", but not, as a question about rationality, "what would it be rational for you to do if you irrationally believed X"?

Incidentally, upon consideration of what the math would actually look like for the type of utility function that I'd currently consider reasonable, I decided that, given a fixed population, disutility would be basically linear in the number of people experiencing dust speck events (the other nonlinearities about one person experiencing a bunch of events would hold, though), so I am shifting my answer, tentatively, to TORTURE. (Just sticking this comment in this thread since I also made the other claim in this thread.)

Tuning one's preference function is a constrained optimization problem. What I want is a preference function simple enough for my very finite brain to be able to compute it in real time, and that does a good job (whatever exactly that means) on-some-kind-of-average over some plausible probability distribution of scenarios it's actually going to have to deal with.

Choosing between torturing one person for 50 years and giving 3^^^3 people minimally-disturbing dust specks is a long, long way outside the range of scenarios that have non-negligible probability of actually coming up. It's a long, long way outside the range of scenarios that my decision-theoretic intuition has been tuned on by a few million years of evolution and a few decades of experience.

My preference function returns values with (something a bit like) error bars on them. In this case, the error bars are much larger than the values: there's much more noise than signal. That's a defect, no doubt about it: a perfect preference function would never do that. A perfect preference function is probably also unattainable, given the limitations of my brain.

What possible reason is there for supposing that my preference function would be improved, for the actual problems it actually gets used for, by nailing down its behaviour far outside the useful range?

If there were good reason to think that decision theory is like (a Platonist's view of) logic, with a Right Answer to every question and no limits to its validity, then there would be reason to expect that nailing down my preference function's values out in la-la-land would be useful. But is there? Not that I know of. Decision theory is an abstraction of actual human preferences. Applying it to problems like Eliezer's might be like extrapolating quantum mechanics down to a scale of 10^-(10^100) m.

Douglas, my own bias is to think that evolution has given us 3 billion+ years of selection for evolving faster. And multicellular organisms (a small minority of the total but interesting to us) have found ways to make genetic "modules" that result in phenotypes which fit together in a modular way. Chordates build a variety of structures from keratin. Arthropods build a big variety of limbs. Etc. The body plans that are most flexible speciate into the largest variety of niches -- nematodes, arthropods, mollusks, and chordates, and you have the big majority of animal species in just those 4 phyla.

We don't make just random changes, we have "hotspots" that change a lot while others are mostly held fixed. A big variety of mechanisms evolved that encourage faster evolution, because those mechanisms are themselves selected.

Apologies for the off-topic note.

Michael Vassar, yes! Thank you for putting it so clearly.

J Thomas-- What you say fits well with the neo-Darwinian model of evolution. One example you might be interested in that clearly does not is the tuberculosis bacterium. Google 'tuberculosis strain w' for more info. It turns out this sort of thing happens more than was previously thought (of course it wasn't thought to happen at all until fairly recently). This is a case of motivated continuation on my part- the old model predicted a cure that turned out to be a recipe for making an incurable disease-- uh, I want to understand better.

Douglas, I see nothing about strain w that's surprising. Would you like to suggest a blog and a thread to discuss this?

It's kind of past the point where this is really relevant, but I was interested to notice that lots of commenters launched into discussions of potential knock-on consequences of real-world speckification, but not a single person queried the extended cost of a real-world 50-year torture option (infrastructure, training, torturer trauma, wear and tear on electrodes, etc.). Of course, as with any thought experiment, dragging in any externalities at all was/is invalid: the experiment sets the parameters, and any speculation outside of these is irrelevant. But insofar as that was being done, I thought it curious that these types of speculations all went one way.

If pressed, I'd hypothesize that this was because some people who saw that the 'specks' option was obviously the right choice were left feeling that there was a further trick of some kind: surely the obvious wrong answer, torture, must be right - or why would the thought experiment have been posed at all?

Personally, I'm a specks guy and I feel deeply suspicious of the torturers' reasoning: I suspect it of being dependent on a fallacious calculation of harm. But I think the thought experiment is of very limited value, as it does not really mirror any real-world scenario that I can see.

J Thomas-- try www.wasdarwinwrong.com. Best place for the info, because it presents the problems without demanding any particular solution. outeast-- good point about the speculations, but thought experiments can be off-the-wall and still be of value, because they are designed to help see the world in a different or new way. Sometimes the off-the-wall ones are best for that reason, IMO.

I should begin by saying that I caught myself writing my conclusion as the first sentence of this post, and then doing the math. I'm doing the calculations entirely in terms of the victim's time, which is quantifiable.

Dust specks would take up a much smaller portion of the victims' lives (say, a generous 9 seconds of blinking out of 2483583120 seconds of life expectancy (78.7 years) per person), whereas torture would take up a whole fifty years of a single person's life.
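(A quick check of those figures, taking the 78.7-year life expectancy as given; the arithmetic below is mine, not the commenter's.)

```python
# Sanity-check of the comment's numbers; the 78.7-year figure is taken from
# the comment above, not something this sketch establishes.
years = 78.7
life_seconds = years * 365.25 * 24 * 3600   # = 2,483,583,120.0, matching the figure quoted
speck_fraction = 9 / life_seconds           # ~3.6e-9 of a lifetime lost to blinking
torture_fraction = 50 / years               # ~0.64 of one person's lifetime spent in torture
print(life_seconds, speck_fraction, torture_fraction)
```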

All of my math came crashing down when I realized that 3^^^3 is a bigger number than my brain can really handle. Scope insensitivity makes me want to choose the dust.

Would anyone really care about the dust, though? I mean, 9/2483583120 is a fairly small number, all things considered.

The law of large numbers says yes. If there is an infinitesimal chance of someone, say, getting into a lethal car accident because of a dust speck in their eye, then it will happen a whole bunch of times and people will die. If the dust could cause an infection and blind someone, it will happen a whole bunch of times. That would be worse than one person's torture.

But if the conditions are such that none of that will happen to the people--they are brought into a controlled environment at a convenient time and given sterile dust specks (if you are capable of putting dust in so many people's eyes at will, then you are probably powerful enough to do anything)--then no individual person would really care about it. A dust speck simply doesn't hurt as badly as torture. Every single person would just forget about it.

So, if you mean "a dust speck's worth of discomfort", then I choose the dust. If you mean dust specks in people's eyes, then I choose the torture.

"World Development Indicators | Data." Data | The World Bank. The World Bank Group, 2011. Web. 23 Aug. 2011. http://data.worldbank.org/data-catalog/world-development-indicators?cid=GPD_WDI.

I think, based on everyone's level of discomfort with this problem, that if there were an experiment wherein people in one group were asked a question like this, but on a much smaller scale, say, "torture one person for an hour or put a speck of dust into the eyes of (3^^^3)/438300 people," or even one second of torture vs (3^^^3)/1577880000 specks of dust (obviously in decimal notation in the experiment), and in the second group, people were told the original question with the big numbers, people in the first group would choose the torture more often and much more quickly and confidently.
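(The divisors above appear to be just the hours and seconds in 50 years; a trivial check of my own, assuming 365.25-day years.)

```python
# The scaled-down divisors in the comment match the hours and seconds in 50 years.
hours_in_50_years = 50 * 365.25 * 24            # = 438300.0
seconds_in_50_years = 50 * 365.25 * 24 * 3600   # = 1577880000.0
print(hours_in_50_years, seconds_in_50_years)
```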

I say this because people are quite uncomfortable having to choose to torture someone for 50 years, even if it isn't necessarily as bad as the other option.

Hmm. I seem to be flinching away from both answers, and I think I know why. It's because I'm unable to decide whether utility really does multiply (after all, one could advocate the utility function "The minimum happiness within the population", instead of the sum).

So I'm happy to make the factual claims that "Sum-utility => pick 'torture'" and "Min-utility => pick 'specks'"; I just can't see any procedure for choosing between sum and min. So I'll formulate a test to see which I believe: I'll gradually reduce the severity of the non-speck option. So, specks versus someone getting tortured for 25 years, I'm still unsure. Specks versus someone getting slapped in the face, I choose slap over specks. Therefore I'm not following min-utility, so I'm willing to accept that really I'm following sum-utility, so in the original problem I pick Torture. I don't like this, because my brain wants to be scope-insensitive and refuses to understand 3^^^3, but when I made one of the outcomes not flinch-worthy that outcome got picked, and I'm pretty sure that my reasons for picking Slap ought to scale up to Torture, so there it is.

I started this post not knowing what answer I would reach despite having spent several minutes on the question. I think I've now been trying to resolve this for over half an hour, and I still feel uncomfortable. My mind has just now come up with a third alternative, which is that utilities should perhaps be rated with hyperreals, so that 3^^^3 · 1 is still less than 1 · H (for an infinite hyperinteger H), in which case we could pick Specks without discarding sum-utility. But I probably wouldn't have thought of that while I was locked up and couldn't choose an answer. I am now feeling comfortable, which suggests that this is what I actually believe about utilities.

Of course, now there is an experiment I could do to try and falsify this: try to construct a chain of things starting at a dust speck and ending at torture, where each link in the chain is only a finite amount worse than the one before. I know I can get from speck to slap, because I chose Slap over Specks. I also think a hefty kick up the arse is only finitely worse than a slap. I next try to get to a broken arm, but I'm unwilling to do that (at least, in a single step), so I need to find something intermediate. In fact I think I should try and find something I can jump down to from a broken arm, because a broken anything seems scary in a new way. A deep-bruised hand? Yes, I think that relates finitely to a broken arm. I also think that finitely many kicks up the arse are worse than a deep-bruised hand. Given that my instinctive feeling about the relation of kick to arm was very similar to my feeling about the relation of speck to torture, I conclude that in fact my scale of utilities is constrained to the finite.

The point to this post (if there is one) is that a useful method seems to be to vary the parameters of the problem until you can get an answer, and then look to see whether that illuminates the original problem. (Come to think of it, ISTR that's one of Pólya's How To Solve It tips.) But to evaluate this method, I need to see whether it can also produce the opposite result. So, I need to vary the parameters in ways that favour Specks. If I reduce the 3^^^3 to something smaller, I eventually pick Specks because I get a number I think I can comprehend - but that number has to be so much smaller than 3^^^3 that I don't think it's relevant to the original problem. If I make the 'torture' option involve something worse than torture, I still pick it - I can't think of anything that's sufficiently worse than torture that doing that to someone could make me pick Specks when I didn't pick Specks against torture.

So the method does constrain, and I pick Torture. There, finally finished this post.

Of course, you can also use the chain of negative-utility cases to make a direct argument for specks vs. torture.

Say you prefer 1 slap to N1 specks. Then you prefer 1 kick to N2 slaps, 1 bruise to N3 kicks, 1 broken arm to N4 bruises, and so on, up until the last step where you prefer years of torture to Nk of something.

It follows that the specks vs. torture point comes at N1 x N2 x N3 x .... x Nk. This is pretty much always going to be less than 3^^^3 -- if the steps were truly small, the factors are all going to be less than a trillion or so, and there's probably going to be less than a trillion steps, and (1 trillion)^(1 trillion) is still insignificant compared to 3^^^3.

Of course. Except that I think you mean trillion^trillion, not trillion*trillion.

Er. Right. Fixed. And it's a testament to the magnitude of 3^^^3 that I need to change absolutely nothing else.
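(A rough magnitude check, back-of-the-envelope arithmetic of my own rather than anything from the thread: the chained product (10^12)^(10^12) lands somewhere between 3^^4 and 3^^5, while 3^^^3 is a power tower of roughly 7.6 trillion threes.)

```python
# Comparing (10^12)^(10^12) against the lower rungs of 3^^^3.
from math import log10

log10_chain = 12 * 10**12          # (10^12)^(10^12) = 10^(1.2e13), i.e. ~1.2e13 digits
log10_tower4 = 3**27 * log10(3)    # 3^^4 = 3^(3^27) has only ~3.6e12 digits
tower_height = 3**27               # 3^^^3 is a power tower of this many threes
print(log10_chain, log10_tower4, tower_height)
# The chain exceeds 3^^4 but is nowhere near 3^^5, let alone a tower
# 7,625,597,484,987 levels high.
```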

I chose RANDOM* and feel that this

  • Satisfies the suggestion of making sure that you choose/'state a preference' (the result of RANDOM is acceptable to me and I would be willing to work past it and not dwell on it).

  • Satisfies the suggestion of making sure you state assumptions to the extent you're able to resolve them (RANDOM implies a structure upon which RANDOM acts and I was already thinking about implications of either choice, though perhaps I could have thought more clearly about the consequences of RANDOM specifically)

  • does not compromise me as a (wannabe) rational person (ie I use the situation to update previous beliefs)

  • Does not allow the alternatives to distract afterwards (as once the choice RANDOM is made, it cannot be unmade -- future choices can be made RANDOM, TORTURE, SPECKS or otherwise)

  • Does not compromise future escape routes (RANDOM, SPECK, RANDOM, TORTURE is just as an acceptable sequence of choices to me as SPECK, TORTURE, SPECK, TORTURE -- it just depends what evidence and to what extent evidence has been entangled)

but has the additional benefit of

  • not biasing me towards my choice very much. If SPECKS or TORTURE is chosen, it is tempting to 'join team SPECKS'. I suppose I'll be tempted to join team RANDOM, but since RANDOM is a team that COOPERATEs with teams SPECKS and TORTURE something GOOD will come of that anyway.

  • Reserving my agency, and the perception of my agency, for other decisions (though they may perhaps be less important (3^^^3 dust specks is a potentially VERY IMPORTANT!!!!!!!!!!!!!!!!!!!! decision), they will be mine), such as meta-decisions on future cases involving and not involving RANDOM.

in fact let's see if I can rephrase this post

META-TORTURE and META-SPECKS stances exist that dispose us away from TORTURE and SPECKS; they are harder to express when making a decision or discussing decisions with people, and because these stances cannot be held up to rational scrutiny by ourselves and others so well, we should avoid adopting them. It is possible to get into a situation where we fail to resolve a Third Alternative and must choose, and making the correct choice, as an altruist/rationalist/etc., is important even in these cases. SPECKS or TORTURE seem to be the only choices; pick one.

I maintain, however, that RANDOM or DEFAULT will, by the nature of what a choice is, always be logically available.

*actually I chose DEFAULT/RANDOM but the more I think about it the more I think RANDOM is justified

does not compromise me as a (wannabe) rational person (ie I use the situation to update previous beliefs)

Your stated preferences aren't consistent with the VNM axioms.

It appears I'm less rational than I thought. I suppose another way to rephrase that would be to draw the outline of VNM-rational decisions only up to preferences that are meaningfully resolvable (and TORTURE vs SPECKS does not appear to be, to me at least), with a heuristic for how to resolve them becoming clearer given interaction with unresolvable areas. I would still be making a choice, albeit one with the goal of expanding rational decision-making to the utmost possible (it would be rational to be as rational as permissible). That seems pretty cheap though, reeking of 'explaining everything'. Worse, one interpretation of this dilemma would be that you have to resolve your preferences and that the 'middle' is excluded, in which case it is a hard problem and I can likely offer no further suggestion.

Did you not previously state that one should learn as much about the problem as one can before coming to a conclusion, lest one fall prey to confirmation bias? Should one learn about the problem fully before making a decision only when one doesn't suspect oneself of being biased?