You are doing a fairly standard job of rejecting a thought experiment by pointing out several side issues that are stipulated to be missing in the original. Although this is what people ordinarily do when confronted with a counterintuitive repugnant argument, it muddles the discussion and makes you the person who misses the point.
This seems to be endemic in the discussion section, as of late.
There's something cruel about ending a post with a request for people to point out errors in your reasoning and then arguing in circles with anyone who tries. Are you trolling, or do you just never admit to being wrong?
and this violates the principle of individual self-determination -
To select 3^^^3 people to get dust specks in their eyes also violates the "principle" of individual self-determination. And if 3^^^3 people are possible, 3^^^^3 people are probably possible too, so the idea of fairness doesn't apply - these people have all been picked out to have their individual self-determination violated.
In general you seem to be trying to wriggle out of the hypothetical as stated by bringing in extra stuff and then deciding based only on that extra stuff.
the other I think of as the 'logarithmically additive' answer, which inverts the answer on the grounds that forms of suffering are not equal, and cannot be added as simple 'units'.
This is a poor choice of terminology. Logarithmic functions grow slowly, but they're still unbounded: even if the badness of the dust specks is a logarithmic function (say, the natural log) of the number of people specked, ln(3^^^3) is still so incomprehensibly large that the torture-favoring conclusion still follows. Perhaps what you mean is something more like logistic additivity.
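To make the magnitude concrete, here is a rough back-of-the-envelope worked equation in Knuth up-arrow notation (nothing here depends on anyone's particular utility function; it only shows how little a logarithm does to a power tower):

    \[
      3\uparrow\uparrow\uparrow 3
        \;=\; 3\uparrow\uparrow\,(3\uparrow\uparrow 3)
        \;=\; 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987
        \;=\; 3^{\,3\uparrow\uparrow 7{,}625{,}597{,}484{,}986},
    \]
    \[
      \ln\!\bigl(3\uparrow\uparrow\uparrow 3\bigr)
        \;=\; \bigl(3\uparrow\uparrow 7{,}625{,}597{,}484{,}986\bigr)\,\ln 3 .
    \]

Taking the log strips exactly one level off a tower of threes about 7.6 trillion levels tall, so the result is still unimaginably larger than any bound on what one person can suffer in fifty years.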
If you truly want to become stronger, note that several people whose intellects you respect have said that you're not processing their objections correctly. You really should consider the possibility that your mind is subconsciously shrinking away from a particular line of thought, which is notoriously difficult to see as it's happening, especially when perceived social status is at stake.
From my perspective, it looks like you're either rejecting consequentialism (which is a respectable philosophical position in most circles, but you don't admit outright t...
I do not believe...only...misses the point.
Am I reading that correctly?
I do not condone torture even in the "ticking time bomb" scenario: I cannot accept the culture/society that would permit such a torture to exist. To arbitrarily select out one individual for maximal suffering in order to spare others a negligible amount would require a legal or moral framework that accepted such choices
There are multiple questions here, and they don't necessarily have similar answers.
Some examples:
A person who campaigns to ban torture and make it illegal...
This whole discussion seems to hinge on the possibly misleading choice of the word "torture" in the original thought-experiment. Words can be wrong and one way is to sneak in connotations and misleading vividness — and I think that's what's going on here.
In our world, torture implies a torturer, but dust specks do not imply a sandman. "Torture" refers chiefly to great suffering inflicted on a victim intentionally by some person, as a continuous voluntary act on the torturer's part, and usually to serve some claimed social or moral purpose...
You have 50 years of horrible torture and then 50*3^^^3 years of a pleasant life with no dust speck.
OR
50*(3^^^3+1) years of a pleasant life with a dust speck every 50 years.
What would you take?
I believe that any metric of consequence which takes into account only suffering when making the choice of "torture" vs. "dust specks" misses the point.
No, that was the point alright. If you don't believe me, ask Eliezer.
moral responsibility,
If it's not happiness, I don't find it intrinsically important. Also, if you do consider moral responsibility to be intrinsically important, you end up with a self-referential moral system. I don't think that would end well.
standards of behavior that either choice makes acceptable,
A society...
Interesting coincidence. I was just yesterday thinking of terming the torture position "Omelasian."
As an aside, the ones who walk away are also moral failures from the pro-specks standpoint. They should be fomenting revolution, not shrugging and leaving.
Adding the issue of choice (i.e. moral responsibility) for the outcome seems to be fighting the hypo. Imagine Evil Omega forces you to choose between torture and dust-specks (by threatening to end humanity or something else totally unacceptable). You could respond that you are not morally competent to make the choice. This is true, but also irrelevant because it won't convince Evil Omega to let you go.
In short, the interesting question of the debate is "Which is worse: torture or dust specks?" At best, I think you've made an interesting case that "Should we switch the status quo from dust specks to torture (or vice versa)?" is a different question.
This is consistent. But it induces further difficulties in the standard utilitarian decision process.
To express the idea that all non-torture scenarios are less bad than all torture scenarios by utility function, there must be some (negative) boundary B between the two sets of scenarios, such that u(any torture scenario) < B and u(any non-torture scenario) > B. Now either B is finite or it is infinite; this matters when probabilities come into play.
First consider the case of B finite. This is the logistic curve approach: it means that any number of slightly super-boundary inconveniences happening to different people is preferable to a single case of slightly sub-boundary torture. I know of no natural physiological boundary of that sort; if the severity of pain can change continuously, which seems to be the case, the sub-boundary and super-boundary experiences may be effectively indistinguishable. Are you willing to accept this?
Perhaps you are. Now this takes an interesting turn. Consider a couple of scenarios: X, which is slightly sub-boundary (thus "torture") with utility B - ε (ε positive), and Y, which is non-torture with u(Y) = B + ε. Now utilities may behave non-linearly with respect to the scenario-describing parameters, but expected utilities have to be linear with respect to probabilities; anything else means throwing utilitarianism out of the window. A utility maximiser should therefore be indifferent between scenarios X' and Y', where X' = X with probability p and Y' = Y with probability p(B - ε)/(B + ε).
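Writing the expected utilities out confirms the indifference (treating the "nothing happens" branch of each lottery as utility 0):

    \[
      \mathbb{E}\,u(X') \;=\; p\,(B-\varepsilon), \qquad
      \mathbb{E}\,u(Y') \;=\; \frac{p\,(B-\varepsilon)}{B+\varepsilon}\,(B+\varepsilon)
                       \;=\; p\,(B-\varepsilon).
    \]

Note that since B is negative, the factor (B - ε)/(B + ε) is slightly greater than 1, so Y' carries a slightly higher probability of its (milder) outcome.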
Let's say one of the boundary cases is, for the sake of concreteness, giving a person a 7.5-second electric shock of a given strength. So, you may prefer to give a billion people a 7.4999 s shock in order to avoid one person getting a 7.5001 s shock, but at the same time you would prefer, say, a 99.98% chance of one person getting a 7.5001 s shock to a 99.99% chance of one person getting a 7.4999 s shock. Thus, although the torture/non-torture boundary seems strict, it can be easily crossed when uncertainty is taken into account.
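A minimal numerical sketch of that preference reversal, with entirely made-up values for the boundary B and for ε (not anyone's actual utility scale):

    # Hypothetical numbers: boundary utility B and a tiny epsilon.
    B, eps = -100.0, 0.001

    u_shock_7_5001s = B - eps   # slightly sub-boundary ("torture" side)
    u_shock_7_4999s = B + eps   # slightly super-boundary (non-torture side)

    # Lottery 1: 99.98% chance of the sub-boundary shock, otherwise nothing (utility 0).
    # Lottery 2: 99.99% chance of the super-boundary shock, otherwise nothing.
    eu_lottery_torture = 0.9998 * u_shock_7_5001s   # about -99.981
    eu_lottery_specks  = 0.9999 * u_shock_7_4999s   # about -99.989

    # The "torture" lottery has the higher expected utility, so a plain
    # expected-utility maximiser prefers it despite the strict boundary.
    assert eu_lottery_torture > eu_lottery_specks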
(This problem can be alleviated by postulating a gap in utilities between the worst non-torture scenario and the best torture scenario.)
If it still doesn't sound crazy enough, note that if there already are people experiencing an almost-boundary (but still non-torturous) scenario, decisions over completely unrelated options get distorted, since your utility can't fall lower than B, where it already sits. Assume that one presently has utility near B (which must be achievable by adjusting the number of almost-tortured people and the severity of their inconvenience - which is nevertheless still not torture, nobody is tortured as far as you know - let's call this adjustment A). Consider now decisions about money. If W is one's total wealth, then u(W,A) must be convex with respect to W if its value is not much different from B, since no everywhere-concave function can be bounded from below. Now, this may invert the usual risk aversion due to diminishing marginal utilities! (Even assuming that you can do literally nothing to change A).
(This isn't alleviated by a utility gap between torture and non-torture.)
Now, consider the second case, B = -∞. Then there is another problem: torture becomes the sole concern of one's decisions. Even if p(torture) = 1/3^^^3, the expected utility is negative infinity and all non-torturous concerns become strictly irrelevant. One can formulate it mathematically as having a 2-dimensional vector (u1, u2) representing the utility, where the first component u1 measures the utility from torture and u2 measures everything else. Now, since you have decided never to trade torture for non-torture, you should choose the variant whose expected u1 is greater; only when the expected u1 values are strictly equal does it matter whether u2(X) > u2(Y). Therefore you would find yourself asking questions like "if I buy this banana, would it increase the chance of people getting tortured?". I don't think you are striving to consistently apply this decision theory.
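As a sketch only (assuming the lexicographic reading above, with toy numbers and a small stand-in probability, since 1/3^^^3 is not representable), the decision rule would look something like this:

    from typing import List, Tuple

    Outcome = Tuple[float, float]          # (u1: torture utility, u2: everything else)
    Lottery = List[Tuple[float, Outcome]]  # [(probability, outcome), ...]

    def expected_pair(lottery: Lottery) -> Tuple[float, float]:
        eu1 = sum(p * u1 for p, (u1, _) in lottery)
        eu2 = sum(p * u2 for p, (_, u2) in lottery)
        return eu1, eu2

    def prefers(a: Lottery, b: Lottery) -> bool:
        """Lexicographic rule: expected u1 decides; u2 is only a tie-breaker."""
        a1, a2 = expected_pair(a)
        b1, b2 = expected_pair(b)
        if a1 != b1:
            return a1 > b1
        return a2 > b2

    # Buying the banana carries a tiny extra torture risk (u1 = -1 with
    # probability 1/3**9, a stand-in for something like 1/3^^^3), so it loses
    # to not buying it, no matter how much ordinary utility (u2) it offers.
    no_banana = [(1.0, (0.0, 0.0))]
    banana    = [(1/3**9, (-1.0, 1000.0)), (1 - 1/3**9, (0.0, 1000.0))]
    assert prefers(no_banana, banana)

Under this rule no finite amount of u2 can compensate for any difference in expected u1, which is exactly why the banana question becomes unavoidable.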
(This is related to distinction between sacred and unsacred values, which is a fairly standard source of inconsistencies in intuitive decisions.)
Your reference to sacred values reminded me of Spheres of Justice. In brief, Walzer argues that the best way of describing our morality is by noting which values may not be exchanged for which other values. For example, it is illicit to trade material wealth for political power over others (i.e. bribery is bad). Or trade lives for relief from suffering. But it is permissible to trade within a sphere (money for ice cream) or between some spheres (dowries might be a historical example, but I can't think of a modern one just this moment).
It seems like yo...
For those not familiar with the topic, Torture vs. Dustspecks asks the question: "Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?"
Most of the discussion that I have noted on the topic adopts one of two assumptions in deriving its answer to that question: I think of one as the 'linear additive' answer, which says that torture is the proper choice for the utilitarian consequentialist, because a single person can only suffer so much over a fifty-year window, as compared to the incomprehensible number of individuals who suffer only minutely; the other I think of as the 'logarithmically additive' answer, which inverts the answer on the grounds that forms of suffering are not equal, and cannot be added as simple 'units'.
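For concreteness, here is one possible way to cash out the two rules as a minimal sketch, with entirely made-up badness numbers and a stand-in population; 3^^^3 itself cannot be represented, and even ln(3^^^3) is astronomically large, so the 'logarithmic' label should be read loosely as "grows far more slowly than a head-count":

    import math

    # Made-up badness numbers; N is only a stand-in for 3^^^3.
    TORTURE_BADNESS = 1e7     # hypothetical badness of 50 years of torture
    SPECK_BADNESS   = 1e-9    # hypothetical badness of one dust speck
    N               = 10**30  # vastly smaller than 3^^^3

    def linear_total(n: int) -> float:
        """'Linear additive': speck badness simply sums across people."""
        return n * SPECK_BADNESS

    def sublinear_total(n: int) -> float:
        """One way to cash out the 'logarithmically additive' intuition:
        total badness grows only with the log of the number of people."""
        return SPECK_BADNESS * math.log1p(n)

    print(linear_total(N) > TORTURE_BADNESS)     # True:  specks worse in aggregate, so choose torture
    print(sublinear_total(N) > TORTURE_BADNESS)  # False: torture worse, so choose the specks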
What I have never yet seen is something akin to the notion expressed in Ursula K. Le Guin's The Ones Who Walk Away From Omelas. If you haven't read it, I won't spoil it for you.
I believe that any metric of consequence which takes into account only suffering when making the choice of "torture" vs. "dust specks" misses the point. There are consequences to such a choice that extend beyond the suffering inflicted: moral responsibility, standards of behavior that either choice makes acceptable, and so on. Any solution to the question which ignores these elements in making its decision might be useful in revealing one's views about the nature of cumulative suffering, but beyond that it is of no value in making practical decisions -- it cannot be, as 'consequence' extends beyond the mere instantiation of a given choice -- the exact pain inflicted by either scenario -- into the kind of society that such a choice would result in.
While I myself tend more towards the 'logarithmic' than the 'linear' additive view of suffering, even if I stipulate the linear additive view, I still cannot agree with the conclusion of torture over the dust speck, for the same reason that I do not condone torture even in the "ticking time bomb" scenario: I cannot accept the culture/society that would permit such a torture to exist. To arbitrarily select out one individual for maximal suffering in order to spare others a negligible amount would require a legal or moral framework that accepted such choices, and this violates the principle of individual self-determination -- a principle I have seen Less Wrong's community spend a great deal of time trying to consider how to incorporate into Friendliness solutions for AGI. We as a society already implement something similar to this, economically: we accept taxing everyone, even according to a graduated scheme. What we do not accept is enslaving 20% of the population to provide for the needs of the State.
If there is a flaw in my reasoning here, please enlighten me.