"What's the worst that can happen?" goes the optimistic saying. It's probably a bad question to ask anyone with a creative imagination. So let's consider the problem on an individual level: it's not literally the worst that could happen, but it would nonetheless be fairly bad, if you were horribly tortured for a number of years. This is one of the worst things that can realistically happen to one person in today's world.
What's the least bad, bad thing that can happen? Well, suppose a dust speck floated into your eye and irritated it just a little, for a fraction of a second, barely enough to make you notice before you blink and wipe away the dust speck.
For our next ingredient, we need a large number. Let's use 3^^^3, written in Knuth's up-arrow notation:
- 3^3 = 27.
- 3^^3 = (3^(3^3)) = 3^27 = 7625597484987.
- 3^^^3 = (3^^(3^^3)) = 3^^7625597484987 = (3^(3^(3^(... 7625597484987 times ...)))).
3^^^3 is an exponential tower of 3s which is 7,625,597,484,987 layers tall. You start with 1; raise 3 to the power of 1 to get 3; raise 3 to the power of 3 to get 27; raise 3 to the power of 27 to get 7625597484987; raise 3 to the power of 7625597484987 to get a number much larger than the number of atoms in the universe, but which could still be written down in base 10, on 100 square kilometers of paper; then raise 3 to that power; and continue until you've exponentiated 7625597484987 times. That's 3^^^3. It's the smallest simple inconceivably huge number I know.
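The recursion above can be sketched in code. This is a minimal illustration, assuming the standard recursive definition of up-arrow notation (a↑b = a^b, and a↑ⁿb = a↑ⁿ⁻¹(a↑ⁿ(b−1)) with a↑ⁿ1 = a); only tiny inputs are computable, and 3^^^3 itself is hopelessly beyond any machine.

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Compute a with n up-arrows applied to b, for very small inputs only."""
    if n == 1:
        return a ** b          # one arrow is ordinary exponentiation
    if b == 1:
        return a               # base case: a^^...^1 = a
    # peel off one arrow: a ^^...^ b = a ^^...(one fewer)...^ (a ^^...^ (b-1))
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3  = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 7625597484987
# up_arrow(3, 3, 3) would be 3^^^3: a tower of 7,625,597,484,987 threes.
```

Even the first step of evaluating `up_arrow(3, 3, 3)` requires computing a tower of 7,625,597,484,987 threes, which is why the number is "inconceivably huge."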
Now here's the moral dilemma. If neither event is going to happen to you personally, but you still had to choose one or the other:
Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?
I think the answer is obvious. How about you?
Given: the answer "TORTURE", paradoxical to everybody except some moral philosophers, appears to follow from expected utility maximization.
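To see why TORTURE follows from straightforward aggregation, note that any fixed positive disutility per dust speck, multiplied by 3^^^3 people, dwarfs any finite disutility for one person's torture. Here is a toy calculation with made-up numbers; 3^^^3 itself is not representable, so even the vastly smaller 3^^3 is used as a stand-in, and it already suffices.

```python
from fractions import Fraction

# Hypothetical disutility values, purely for illustration:
SPECK = Fraction(1, 10**12)   # one dust speck: a trillionth of a util lost
TORTURE = Fraction(10**9)     # fifty years of torture: a billion utils lost

# Even 3^^3 * 10**9 people (unimaginably fewer than 3^^^3) already
# accumulate more total disutility than the torture.
n = 3 ** 27 * 10 ** 9
total_specks = n * SPECK
print(total_specks > TORTURE)  # True: naive summing favors TORTURE
```

The point generalizes: whatever finite numbers you pick, 3^^^3 is large enough that the specks column wins the sum, which is exactly what makes the answer feel paradoxical.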
Possibility 1: the theory is right, everybody is wrong.
But in the domain of moral philosophy, our preferences should be treated with more respect than elsewhere. We cherish some of our biases. They are what makes us human; we wouldn't want to lose them, even if they sometimes give an "inefficient" answer from the point of view of the simplest greedy utility function.
These biases are probably reflectively consistent: even if we knew more, we would still wish to have them. At least, I can hypothesize that they are, until proven otherwise. Simply showing me the inefficiency doesn't make me wish not to have the bias. I value efficiency, but I value my humanity more.
Possibility 2: the theory (expected utility maximization) is wrong.
But the theory is rather nice and elegant; I wouldn't wish to throw it away. So maybe there's another way to resolve the paradox? Maybe something is wrong with the problem definition? And lo and behold: yes, there is.
Possibility 3: the problem is wrong.
As the problem is stated, the preferences of the 3^^^3 people are not taken into account. It is assumed that the people don't know and will never know about the situation, because each person's total utility change from the whole affair is either zero or a single small negative value.
If the people were aware of the situation, their utility changes would be different: a large negative value from knowing about the tortured person's plight and being forcibly forbidden to help, or a positive value from knowing they helped. There would also be a negative value from moral philosophers who would know about and worry over the inefficiency, but I think that would be a relatively small value, all things considered.
Unfortunately, in the context of the problem, the people are unaware. The choice for the whole of humanity is given to me alone. What should I do? Should I play dictator and make a choice that would be repudiated by everyone, if only they knew? That seems wrong, somehow. Oh! I can simulate them, ask what they would prefer, and give their preference a positive term within my own utility function. I would be like a representative of the people in a government, or an AI trying to implement their CEV.
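The fix described above can be sketched as a toy model. Everything here is hypothetical: the option names, the utility numbers, and the size of the simulated-preference term are all made up purely to show the mechanism of adding a term for what the affected people would prefer if they knew.

```python
def choose(raw_utility: dict, simulated_preference: dict) -> str:
    """Pick the option maximizing raw aggregate utility plus a term for
    the simulated preferences of the people affected."""
    total = {
        option: raw_utility[option] + simulated_preference[option]
        for option in raw_utility
    }
    return max(total, key=total.get)

# Made-up numbers: under naive aggregation SPECKS loses to TORTURE,
# but once each simulated person's repudiation of dictatorial torture
# is counted as a term in the chooser's utility function, SPECKS wins.
raw = {"TORTURE": -1.0, "SPECKS": -2.0}
simulated = {"TORTURE": -5.0, "SPECKS": +0.5}
print(choose(raw, simulated))  # SPECKS
```

The design choice is that the simulated preferences enter as a term in the chooser's own utility function, so expected utility maximization is kept intact; only the problem's assumption that those preferences count for nothing is dropped.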
Result: SPECKS!! Hurray! :)
OK. I think I understand you now. Thanks for clarifying.