Vladimir_Nesov comments on The Trolley Problem in popular culture: Torchwood Series 3 - Less Wrong
Haha, I was not factoring that in. I assumed they were evil. Perhaps that was close-minded of me, though.
Some people would say that dying honorably is better than living dishonorably. I'm not endorsing this view; I'm just trying to figure out why it's irrational while the utilitarian sacrifice of children is more rational.
There are plenty of variables you can slide up and down to make one feel more or less comfortable with the scenario. But we already knew that, didn't we? That's what the original trolley problem tells us: that pushing someone off a bridge feels morally different from switching the tracks of a trolley. My concern is that I can't figure out how to call one impulse (the discomfort at destroying autonomy) an objectively irrelevant mere impulse, and another impulse (the comfort at preserving life) an objectively good fact. It seems difficult to throw out just the bathwater here, but I'd really like to preserve the baby. (See my other post above, in response to Nesov.)
Utilitarian calculation is a more rational process for arriving at a decision, though for a specific question you can argue that the output of this process (a decision) is inferior to the output of some other process, such as free-running deliberation or random guessing. When you compare the decisions of sacrificing the children and fighting the war to the death, the first isn't "intrinsically utilitarian", and the second isn't "intrinsically emotional".
Which of the decisions is (actually) the better one depends on the preferences of the one who decides, and preferences are not necessarily reflected well in actions and choices. It's instrumentally irrational for an agent to choose poorly according to its preferences. Systematic processes for decision-making allow agents to explicitly encode their preferences, and thus avoid some of the mistakes made with ad-hoc decision-making. Such systematic processes may be constructed in a preference-independent fashion, and then given preferences as parameters.
Utilitarian calculation is a systematic process for computing a decision in situations that are expected to break intuitive decision-making. The output of a utilitarian calculation is expected to be better than an intuitive decision, but there are situations where utilitarian calculation goes wrong. For example, the extent to which you value things could be specified incorrectly, or the transformation that computes how much you value N things based on how much you value one thing may be wrong. In other cases, the problem could be incorrectly reduced to a calculation, losing important context.
However, whatever the right decision is, there should normally be a way to fix the parameters of the utilitarian calculation so that it outputs that decision. For example, if the right decision in the problem under discussion is actually war to the death, there should be a way to understand the situation more formally, so that the utilitarian calculation outputs "war to the death" as the right decision.
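A minimal sketch, in Python, of what "fixing the parameters" could look like. The options, numbers, and the autonomy-penalty parameter are illustrative assumptions (nothing here comes from the thread); the point is only that the procedure stays fixed while its parameters are respecified until it outputs the decision one independently judges to be right.

```python
def decision_value(option, params):
    """Value of an option under explicit, adjustable parameters."""
    return (params["value_per_life"] * option["lives_saved"]
            - params["autonomy_penalty"] * option["autonomy_violations"])

def utilitarian_choice(options, params):
    """The fixed procedure: pick the option with the highest computed value."""
    return max(options, key=lambda o: decision_value(o, params))

options = [
    {"name": "sacrifice the children", "lives_saved": 6e9,
     "autonomy_violations": 10},
    {"name": "war to the death", "lives_saved": 0,
     "autonomy_violations": 0},
]

# With naive parameters, the sacrifice wins.
naive = {"value_per_life": 1.0, "autonomy_penalty": 0.0}
print(utilitarian_choice(options, naive)["name"])    # sacrifice the children

# Respecifying the parameters (here, a large penalty for destroying autonomy)
# makes the very same procedure output the other decision.
revised = {"value_per_life": 1.0, "autonomy_penalty": 1e9}
print(utilitarian_choice(options, revised)["name"])  # war to the death
```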
I'm not convinced utilitarian reasoning can always be applied to situations where two preferences come into conflict: calculating "secondary" uncertain factors which could influence the value of each decision ruins the possibility of exactness. Even in the trolley problem, in all its simplicity, each decision has repercussions whose values have some uncertainty. Thus a decision doesn't always have a strict value, but a probable value distribution! We make a trolley decision by 1) considering only so many iterations in trying to get a value distribution, and 2) seeing if there is a satisfying lack of overlap between the two. When the two distributions overlap too much (and you know that they are approximate, due to the intractability of getting a perfect distribution), it's really a wild guess to say one decision is best.
Utilitarian calculation helps the process by providing a means of deciding when each value probability distribution is sharply enough defined, and whether the overlap meets your internal maximum-overlap criterion (presuming that's sharply defined!), but no amount of reasoning can solve every moral dilemma a person might face.
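A minimal sketch of this overlap test in Python, assuming Gaussian value distributions and an arbitrary decision threshold; the means, spreads, and the 0.1 cutoff are illustrative assumptions, not anything specified above.

```python
import random

def sample_value(mean, spread, n=10_000):
    """Sample the uncertain value of a decision (assumed Gaussian)."""
    return [random.gauss(mean, spread) for _ in range(n)]

def prob_a_beats_b(samples_a, samples_b):
    """Monte Carlo estimate of the probability that decision A is better."""
    wins = sum(a > b for a, b in zip(samples_a, samples_b))
    return wins / len(samples_a)

# Hypothetical repercussion values for the two trolley decisions.
switch_track = sample_value(mean=5.0, spread=4.0)
do_nothing   = sample_value(mean=1.0, spread=4.0)

p = prob_a_beats_b(switch_track, do_nothing)
if abs(p - 0.5) < 0.1:  # the distributions overlap too much
    print(f"p = {p:.2f}: calling either decision 'best' is a wild guess")
else:
    print(f"p = {p:.2f}: the distributions separate enough to decide")
```

Widening the spreads until p hovers near 0.5 reproduces exactly the situation described above: too much overlap for anything but a wild guess.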
So if said planet decided that its preference was to perish, rather than sacrifice children, would this be irrational?
I don't see why I should agree with this statement. I was understanding a utilitarian calculation as either a) the greatest happiness for the greatest number of people or b) the greatest preferences satisfied for the greatest number of people. If a), then it seems like it might predictably give you answers that are at odds with moral intuitions, and have no way of justifying itself against these intuitions. If b), then there's nothing irrational about deciding to go to war with the aliens.
You can't decide your preference. Preference is not what you actually do; it is what you should do, and it's encoded in your decision-making capabilities in a nontrivial way, so that you aren't necessarily capable of seeing what it is.
Compare preference to the solution of an equation: you can see the equation, you can take it apart into its constituent terms, but its solution is nowhere to be found explicitly. Yet this solution is (say) uniquely defined by the equation, and approximate methods for solving the equation (the analogue of actual decisions) tend to give results in the general ballpark of the exact solution.
You've lost me.
The analogy in the next paragraph was meant to clarify. Do you see the analogy?
A person in this analogy is an equation together with an algorithm for approximately solving it. The decisions that the person makes are the approximate solutions, while preference is the exact solution hidden in the equation, which the person can't solve exactly. The decision algorithm tries to make decisions as close to the exact solution as it can. The exact solution is what the person should do, while the output of the approximate algorithm is what the person actually does.
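A minimal sketch of this analogy in Python; the cubic equation and the crude Newton iteration are arbitrary illustrative choices, not anything from the comment.

```python
def equation(x):
    # The "person": decision-making machinery viewed as f(x) = 0.
    # Its exact root (~2.0945515) is the preference, what one should do.
    return x**3 - 2 * x - 5

def approximate_solve(f, x=2.0, steps=3, h=1e-6):
    """The decision algorithm: a crude Newton iteration that lands near,
    but rarely exactly on, the true solution."""
    for _ in range(steps):
        slope = (f(x + h) - f(x)) / h  # numerical derivative
        x = x - f(x) / slope
    return x

decision = approximate_solve(equation)
print(decision)  # close to 2.0945515: the decision one actually makes
```

The exact root is uniquely determined by the equation, yet it appears nowhere in the equation's text, just as preference is encoded in, but not directly readable off, a person's decision-making capabilities.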
I suppose I'm questioning the validity of the analogy: equations are by nature descriptive, while what one ought to do is prescriptive. Are you familiar with the Is-Ought problem?
jwdink, I don't think Vladimir Nesov is making an Is-Ought error. Think of this: You have values (preferences, desired ends, emotional "impulses" or whatever) which are a physical part of your nature. Everything you decide to do, you do because you Want to. If you refuse to acknowledge any criteria for behavior as valuable to you, you're saying that what feels valuable to you isn't valuable to you. This is a contradiction!
An Is-Ought problem arises when you attempt to derive a Then without an If. Here, the If is given: If you value what you value, then you should do what is right in accordance with your values.
But there seemed to be some suggestion that an avoidance of sacrificing the children, even at the risk of everyone's lives, was a "less rational" value. If it's a value, it's a value... how do you call certain values invalid, or not "real" preferences?
I missed where Vladimir made that suggestion, though I'm sure others have. You can have an irrational value if it's really a means and not an end (the end being another value), but you don't recognize that and call the means a value itself. Means to an end can of course be evaluated as rational or irrational. If anyone made the suggestion you mention, they probably presumed a single "basic" value of preserving lives, and considered the method of deciding to be a means, though denoted as a value.
(Of course, a value can be both a means and an end, which presents fun new complications...)
The problem is a confusion. Human preference is something implemented in the very real human brain.
That's not a particularly helpful or elucidating response. Can you flesh out your position? It's impossible to tell what it is based on the paltry statements you've provided. Are you asserting that the "equation" or "hidden preference" is the same for all humans, or ought to be the same, and therefore is something objective/rational?
Preference of a given human is defined by their brain, and can differ somewhat from person to person, but not too much. There is nothing "objective" about this preference, but for each person there is one true preference that is their own, and the same could be said for humanity as a whole, with the whole planet defining its preference instead of just one brain. The focus on the brain isn't very accurate, though, since the environment plays its part as well.
I can't do justice to the centuries-old problem with a few words, but the idea is more or less this. Whatever the concept of "preference" means, when human philosophers talk about it, their words are caused by something in the world: "preference" must be either a mechanism in their brains, a name for their confusion, or something else. It's not epiphenomenal. Searching for the "ought" in the world outside human minds is more or less a guaranteed failure, especially if the answer is expected to be found explicitly, as an exemplar of perfection rather than as evidence about what perfection is, to be interpreted in a nontrivial way. The history of failure to find an answer while looking in the wrong place doesn't prove that the answer is nowhere to be found; it doesn't constitute positive knowledge that there is no answer in the world.