jwdink comments on The Trolley Problem in popular culture: Torchwood Series 3 - Less Wrong

16 Post author: botogol 27 July 2009 10:46PM


Comment author: jwdink 29 July 2009 04:38:51PM *  2 points [-]

I don't quite understand how your rhetorical question is analogous here. Can you flesh it out a bit?

I don't think the notion of dignity is completely meaningless. After all, we don't just want the maximum number of people to be happy, we also want people to get what they deserve-- in other words, we want people to deserve their happiness. If only 10% of the world were decent people, and everyone else were immoral, which scenario would seem the more morally agreeable: the scenario in which the 10% good people were ensured perennial happiness at the expense of the other 90%'s misery, or the reversed scenario?

I'm just seeing something parallel here: it's not brute number of people living that matters, so much as those people having worthwhile existences. After sacrificing their children on a gamble, do these people really deserve the peace they get?

(Would you also assert that Ozymandias' decision in Watchmen was morally good?)

Comment author: eirenicon 29 July 2009 06:07:12PM 1 point [-]

What do the space monsters deserve? If you factor in their happiness, it's an even more complicated problem. The space monsters need n human children to be happy. If you give them up, you have happy space monsters and (6 billion - n) happy (if not immediately, in the long term) humans. If you refuse, assuming the space monsters are unbeatable, you have happy space monsters and zero happy humans. The first scenario is better for both space monsters and humans. Sure, in the second scenario, the humans theoretically don't lose their dignity, but what does dignity mean to the dead?

To put it in another light, what if this situation happened a hundred years ago? Would you be upset that the people alive at the time caved in to the aliens' demands, or would you prefer the human race had been wiped out?

Comment author: jwdink 29 July 2009 07:13:08PM 0 points [-]

What do the space monsters deserve?

Haha, I was not factoring that in. I assumed they were evil. Perhaps that was close-minded of me, though.

The first scenario is better for both space monsters and humans. Sure, in the second scenario, the humans theoretically don't lose their dignity, but what does dignity mean to the dead?

Some people would say that dying honorably is better than living dishonorably. I'm not endorsing this view, I'm just trying to figure out why it's irrational, while the utilitarian sacrifice of children is more rational.

To put it in another light, what if this situation happened a hundred years ago? Would you be upset that the people alive at the time caved in to the aliens' demands, or would you prefer the human race had been wiped out?

There are plenty of variables you can slide up and down to make one feel more or less comfortable with the scenario. But we already knew that, didn't we? That's what the original trolley problem tells us: that pushing someone off a bridge feels morally different than switching the tracks of a trolley. My concern is that I can't figure out how to call one impulse (the discomfort at destroying autonomy) an objectively irrelevant mere impulse, and another impulse (the comfort at preserving life) an objectively good fact. It seems difficult to throw just the bathwater out here, but I'd really like to preserve the baby. (See my other post above, in response to Nesov.)

Comment author: Vladimir_Nesov 29 July 2009 07:39:56PM 0 points [-]

Some people would say that dying honorably is better than living dishonorably. I'm not endorsing this view, I'm just trying to figure out why it's irrational, while the utilitarian sacrifice of children is more rational.

Utilitarian calculation is a more rational process of arriving at a decision, while for the output of this process (a decision) on a specific question you can argue that it's inferior to the output of some other process, such as free-running deliberation or random guessing. When you compare the decisions of sacrificing the children and war to the death, the first isn't "intrinsically utilitarian", and the second isn't "intrinsically emotional".

Which of the decisions is (actually) the better one depends on the preferences of the one who decides, and preferences are not necessarily reflected well in actions and choices. It's instrumentally irrational for the agent to choose poorly according to its preferences. Systematic processes for decision-making allow agents to explicitly encode their preferences, and thus avoid some of the mistakes made with ad-hoc decision-making. Such systematic processes may be constructed in preference-independent fashion, and then given preferences as parameters.

Utilitarian calculation is a systematic process for computing a decision in situations that are expected to break intuitive decision-making. The output of a utilitarian calculation is expected to be better than an intuitive decision, but there are situations when utilitarian calculation goes wrong. For example, the extent to which you value things could be specified incorrectly, or a transformation that computes how much you value N things based on how much you value one thing may be wrong. In other cases, the problem may have been reduced to a calculation incorrectly, losing important context.

However, whatever the right decision is, there normally should be a way to fix the parameters of utilitarian calculation so that it outputs the right decision. For example, if the right decision in the topic problem is actually war to the death, there should be a way to more formally understand the situation so that the utilitarian calculation outputs "war to the death" as the right decision.
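The point about preference-independent processes that take preferences as parameters can be sketched as a toy calculation. Everything below (the option names, features, and weights) is a hypothetical illustration of the structure, not a claim about anyone's actual values:

```python
# Toy sketch: a preference-independent decision procedure that is given
# preferences (value weights) as parameters. All numbers are invented.

def expected_value(outcome, weights):
    """Sum the weighted features of an outcome."""
    return sum(weights[feature] * amount for feature, amount in outcome.items())

def decide(options, weights):
    """Pick the option whose outcome scores highest under the given weights."""
    return max(options, key=lambda name: expected_value(options[name], weights))

options = {
    "sacrifice": {"lives": 6.0e9 - 3.5e5, "dignity": 0.0},
    "war":       {"lives": 0.0,           "dignity": 1.0},
}

# With weights that price dignity low, the calculation outputs "sacrifice"...
print(decide(options, {"lives": 1.0, "dignity": 1.0}))      # sacrifice
# ...but fixing the parameters differently makes "war" the output.
# The procedure is the same either way; only the preference parameters change.
print(decide(options, {"lives": 1.0, "dignity": 1.0e10}))   # war
```

This mirrors the comment's claim: if "war to the death" really is the right decision, the fix belongs in the parameters fed to the calculation, not in abandoning the calculation itself.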

Comment author: [deleted] 31 July 2009 03:35:05AM 0 points [-]

I'm not convinced utilitarian reasoning can always be applied to situations where two preferences come into conflict: calculating "secondary" uncertain factors which could influence the value of each decision ruins the possibility of exactness. Even in the trolley problem, in all its simplicity, each decision has repercussions whose values have some uncertainty. Thus a decision doesn't always have a strict value, but a probable value distribution! We make a trolley decision by 1) considering only so many iterations in trying to get a value distribution, and 2) seeing if there is a satisfying lack of overlap between the two. When the two distributions overlap too much (and you know that they are approximate, due to the intractability of getting a perfect distribution), it's really a wild guess to say one decision is best.

Utilitarian calculation helps the process, by providing means of deciding when each value probability distribution is sharply enough defined, and whether the overlap meets your internal maximum overlap criteria (presuming that's sharply defined!), but no amount of reasoning can solve every moral dilemma a person might face.
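The idea of each decision having a value distribution rather than a value, with an overlap criterion deciding when a comparison is trustworthy, can be sketched with a small Monte Carlo estimate. The distributions and the threshold here are made up purely for illustration:

```python
# Rough sketch of the comment's idea: each decision's value is a probability
# distribution, not a number, and we only trust "A is better than B" when
# the distributions barely overlap. All numbers are invented.
import random

random.seed(0)

def sample_values(mean, spread, n=10_000):
    """Monte Carlo samples of a decision's uncertain value."""
    return [random.gauss(mean, spread) for _ in range(n)]

def prob_a_beats_b(a_samples, b_samples):
    """Estimate P(value(A) > value(B)) by pairing samples."""
    wins = sum(a > b for a, b in zip(a_samples, b_samples))
    return wins / len(a_samples)

a = sample_values(mean=10.0, spread=1.0)  # decision A: fairly certain value
b = sample_values(mean=9.0, spread=5.0)   # decision B: similar mean, huge spread

p = prob_a_beats_b(a, b)
# With this much overlap, p lands near 0.5: calling A "best" is close to a
# wild guess. A decision rule might demand, say, p > 0.95 before trusting it.
confident = p > 0.95
```

Under these invented distributions the estimate comes out well short of any reasonable confidence threshold, which is exactly the "too much overlap" case the comment describes.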

Comment author: jwdink 29 July 2009 08:18:19PM 0 points [-]

Which of the decisions is (actually) the better one depends on the preferences of the one who decides

So if said planet decided that its preference was to perish, rather than sacrifice children, would this be irrational?

However, whatever the right decision is, there normally should be a way to fix the parameters of utilitarian calculation so that it outputs the right decision. For example, if the right decision in the topic problem is actually war to the death, there should be a way to more formally understand the situation so that the utilitarian calculation outputs "war to the death" as the right decision.

I don't see why I should agree with this statement. I was understanding a utilitarian calculation as either a) the greatest happiness for the greatest number of people or b) the greatest preferences satisfied for the greatest number of people. If a), then it seems like it might predictably give you answers that are at odds with moral intuitions, and have no way of justifying itself against these intuitions. If b), then there's nothing irrational about deciding to go to war with the aliens.

Comment author: Vladimir_Nesov 29 July 2009 08:38:01PM *  0 points [-]

So if said planet decided that its preference was to perish, rather than sacrifice children, would this be irrational?

You can't decide your preference. Preference is not what you actually do; it is what you should do, and it's encoded in your decision-making capabilities in a nontrivial way, so that you aren't necessarily capable of seeing what it is.

Compare preference to a solution to an equation: you can see the equation, you can take it apart into its constituent terms, but its solution is nowhere to be found explicitly. Yet this solution is (say) uniquely defined by the equation, and approximate methods for solving the equation (analogized to the actual decisions) tend to give their results in the general ballpark of the exact solution.
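The analogy can be made concrete with a familiar equation: x*x = 2 uniquely defines its positive solution, yet sqrt(2) appears nowhere in the equation's text, and an approximate solver only ever produces values in the ballpark of that hidden exact answer. A minimal sketch (the solver and step count are just illustrative):

```python
# The equation x*x = 2 uniquely defines its positive solution, but you can't
# read that solution off the equation's visible terms; you can only
# approximate it. (Analogy from the comment: equation ~ the person,
# exact solution ~ preference, approximate solver ~ actual decisions.)

def solve_approximately(f, x, steps):
    """Newton's method: each iterate is a decision-like approximation."""
    for _ in range(steps):
        fx = f(x)
        dfx = (f(x + 1e-8) - fx) / 1e-8  # numerical derivative of f at x
        x = x - fx / dfx
    return x

equation = lambda x: x * x - 2           # the "equation" is fully visible...
approx = solve_approximately(equation, x=1.0, steps=6)
# ...while its exact solution (1.41421356...) is nowhere explicit in it;
# the solver's outputs merely land close to that hidden value.
```

The iterates never equal the exact solution, but they cluster around it, which is the relationship the comment posits between a person's decisions and their preference.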

Comment author: jwdink 29 July 2009 09:13:34PM 0 points [-]

You can't decide your preference. Preference is not what you actually do; it is what you should do, and it's encoded in your decision-making capabilities in a nontrivial way, so that you aren't necessarily capable of seeing what it is.

You've lost me.

Comment author: Vladimir_Nesov 29 July 2009 09:18:53PM *  0 points [-]

The analogy in the next paragraph was meant to clarify. Do you see the analogy?

A person in this analogy is an equation together with an algorithm for approximately solving that equation. Decisions that the person makes are the approximate solutions, while preference is the exact solution hidden in the equation that the person can't solve exactly. The decision algorithm tries to make decisions as close to the exact solution as it can. The exact solution is what the person should do, while the output of the approximate algorithm is what the person actually does.

Comment author: jwdink 29 July 2009 10:25:07PM 0 points [-]

I suppose I'm questioning the validity of the analogy: equations are by nature descriptive, while what one ought to do is prescriptive. Are you familiar with the Is-Ought problem?

Comment author: [deleted] 31 July 2009 03:12:58AM *  1 point [-]

jwdink, I don't think Vladimir Nesov is making an Is-Ought error. Think of this: You have values (preferences, desired ends, emotional "impulses" or whatever) which are a physical part of your nature. Everything you decide to do, you do because you Want to. If you refuse to acknowledge any criteria for behavior as valuable to you, you're saying that what feels valuable to you isn't valuable to you. This is a contradiction!

An Is-Ought problem arises when you attempt to derive a Then without an If. Here, the If is given: If you value what you value, then you should do what is right in accordance with your values.

Comment author: Vladimir_Nesov 29 July 2009 10:35:03PM 1 point [-]

The problem is a confusion. Human preference is something implemented in the very real human brain.

Comment author: eirenicon 29 July 2009 08:02:58PM *  0 points [-]

Some people would say that dying honorably is better than living dishonorably. I'm not endorsing this view, I'm just trying to figure out why it's irrational, while the utilitarian sacrifice of children is more rational.

If a decision decreases [personal] utility, is it not irrational?

Some people would say that it is dishonourable to hand over your wallet to a crackhead with a knife. When I was actually in that situation, though (hint: not as the crackhead), I didn't think about my dignity. I just thought that refusing would be the dumbest, least rational possible decision. The only time I've ever been in a fight is when I couldn't run away. If behaving honourably is rational then being rational is a good way to get killed. I'm not saying that being rational always leads to morally satisfactory decisions. I am saying that sometimes you have to choose moral satisfaction over rationality... or the reverse.

As for the trolley problem, what we are dealing with is the aftermath of the trolley problem. If you save the people on the trolley, it could be argued that you have behaved dishonourably, but what about the people you saved? Surely they are innocent of your decision. If humanity is honourably wiped out by the space monsters, is that better than having some humans behave dishonourably and others (i.e. those who favoured resistance, but were powerless to effect it) survive honourably?

Comment author: jwdink 29 July 2009 08:12:57PM *  0 points [-]

If a decision decreases utility, is it not irrational?

I don't see how you could go about proving this.

As for the trolley problem, what we are dealing with is the aftermath of the trolley problem. If you save the people on the trolley, it could be argued that you have behaved dishonourably, but what about the people you saved? Surely they are innocent of your decision. If humanity is honourably wiped out by the space monsters, is that better than having some humans behave dishonourably and others (i.e. those who favoured resistance, but were powerless to effect it) survive honourably?

Well, wait. Are we dealing with the happiness that results in the aftermath, or are we dealing with the moral value of the actions themselves? Surely these two are distinct. Don't the intentions behind an action factor into the morality of the action? Or are the results all that matter? If intentions are irrelevant, does that mean that inanimate objects (entities without intentions, good or bad) can do morally good things? If a tornado diverts from a city at the last minute, was that a morally good action?

I think intentions matter. It might be the case that, 100 years later, the next generation will be happier. That doesn't mean that the decision to sacrifice those children was the morally good decision-- in the same way that, despite the tornado-free city being a happier city, it doesn't mean the tornado's diversion was a morally good thing.

Comment author: eirenicon 29 July 2009 08:44:26PM 1 point [-]

I should have said "decreases personal utility." When I say rationality, I mean rationality. Decreasing personal utility is the opposite of "winning".

Comment author: jwdink 29 July 2009 09:19:01PM 0 points [-]

Instrumental rationality: achieving your values.  Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about.  The art of choosing actions that steer the future toward outcomes ranked higher in your preferences.  On LW we sometimes refer to this as "winning".

Couldn't these people care about not sacrificing autonomy, and therefore this would be a value that they're successfully fulfilling?

Comment author: Vladimir_Nesov 29 July 2009 09:27:48PM 0 points [-]

Yes, they could care about either outcome. The question is whether they did, whether their true hidden preferences said that a given outcome is preferable.

Comment author: jwdink 29 July 2009 10:25:48PM 0 points [-]

What would be an example of a hidden preference? The post to which you linked didn't explicitly mention that concept at all.

Comment author: Vladimir_Nesov 29 July 2009 10:37:27PM *  0 points [-]

All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.

Comment author: Vladimir_Nesov 29 July 2009 05:44:49PM 1 point [-]
Comment author: jwdink 29 July 2009 07:05:28PM 1 point [-]

Yeah, the sentiment expressed in that post is usually my instinct too.

But then again, that's the problem: it's an instinct. If my utilitarian impulse is just another impulse, then why does it automatically outweigh any other moral impulses I have, such as a value of human autonomy? If my utilitarian impulse is NOT just an impulse, but somehow is objectively more rational and outranks other moral impulses, then I have yet to see a proof of this.

Comment author: Psy-Kosh 29 July 2009 07:47:52PM 1 point [-]

"shut up and multiply" is, in principle, a way to weigh various considerations like the value of autonomy, etc etc etc...

It's not "here's shut up and multiply" vs "some other value here", but "plug in your values + actual current situation including possible courses of action and compute"

Some of us are then saying "it is our moral position that human lives are so incredibly valuable that a measure of dignity for a few doesn't outweigh the massively greater suffering/etc that would result from the implied battle that would ensue from the 'battle of honor' route"

Comment author: jwdink 29 July 2009 08:08:13PM *  0 points [-]

Ah, then I misunderstood. A better way of phrasing my challenge might be: it sounds like we might have different algorithms, so prove to me that your algorithm is more rational.

No one has answered this challenge.

Comment author: Psy-Kosh 29 July 2009 05:37:00PM 0 points [-]

If you take an action that you know will result in a greater amount of death/suffering, just for the sake of your own personal dignity, do you actually deserve any dignity from that?

ie, one can rephrase the situation as "are you so selfish as to put your own personal dignity above many many human lives?" (note, I have not watched the Torchwood episodes in question, merely going at this based on the description here.)

IF fighting them or otherwise resisting is known to be futile and IF there's sufficient reason to suspect that they will keep their word on the matter, then the question becomes "just about everyone gets killed" vs "most survive, but some number of kids get taken to suffer, well, whatever the experience of being used as a drug is. (eventual death within a human lifespan? do they remain conscious long past that? etc etc etc...)"

That doesn't make the second option "good", but if the choices available amount to those two options, then we need to choose one.

"Everyone gets killed, but at least we get some 'warm fuzzies of dignity'" would actually seem to potentially be a highly immoral decision.

Having said that: don't give up searching for alternatives or ways to fight the monsters in question that don't result in automatic defeat. What's said above applies to the pathological dilemma in the least convenient possible world, where we assume there really are no plausible alternatives.

Comment author: jwdink 29 July 2009 07:24:17PM 3 points [-]

Well, sure, when you phrase it like that. But your language begs the question: it assumes that the desire for dignity/autonomy is just an impulse/fuzzy feeling, while the desire for preserving human life is an objective good that is the proper aim for all (see my other posts above). This sounds probable to me, but it doesn't sound obvious/rationally derived/etc.

I could after all, phrase it in the reverse manner. IF I assume that dignity/autonomy is objectively good:

then the question becomes "everyone preserves their objectively good dignity" vs. "just about everyone loses their dignity for destroying human autonomy, but we get that warm fuzzy feeling of saving some people." In this situation, "Everyone loses their dignity, but at least they get to survive--in the way that any other undignified organism (an amoeba) survives" would actually seem to be a highly immoral decision.

I'm not endorsing either view, necessarily. I'm just trying to figure out how you can claim one of these views is more rational or logical than the other.

Comment author: Psy-Kosh 29 July 2009 07:49:27PM 1 point [-]

Well then... I'd say a morality that puts the dignity of a few people (the decision makers) as having more importance than, well, the lives and well being of the majority of the human race is not a very good morality.

ie, I am claiming "it seems to be that a consequence of my morality is that..."

Alternately "sure, maybe you value 'battle of honor' more than human lives, but then your values don't seem to count as something I'd call morality"

Comment author: jwdink 29 July 2009 08:13:51PM 0 points [-]

Well then... I'd say a morality that puts the dignity of a few people (the decision makers) as having more importance than, well, the lives and well being of the majority of the human race is not a very good morality.

Okay. Would you say this statement is based on reason?