JGWeissman comments on Open Thread: May 2009 - Less Wrong

Post author: steven0461 01 May 2009 04:16PM


Comment author: JGWeissman 11 May 2009 09:57:32PM 1 point

Suppose the king has 10 people prepared to be hanged. They stand on the gallows trap door with nooses around their necks. The king shows you a lever that will open the trap door and kill the 10 victims. The king informs you that if you do not pull the lever within one hour, the 10 people will be freed and you will be executed.

Here the king has set up the situation, but you will be the last sentient being capable of moral reasoning in the causal chain that kills 10 people. Is your conclusion different in this scenario?

Comment author: mattnewport 11 May 2009 10:23:52PM 0 points

The king here is more diabolical, and the scenario you describe is more traumatic. I believe it does change the intuitive moral response to the scenario, but I don't believe it changes my conclusion of the morality of the act. I feel that I'd still direct my moral outrage at the king and absolve the 11th man of moral responsibility.

This is where these kinds of artificial moral thought experiments start to break down, though. In real situations analogous to this, I believe the uncertainty in the outcomes of various actions (together with other unspecified details of the situation) would overwhelm the 'pure' decision made on the basis of the thought experiment. I'm unconvinced that such intuition pumps enhance understanding of a problem.

Comment author: conchis 11 May 2009 11:22:20PM 1 point

Why is this where the thought experiments suddenly start to break down? Sure, it's a less convenient world for you, but I don't see why it's any more artificial than the original problem, and you didn't seem to take issue with that.

Comment author: mattnewport 11 May 2009 11:58:24PM 0 points

I have taken issue with the use of thought experiments generally in previous comments, partly because it seems to me that they break down rapidly when pushed further into 'least convenient world' territory. I'm skeptical in general of the value of thought experiments in revealing philosophical truths of any kind, ethical or otherwise. They are often designed by construction to trigger intuitive judgments based on scenarios so far from actual experience that those judgments are rendered highly untrustworthy.

I answered the original question to say that yes, I did agree that the 11th man was not acting immorally here. I suspect this particular thought experiment is constructed as an intuition pump to generate the opposite conclusion, and to the extent that the first commenter is correct that the view that the 11th man has done nothing immoral is a minority position, it would seem it has served its purpose.

I've attempted to explain why I think the intuition that this is morally questionable is generated, and why I think it's not to be fully trusted. I don't intend to endorse the use of such thought experiments as a good method for examining moral questions, though.

Comment author: conchis 12 May 2009 11:44:48AM 1 point

Fair enough. It was mainly the appearance of motivated stopping that I was concerned with.

While I share some general concerns about the reliability of thought experiments, in the absence of a better alternative, the question doesn't seem to be whether we use them or not, but how we can make best use of them despite their potential flaws.

In order to answer that question, it seems like we might need a better theory than we currently have of when they're especially likely to be poor guides. It's not obvious, for example, that their information content increases monotonically with realism. Many real-world issues seem too complicated for simple intuitions to be much of a guide to anything.*

As well as trying to frame scenarios in ways that reduce noise/bias in our intuitions, we can also try to correct for the effect of known biases. A good example would be adjusting for scope insensitivity. But we need to be careful about coming up with just-so stories to explain away intuitions we disagree with. E.g. you claim that the altruist intuition is merely a low-cost signal; I claim that the converse is merely self-serving rationalization. Both of these seem like potentially good examples of confirmation bias at work.

Finally, it's worth bearing in mind that, to the extent that our main concern is that thought experiments provide noisy (rather than biased) data, this could suggest that the solution is more thought experiments rather than fewer (for standard statistical reasons).

* And even if information content did increase with realism, realism doesn't seem to correspond in any simple way to convenience (as your comments seem to imply). Not least because convenience is a function of one's favourite theory as much as it is a function of the postulated scenario.
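The "standard statistical reasons" above can be made concrete. If each thought experiment is treated as a noisy but unbiased reading of some underlying judgment, averaging more of them shrinks the spread of the estimate roughly as 1/sqrt(n). A minimal sketch (the "true value" and noise level here are invented purely for illustration):

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 1.0  # hypothetical underlying judgment on some scale
NOISE_SD = 0.5    # each thought experiment gives a noisy, unbiased reading

def averaged_intuition(n):
    """Average n noisy, unbiased 'intuition readings'."""
    return statistics.mean(random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n))

# The spread of the averaged estimate shrinks roughly as 1/sqrt(n),
# so more (independent) thought experiments give a steadier answer.
for n in (1, 4, 16, 64):
    spread = statistics.stdev(averaged_intuition(n) for _ in range(2000))
    print(n, round(spread, 3))
```

The caveat, of course, is that this only helps if the errors are noise rather than a shared bias; averaging many intuitions that are all skewed the same way just gives a confident wrong answer.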

Comment author: MrHen 12 May 2009 01:48:58PM 1 point

I would be interested in hearing more on this subject. It sounds similar to Hardened Problems Make Brittle Models. Do you have any good jumping-off points for further reading?

Comment author: conchis 12 May 2009 05:05:34PM 0 points

I don't, but I'd second the call for any good suggestions.

Comment author: mattnewport 12 May 2009 08:52:33PM 0 points

Many real-world issues seem too complicated for simple intuitions to be much of a guide to anything.

I don't consider moral intuitions simple at all, though. In fact, in the case of morality I suspect that trying to apply principles derived from simple thought experiments to moral decisions is likely to produce results roughly as good as trying to catch a baseball by doing differential equations with a pencil. It seems fairly clear to me that our moral intuitions have been carefully honed by evolution to be effective at achieving a purpose (which has nothing much to do with an abstract concept of 'good'), and when a simplified line of reasoning conflicts with moral intuitions I tend to trust the intuitions more than the reasoning.

There seem to be cases where moral intuitions are maladapted to the modern world and result in decisions that appear sub-optimal, either because they directly conflict with other moral intuitions or because they tend to lead to outcomes that are worse for all parties. I place the evidentiary bar quite high in these cases, though: there needs to be a compelling case for why the moral intuition is suspect. A thought experiment is unlikely to reach that bar; carefully collected data and a supporting theory are in with a chance.

I am also wary of bias in what people suggest should be thrown out when such conflicts arise. If our intuitions seem to conflict with a simple conception of altruism, maybe what we need to throw out is the simple conception of altruism as a foundational 'good', rather than the intuitions that produce the conflict.

Comment author: conchis 12 May 2009 09:39:58PM 0 points

I confess to being somewhat confused now. Your previous comment questioned the relevance of moral intuitions generated by particular types of thought experiments, and argued (on what seem to me pretty thin grounds) against accepting what seemed to be the standard intuition that the 11th man's not-sacrificing is morally questionable.

In contrast, this comment extols the virtues of moral intuitions, and argues that we need a compelling case to abandon them. I'm sure you have a good explanation for the different standards you seem to be applying to intuitive judgments in each case, but I hope you'll understand if I say this appears a little contradictory at the moment.

P.S. Is anyone else sick to death of the baseball/differential equations example? I doubt I'll actually follow through on this, but I'm seriously tempted to automatically vote down anyone who uses it from now on, just because it's becoming so overused around here.

P.P.S. On re-reading, the word "simple" in the sentence you quoted was utterly redundant. It shouldn't have been there. Apologies for any confusion that may have caused.

Comment author: mattnewport 12 May 2009 10:49:18PM 0 points

I made a few claims in my original post: i) I don't think the 11th man is acting immorally by saving himself over the 10; ii) most people would think he is acting immorally; iii) most people would choose to save themselves if actually confronted with this situation; iv) most people would consider the 11th man's moral failing to be forgivable. I don't have hard evidence for any claim except i), they are just my impressions.

The contradiction I see here is mostly between what most people say they would do and what they would actually do. One possible resolution is to say that self-sacrifice is the morally right thing to do but that most people are morally weak. Another is to say that self-sacrifice is not a morally superior choice, and that therefore most people would not be acting immorally in this situation by not self-sacrificing. I lean towards the latter, and would explain the conflict by saying that people see more value in signaling altruism cheaply (by saying they would self-sacrifice in an imaginary scenario) than in actually being altruistic in a real scenario. In other words, people tend to over-value altruism in hypothetical moral scenarios relative to actual moral decisions. I believe this tendency is harmful and leads to worse outcomes, but a full explanation of my thinking there would be a much longer post than I have time for right now.

Conflicts can exist between different moral intuitions when faced with an actual moral decision, and resolving them is not simple, but that's a different case from conflicts between intuitions about what imaginary others should do in imagined scenarios and intuitions about what one should do oneself in a real scenario.

If you have a better alternative to the baseball/differential equations example I'd happily use it. It's the first example that sprang to mind, probably due to its being commonly used here.

Comment author: conchis 13 May 2009 10:24:22AM 0 points

Your argument seems to me to conflate judgments that "X-ing is wrong" with predictions that one would not X if faced with a particular choice in real life.

If I say "X-ing is wrong, but actually, if ever faced with this situation I would quite possibly end up X-ing because I'm selfish/weak" (which is what I and others have said elsewhere) then (a) there's no conflict to resolve; and (b) it doesn't make much sense to claim that my judgment that "X is wrong" is a cheap signal of altruism. In fact I've just signaled the opposite.

Now, if people change their moral judgments from "X-ing is wrong" to "X-ing is permissible", then I agree that there's a conflict to resolve. But it seems that cognitive dissonance provides an explanation of this behavior at least as good as cheap talk.

FWIW, If you want a self-interested explanation of the stated judgment that "X-ing is wrong", I wonder whether moral censure (i.e. trying to convince others that they shouldn't X, even though you will ultimately X) would be a better one than signaling. Not necessarily mutually exclusive I guess.

Comment author: mattnewport 13 May 2009 10:39:32PM 0 points

Your argument seems to me to conflate judgments that "X-ing is wrong" with predictions that one would not X if faced with a particular choice in real life.

Judgements that a choice is morally wrong are clearly not the same thing as predictions about whether people would make that choice. Given the way I view morality, though, a wide gulf between the two is indicative of a problem to be resolved. I see the purpose of morality as providing a framework for solving something analogous to an iterated prisoner's dilemma: if we can all agree to impose certain restrictions on our own actions, because we all expect to do better if everyone sticks to the rules, then we have a system of morality.

Humans have a complex interplay of instinctive moral intuitions and cultural norms that together form a moral framework that exists because it provides a reasonably stable solution to living in mutually beneficial societies. That doesn't mean it can't be improved, just that its very existence implies that it works reasonably well.

The problem, then, with a moral dilemma that presents a wide gap between what people say should be done and what people would actually do is that it suggests a flaw in the moral framework. A stable framework will generally require that decisions people can agree are right (in that we'd expect on average to be better off if we all followed them) are also decisions people can plausibly commit to taking when faced with the problem. It's like the pre-commitment problem discussed before on Less Wrong. You might wish to argue for an idealized morality that sets standards for what people should do that differ from what most people would do, but then you have to make a plausible case for why what people actually do is wrong. Further, I'd argue you have to make a case for how your system could be implemented with actual people in a stable fashion; an idealized morality that is not achievable with actual people is not very interesting to me.

Ultimately I don't take a utilitarian view of morality - that what is 'good' is what maximizes utility across all agents. I take an 'enlightened self interest' view - that what is 'good' is what all agents can agree is a framework that will tend to lead to better expected outcomes for each individual if each individual constrains his own immediate self interest in certain ways.
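The iterated-dilemma framing here can be sketched in a few lines. The payoff matrix below is the standard textbook one, not anything specified in the thread; the point is just that two self-interested agents who accept the "constrain yourself" rule (here, tit-for-tat) each end up better off than two who never do:

```python
# Standard prisoner's dilemma payoffs: (my move, their move) -> my payoff.
# C = cooperate (constrain immediate self-interest), D = defect.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Return each player's total payoff over repeated play."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
```

Each constrained agent scores 300 against another constrained agent, versus 100 each for unconstrained agents, which is the sense in which mutual restraint can be defended on self-interest alone rather than on total utility.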

Comment author: dclayh 12 May 2009 01:43:39AM 0 points

They are often designed by construction to trigger intuitive judgments based on scenarios so far from actual experience that those judgments are rendered highly untrustworthy.

Agreed; however it's important to distinguish between this sort of appeal-to-intuition and the more rigorous sort of thought experiment that appeals to reasoning (e.g. Einstein's famous Gedankenexperimente).

Comment author: JGWeissman 11 May 2009 10:56:48PM 1 point

I don't believe it changes my conclusion of the morality of the act.

Given that your defense of the morality was based on the inaction of not self-sacrificing, and that in this scenario inaction means self-sacrifice and you have to actively kill the other 10 people to avoid it, what reasoning supports keeping the same conclusion?

Comment author: mattnewport 11 May 2009 11:17:33PM 0 points

I'm comparing the inaction to the not-self-sacrificing, not to the lack of action. I attempted to clarify the distinction when I said the similarity was not 'anything specific about the way the question is phrased'.

The similarity is not about the causality but about the cost paid. In many 'morality of inaction' problems the cost to self is usually so low as to be negligible, but in fact all actions carry a cost. I see the problem not as primarily one of determining causality but more as a cost-benefit analysis. Inaction is usually the 'zero-cost' option; action carries a cost (which may be very small, like pressing a button, or extremely large, like jumping in front of a moving trolley). The benefit is conferred directly on other parties and indirectly on yourself, according to what value you place on the welfare of others (and possibly according to other criteria).
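As a toy formalization of this cost-benefit view (the weighting function, the altruism weight, and the numbers below are purely hypothetical illustrations, not anything from the thread):

```python
def should_act(cost_to_self, benefit_to_others, altruism_weight=0.2):
    """Act iff the weighted benefit to others exceeds the cost to self.

    Inaction is the zero-cost baseline; altruism_weight is a hypothetical
    parameter encoding how much the agent values others' welfare relative
    to their own.
    """
    return benefit_to_others * altruism_weight > cost_to_self

# A near-zero cost (pressing a button) for a large benefit: act.
print(should_act(cost_to_self=0.01, benefit_to_others=10))  # True

# A cost comparable to the benefit (jumping in front of the trolley): don't.
print(should_act(cost_to_self=5.0, benefit_to_others=10))   # False
```

On this framing the intuitive asymmetry between 'sins of commission' and 'sins of omission' falls out of the cost term rather than out of causality.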

I think our moral intuition is primed to distinguish between freely chosen actions taken to benefit ourselves that ignore fairly direct negative consequences on others (which we generally view as morally wrong) and refraining from taking actions that would harm ourselves but would fairly directly benefit others (which may or may not be viewed as morally wrong but are generally seen as 'less wrong' than the former). We also seem primed to associate direct action with agency and free choice (since that is usually what it represents) and so directly taken actions tend to lead to events being viewed as the former rather than the latter.

I believe the moral 'dilemma' represented by carefully constructed thought experiments like this represents a conflict between our 'agency recognizing' intuition that attempts to distinguish directly taken action from inaction and our judgement of sins of commission vs. omission. Given that the unusual part of the dilemma is the forced choice imposed by a third party (the evil king) it seems likely that the moral intuition that is primed to react to agency is more likely to be making flawed judgements.

Comment author: conchis 12 May 2009 12:00:56PM 0 points

I see the problem not as primarily one of determining causality but more as a cost-benefit analysis.

This makes sense to me, but it seems to run counter to the nature of MrHen's original claim that the issue is lack of responsibility. For example, if it's all about CBA, then you would presumably be more uneasy about MrHen's hostage example ($100 vs. 10 lives) than he seems to be. Presumably also you would become even more uneasy were it $10, or $1, whereas MrHen's argument seems to suggest that all of this is irrelevant because you're not responsible either way.

Am I understanding you correctly?

Comment author: mattnewport 12 May 2009 08:31:09PM 0 points

In this example I wouldn't hold someone morally responsible for the murders if they failed to pay the $100 ransom; that responsibility still lies firmly with the person taking the hostages. Depending on the circumstances, I would probably consider it morally questionable to fail to pay such a low cost for such a high benefit to others, though. That's a little different to the question of moral responsibility for the deaths, however.

Note that I also don't consider an example like this morally equivalent to not donating $100 to a charity that is expected to save 10 lives as a utilitarian/consequentialist view of morality would tend to hold.

Comment author: MrHen 12 May 2009 01:43:13PM 0 points

Well, you are certainly understanding me correctly.