siodine comments on Undiscriminating Skepticism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (1329)
I think it shows someone is trying to "solve" a hypothetical or be clever, because with a trivial amount of deliberation they would anticipate the interlocutor's response and reformulate. Moreover, none of this engages the point of the exercise, which you're free to argue against without being opaque. E.g., "okay, clearly the point of this trolley experiment is to see if my moral intuitions align with consequentialism or utilitarianism; I don't think this experiment does that because blah blah blah."
Moreover, moral reasoning is hypothetical if you're sufficiently reflective.
Well, in what kinds of things does moral reasoning conclude? I suppose I would say 'actions and evaluations' or something like that. Can you think of anything else?
Moral reasoning should inform your moral intuitions--what you'll do in the absence of an opportunity to reflect. How do you prepare your moral intuitions for handling future scenarios?
Well, regardless of whether we have time to reflect or not, I take it moral reasoning or moral intuitions conclude either in an action or in something like an evaluative judgement. This would distinguish such reasoning, I suppose, from theoretical reasoning which begins from and concludes in beliefs. Does that sound right to you?
An evaluative judgement is an action; you're fundamentally saying moral reasoning has consequences. I agree with that, of course. I don't think it distinguishes it from theoretical reasoning.
By 'action' I mean something someone might see you do, something undertaken intentionally with the aim of changing something around you. But when we ask someone to react to a trolley problem, we don't expect them to act as a result of their reasoning (since there's no actual trolley). We just want them to reply. So sometimes moral reasoning concludes merely in a judgement, and sometimes it concludes in an action (if we were actually in the trolley scenario, for example) that will, I suppose, also involve a judgement. Does all this seem reasonable to you?
This would go quicker if you gave your conclusion and then we talked about the assumptions, rather than building from the assumptions to the conclusion (I think it's that you want to say hypotheticals produce different results than reality). But to answer your question, I don't think that giving a result to the trolley problem merely results in a judgement. I think it also potentially results in reflective equilibrium of moral intuitions, which then possibly results in different decisions in the future (I've had this experience). I think it also potentially affects the interlocutor or audience.
I've already given you my conclusion, such as it is: not that hypotheticals produce different results, but that reasoning about hypotheticals can't be moral reasoning. I'm just trying to think through the problem myself, I don't have a worked out theory here, or any kind of plan. If you have a more productive way to figure out how hypotheticals are related to moral reasoning then I'm happy to pursue that.
Right, but I'm just talking about the posing of the question as an invitation for someone to think about it. The aim or end result of that thinking is some kind of conclusion, and I'm just asking what kinds of conclusions moral reasoning ends in. Since we use moral reasoning in deciding how to act, I take it for granted that one kind of conclusion is an action: "It is right to X, and possible for me to X, therefore..." and then comes the action. When someone is addressing a trolley problem, they might think to themselves: "If one does X, one will get the result A, and if one does Y, one will get the result B. A is preferable to B, so..." and then comes the conclusion. The conclusion in this case is not an action, but just the proposition that "...given the circumstances, one should do X."
ETA: So, supposing that reasoning about the trolley problem here is moral reasoning (as opposed to, say, the sort of reasoning we're doing when we play a game of chess), then moral reasoning can conclude sometimes in actions, and sometimes in judgements.
Suppose I sit down at time T1 to consider the hypothetical question of what responses I consider appropriate to various events, and I conclude that in response to event E1 I ought to take action A1. Then at T2, E1 occurs, and I take action A1 based on reasoning of the form "That's E1, and I've previously decided that in case of E1 I should perform A1, so I'm going to perform A1."
If I've understood you correctly, the only question being discussed here is whether the label "moral reasoning" properly applies to what occurs at T1, T2, both, or neither.
Can you give me an example of something that might be measurably different in the world under some possible set of conditions depending on which answer to that question turns out to be true?
You've understood me perfectly, and that's an excellent way of putting things. I think there's an interpretation of those variables such that both what occurs at T1 and at T2 could be called moral reasoning, especially if one expects E1 to occur. But suppose you just, by way of armchair reasoning, decide that if E1 ever happens, you'll A1. Now suppose E1 has occurred, but suppose also that you've forgotten the reasoning which led you to conclude that A1 would be right: you remember the conclusion, but you've forgotten why you thought it. That scenario would, I believe, satisfy your description, and it would be a case in which your action is quite suspect. Not wholly so, since you may have good reason to believe your past decisions are reliable, but if you don't know why you're acting when you act, you're not acting in a fully rational way.
I think it would be appropriate to say, in this case, that you are not to be morally praised (e.g. "you're a good person", "you're a hero", etc.) for such an action (if it is good) in quite the measure you would be if you knew what you were doing. I bring up praise just because this is an easy way for us to talk about what we consider to be the right response to morally good action, regardless of our theories. Does all this sound reasonable?
If what went on at T1 was fully moral reasoning, then no part of the moral action story seems to be left out: you reasoned your way to an action, and at some later time undertook that action. But if it's true that we would consider an action in which you've forgotten your reasoning a defective action, less worthy of moral praise, then we must consider it important that the reasoning be present to you as you act.
And I take it for granted, I suppose, that we don't consider it terribly praiseworthy for someone to come to a bunch of good conclusions from the armchair and never make any effort to carry them out.
In case of a possible misunderstanding: I didn't mean to imply that moral reasoning is literally hypothetical, but that hypotheticals can be a form of moral reasoning (and I hope we aren't arguing about what 'reasoning' is). The problem that I think you have with this is that you believe hypothetical moral reasoning doesn't generalize? If so, let me show you how that might work.
And this could go on and on until you've recalibrated your moral intuitions using hypothetical moral reasoning, and now when asked a similar hypothetical (or put in a similar situation) your immediate intuition is to look at the consequences. Why is the hypothetical part useful? It uncovers previously unquestioned assumptions. It's also a nice compact form for discussing such issues.
We're not, and I understand. We do disagree on that claim: I'm suggesting that no moral reasoning can be hypothetical, and that if some bit of reasoning proceeds from a hypothetical, we can know on the basis of that alone that it's not really moral reasoning. I'm thinking of moral reasoning as the kind of reasoning you're morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed. That sort of thing.
This is a good framing, thanks. By 'on and on' I assume you mean that the reasoner should go on to examine his decision to look at expected consequences, and perhaps more importantly his preference for the world in which five people live. After all, he shouldn't trust that any more than the intuition, right?