Imagine that you are being asked a question; a moral question involving an imaginary world. From prior experience, you have learnt that people behave in a certain way: people are, for the most part, applied thinkers, and whatever your answer is, it will become a cached thought that will be applied in the real world, should the situation arise. The whole rationale behind thinking of imaginary worlds may be to create cached thoughts.
Your answer probably won't stay segregated in the well-defined imaginary world for any longer than it takes the person who asked the question to change the topic; it is the real-world consequences you should be most concerned about.
Given this, would it not be rational to perhaps miss the point, but answer that sort of question in a real-world way?
To give a specific example, consider this question from The Least Convenient Possible World:
You are a doctor in a small rural hospital. You have ten patients, each of whom is dying for the lack of a separate organ; that is, one person needs a heart transplant, another needs a lung transplant, another needs a kidney transplant, and so on. A traveller walks into the hospital, mentioning how he has no family and no one knows that he's there. All of his organs seem healthy. You realize that by killing this traveller and distributing his organs among your patients, you could save ten lives. Would this be moral or not?
First of all, note that the question is not the abstract "If [you are absolutely certain that] the only way to save 10 innocent people is to kill 1 innocent person, is it moral to kill?". There are a lot of details. We are even told that this one person is a traveller; I am not exactly sure why, but I would think it appeals to kin-selection-related instincts: the traveller has lower utility to the village than a resident.
In light of how people process answers to such detailed questions, and how the answers are incorporated into thought patterns - which might end up used in the real world - is it not in fact most rational not to address that kind of question exactly as specified, but to point out that one of the patients could be taken apart for the benefit of the other 9? And to point out the poor quality of life and life expectancy of the surviving patients?
Indeed, as a solution one could gather all the patients and let them discuss how to solve the problem; perhaps one will decide to be sacrificed, perhaps they will decide to draw straws, perhaps only those with the worst prognosis will draw the straws. If they're comatose, one could have a panel of 12 peers make the decision. There could easily be trillions of possible solutions to this not-so-abstract problem, and 'trillions' is not a figure of speech here. Privileging one solution is similar to privileging a hypothesis.
In this example, the utility of any villager to the doctor can be higher than that of the traveller, who will never return; hence the doctor would opt to take the traveller apart for spare parts instead of picking one of the patients, based on some cost-benefit metric, and sacrificing that patient for the benefit of the others. The choice we're asked about turns out to be just one of the options, chosen selfishly; it is the deep selfishness of the doctor that makes him realize that killing the traveller may be justified, but not realize the same about one of the patients, for that selfishness biased his thought towards exploring one line of reasoning but not the other.
Of course one can say that I missed the point, and one can employ backward reasoning and tweak the example by stating that those people are aliens, and the traveller is totally histocompatible with each patient, but none of the patients is compatible with any other (that's how alien immune systems work: there are some rare mutant aliens whose tissues are not rejected at all by anyone else).
But to do so would be to completely lose the point of why we should expend mental effort searching for alternative solutions. Yes, it is defensive thinking - but what does it defend us from? In this case it defends us from making a decision based on incomplete reasoning or a faulty model. All real-world decisions are also made in imaginary worlds - in what we imagine the world to be.
Morality requires a sort of 'due process': the good-faith reasoning effort to find the best solution rather than the first solution that the selfish subroutines conveniently present for consideration; to probe the models for faults; to try to think outside the highly abbreviated version of the real world one might initially construct when considering the circumstances.
The imaginary-world situation here is just an example, and so is the answer: an example of the reasoning that should be applied to such situations - reasoning that strives to explore the solution space and to test the model for accuracy.
Something else, tangential to the main point of this article: if I had 10 differently broken cars and 1 working one, I wouldn't even think of taking apart the working one for spare parts; I'd take apart one of the broken ones. The same would apply to, say, having 11 children: 1 healthy, 10 in need of different organ replacements. The option one would naturally think of is to take the one least likely to survive and sacrifice it for the other 9; no one in their right mind would even consider taking apart the healthy one unless there were very compelling prior reasons. This seems to be something we would only entertain for a stranger. There may be hidden kin-selection-based cognitive biases that affect our moral reasoning.
edit: I don't know if it is OK to be editing published articles, but I'm a bit of an obsessive-compulsive perfectionist and I plan on improving this piece for publishing on LessWrong (edit: I mean not LessWrong Discussion), so I am going to take the liberty of improving some of the points, and perhaps also removing duplicate argumentation and cutting down the verbosity.
I share your irritation with the Chinese room experiment. I don't share the same objection to the discussed hospital scenario; the level of non-realism is much lower in the latter. The Chinese room tacitly assumes that all involved agents are normal people (so that our intuitions about knowing and understanding hold), while also assuming that the man in the room can learn a vast algorithm which we have not even been able to develop as a computer program yet. In the hospital case, the non-realism is of the sort "this doesn't usually happen".
Consider the scenario put in this form:
You have this dialogue with your doctor:
doctor: "I've had nightmares recently because what I've done. I feel I can't keep it for myself anymore and it can easily be you whom I tell my secret, if you don't object, of course."
you: "Go on."
"Well, I have killed a man. I have done it to save others, but still I suspect I might have done something very wrong. There were ten patients in the hospital, all in need of organ transplants. Each needed a different organ, and each of them was in a serious danger of dying if a donor doesn't appear quickly. Then a stranger wandered in. He wanted a routine checkup, but from the blood test I realised that, accidentally, he would probably be an ideal donor for all ten patients we had in the hospital. You know, we don't receive many donors in our hospital and we had little time. Almost certainly, this was the last chance to save those people."
"But, you couldn't be sure that the transplants would be successful, just based on a simple blood test."
"Of course, when I got the idea, I told the man that I need to do more tests to rule out my suspicion of a serious disease. I also asked him questions about his personal life to find out whether he had children or family who would regret his death. It turned out to not be the case."
"Yes, but even with an ideal donor, the quality and lenght of life of the transplantees are usually poor."
"Actually, according to my statistics half of the patients will survive twenty years with modest inconveniences. That's five people. One or two of the remaining five are going to die in a couple of years, but still, I was buying twenty years of life for five patients who would die in few weeks otherwise. The stranger was in his fifties so he could live for thirty years more."
"But, wasn't there another solution? You could kill one of the patients and use his organs, for example."
"Do you think this didn't occur to me? It couldn't be done. The patients were closely monitored and their families would sue the hospital under the slightest suspicion. If the truth comes out, the hospital will certainly lose the trust of the public, and perhaps be closed, causing many unnecessary deaths in the future. I was able to kill the stranger and arrange it as a traffic accident with head injury. I haven't been able to do that with the patients, or make up an alternative plan to secretly kill any of them."
"But there have to be thousands of alternative solutions. Literally."
"Maybe there were. I had thought about it for several days and no alternative solution had occured to me. After few days, the stranger insisted he couldn't stay longer. At that point I hadn't any alternative solution available, I was choosing between basically two alternatives. I chose to kill."
So, what are you going to do? Will you call the police? Will you morally blame the doctor? In this setting you can't call for alternative solutions. The scenario is not probable, of course, but there are no blatantly absurd assumptions which would allow you to discard it as completely implausible, as in the Chinese room case.
Well, look at how you had to arrive at this example. There had to have been an iteration with a traveller, and the example had to be adjusted to make it so that this traveller is an ideal donor for 10 people, none of whom is a good donor for the remaining 9. We're down to probabilities easily below 10^-10, meaning 'not expected to ever have happened in the history of medicine'. (Whereas the number of worldwide cases in which someone got killed for organs is easily in the tens of thousands.) The human immune system does not work so conveniently for your argument, so you'll …