torekp comments on Are Deontological Moral Judgments Rationalizations? - Less Wrong
I can't speak to what is traditional, and I don't mind declaring all historical utilitarians wrong in all their debates with non-utilitarians; then again, I wouldn't mind saying the opposite, either.
Human morality demands a certain amount of thought. Many actions demand moral consideration: without it, their being "good" is no more than luck, and their being bad is negligence.
Upon thinking about it, one realizes that those who think about it should (shouldthosewhothinkaboutit) push the fat man. Those who don't think about it shouldn't (shouldn'tthosewhodon'tthinkaboutit) push the fat man, but should (shouldthosewhodon'tthinkaboutit) think about it.
To ask about an unclarified "should" is like asking about an unclarified "sound".
It is important to bear in mind that blame is something humans spray paint onto the unalterable causality of the world, and not to think that either the paint is unalterable because causality is, or that causality is alterable because the paint is.
We can blame humans fully, partially, or not at all for the consequences when: they are unthinking; they do what unthinking people should do; there are negative consequences; thinking people should have done a different thing; and those humans should have been thinking people but weren't.
Everything has been explained. There is nothing left in asking whether a person really should have done what a thinking person should have done had he or she been thinking, when the person should have been thinking, and unthinking people were not obligated to do that thing.
One of us hasn't thought enough about it, because I think it takes more than thinking about it. One would also have to know oneself to be largely immune to various biases, which make most humans more prone to rationalize false conclusions about the need to kill someone for the greater good than to correctly grasp a true utilitarian Trolley Problem. One would have to be human+, if not human(+N). (I think one would also have to live in a human+ or human(+N) community, but never mind about that.)
Note that Greene and other cognitive scientists rarely, if ever, spell out an airtight case in which the actions save either one life or five lives and magically have no further consequences, and in which the utilitarian calculus is therefore clear. Greene simply describes the case more or less as Luke does above, and then leaves the subjects to infer, or not infer, whatever consequences they might.