What I'm wondering, in other words, is this: Is our reluctance to carry out an act that we may have judged to be morally justifiable a symptom that the decision-making software we think we're running is not the software we're actually running?

Doesn't the use of the word 'how' in the question "If "yes", how inherently right would it have to be, for how many babies?" presuppose that the person answering the question believes that the 'inherent rightness' of an act is measurable on some kind of graduated scale? If that's the case, wouldn't assigning a particular 'inherent rightness' to an act be, by definition, the result of a series of calculations?

What I mean is, if you've 'finished' calculating and have determined that killing the babies is a morally justifiable (and/or necessary) act, but there is a residual unwillingness in your psyche to actually perform the act, isn't that just a sign that you haven't finished your calculations yet, and that what you thought of as your moral decision-making framework is in fact incomplete?

Then we'd be talking about the interaction of two competing moral frameworks... but from a larger perspective, the framework you used to calculate the original 'inherent rightness' of the act is itself a complicated process, one that could arguably be broken down, conceptually, into competing sub-frameworks.

So maybe what we're actually dealing with, as we ponder this conundrum, is the issue of 'how do we detect when we've finished running our moral decision-making software?'