CronoDAS comments on Undiscriminating Skepticism - Less Wrong

97 points · Post author: Eliezer_Yudkowsky 14 March 2010 11:23PM




Comment author: DonGeddis 16 March 2010 10:06:06PM 27 points

Proposed litmus test: infanticide.

General cultural norms label this practice as horrific, and most people's gut reactions concur. But a good chunk of rationality is separating emotions from logic. Once you've used atheism to eliminate a soul, and humans are "just" meat machines, and abortion is an OK, if perhaps regrettable, practice ... well, scientifically, there just isn't all that much difference between a fetus a couple of months before birth and an infant a couple of months after.

This doesn't argue that infants have zero value, but instead that they should be treated more like property or perhaps like pets (rather than like adult citizens). Don't unnecessarily cause them to suffer, but on the other hand you can choose to euthanize your own, if you wish, with no criminal consequences.

Get one of your friends who claims to be a rationalist. See if they can argue passionately in favor of infanticide.

Comment author: CronoDAS 16 March 2010 10:17:33PM 3 points

If I agreed with this logic, should I be reluctant to admit it here?

Comment author: byrnema 16 March 2010 11:04:02PM 2 points

Agreeing with the logic is OK, but the problem with reductionism is that if you draw no lines, you'll eventually find that there's no difference between anything.

Thus the basic reductionist/humanist conflict: how does one escape the 'logic' and draw a line?

Comment author: pengvado 16 March 2010 11:46:48PM 10 points

Draw a gradient rather than a line. You don't need sharp boundaries between categories if the output of your judgment is quantitative rather than boolean. You can assign similar values to similar cases, and dissimilar values to dissimilar cases.

See also The Fallacy of Gray. Now you're obviously not falling for the one-color view, but that post also talks about what to do instead of staying with black-and-white.
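Pengvado's suggestion, quantitative judgments rather than boolean ones, can be sketched in code. This is a toy illustration of my own, not anything from the thread: the single `development` variable and the logistic weighting are invented for the example.

```python
import math

def moral_weight(development: float, midpoint: float = 0.5,
                 steepness: float = 10.0) -> float:
    """Map a continuous 'development' score in [0, 1] to a weight in (0, 1).

    A smooth logistic curve assigns similar weights to similar cases and
    dissimilar weights to dissimilar cases -- no sharp boundary required.
    (All parameters here are arbitrary choices for illustration.)
    """
    return 1.0 / (1.0 + math.exp(-steepness * (development - midpoint)))

# Nearby cases get nearby weights; distant cases get very different ones.
print(moral_weight(0.48), moral_weight(0.52))  # close together
print(moral_weight(0.05), moral_weight(0.95))  # far apart
```

A thresholded (boolean) version of the same function would force a sharp line at the midpoint; the gradient version never has to commit to one.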

Comment author: byrnema 17 March 2010 01:38:31AM 7 points

Sure. But I was referring to my worry that if you don't allow your values to be arbitrary (e.g., I don't care about protecting fetuses but I care about protecting babies), you may find you wouldn't have any. I guess I'm imagining a story in which a logician tries to argue me down a slippery slope of moral nihilism; there'll be no step I can point to that I shouldn't have taken, but I'll find I stepped too far. When I retreat uphill to where I feel more comfortable, can I expect to have a logical justification?

Comment author: pengvado 17 March 2010 03:52:19AM 16 points

I'm not sure what "arbitrary" means here. You don't seem to be using it in the sense that all preferences are arbitrary.

a story in which a logician tries to argue me down a slippery slope of moral nihilism

If the nihilist makes a sufficiently circuitous argument, they can ensure that there's no step you can point to that's very wrong. But by doing so, they will make slight approximations in many places. Each such step loses an incremental amount of logical justification, and if you add up all the approximations, you'll find that they've approximated away any correlation with the premises. You don't need to avoid following the argument too far, if you appropriately increase your error bars at each step.

In short: "similar" is not a transitive relation.
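The non-transitivity of "similar", and the way small approximations accumulate into a large one, can be made concrete. A toy sketch of my own construction (the chain and the tolerance are invented for illustration):

```python
def similar(a: int, b: int, eps: int = 1) -> bool:
    """Two cases count as 'similar' if they differ by at most eps."""
    return abs(a - b) <= eps

# A sorites-style chain: each case differs from its neighbor by only 1.
chain = list(range(101))

# Every adjacent pair is similar...
assert all(similar(chain[i], chain[i + 1]) for i in range(len(chain) - 1))

# ...yet the endpoints are not: "similar" is not a transitive relation.
assert not similar(chain[0], chain[-1])

# Honest error bars: after n steps, each within tolerance eps, the
# accumulated uncertainty is n * eps -- here it swamps any single step.
accumulated = (len(chain) - 1) * 1
assert accumulated == 100
```

Each step of the argument is individually unobjectionable; only tracking the accumulated tolerance reveals that the conclusion has drifted arbitrarily far from the premises.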

Comment author: byrnema 18 March 2010 06:09:52PM 4 points

From your answer, I guess that you do think we have 'justifications' for our moral preferences. I'm not sure. It seems to me that on the one hand, we accept that our preferences are arational, but then we don't really assimilate this. (If our preferences are arational, they won't have logical justifications.)

Comment author: gregconen 18 March 2010 06:42:56PM 4 points

I'm not sure what "arbitrary" means here. You don't seem to be using it in the sense that all preferences are arbitrary.

That seems to be exactly how he's using it. It would be how I'd respond, had I not worked it through already. But there is a difference between the arbitrariness in "the difference between an 8.5-month fetus and a 15-day infant is arbitrary" and in "the decision that killing people is wrong is arbitrary".

Yes, at some point you need at least one arbitrary principle. Once you have an arbitrary moral principle, you can make non-arbitrary decisions about the morality of situations.

There's a lot more about this in the whole sequence on metaethics.

Comment author: byrnema 18 March 2010 07:35:44PM 6 points

I am generally confused by the metaethics sequence, which is why I didn't correct Pengvado.

at some point you need at least one arbitrary principle. Once you have an arbitrary moral principle, you can make non-arbitrary decisions about the morality of situations.

Agreed, as long as you have found a consistent set of arbitrary principles to cover the whole moral landscape. But since our preferences are given to us, broadly, by evolution, shouldn't we expect that our principles operate locally (context-dependent) and are likely to be mutually inconsistent?

So when we adjust to a new location in the moral landscape and the logician asks us to justify our movement, it seems that, generally, the correct answer would be to shrug and say, 'My preferences aren't logical. They evolved.'

If there's a difference in two positions in the moral landscape, we needn't justify our preference for one position. We just pick the one we prefer. Unless we have a preference for consistency of our principles, in which case we build that into the landscape as well. So the logician could pull you to an (otherwise) immoral place in the landscape unless you decide you don't consider logical consistency to be the most important moral principle.

Comment author: gregconen 18 March 2010 08:03:39PM 3 points

But since our preferences are given to us, broadly, by evolution, shouldn't we expect that our principles operate locally (context-dependent) and are likely to be mutually inconsistent?

Yes.

I have a strong preference for a simple set of moral preferences, with minimal inconsistency.

I admit that the idea of holding "killing babies is wrong" as a separate principle from "killing humans is wrong", or holding that "babies are human" as a moral (rather than empirical) principle simply did not occur to me. The dangers of generalizing from one example, I guess.

Comment author: simplicio 17 March 2010 04:30:44AM 3 points

Each such step loses an incremental amount of logical justification, and if you add up all the approximations, you'll find that they've approximated away any correlation with the premises. You don't need to avoid following the argument too far, if you appropriately increase your error bars at each step.

In short: "similar" is not a transitive relation.

This was rather elegantly put.