Eliezer_Yudkowsky comments on Emotional Basilisks - Less Wrong

-2 Post author: OrphanWilde 28 June 2013 09:10PM


Comment author: Eliezer_Yudkowsky 28 June 2013 11:04:38PM 7 points [-]

Would you kill babies if it was intrinsically the right thing to do? If not, under what other circumstances would you not do the right thing to do? If yes, how right would it have to be, for how many babies?

EDIT IN RESPONSE: My intended point had been that sometimes you do have to fight the hypothetical.

Comment author: Kawoomba 29 June 2013 07:03:33AM *  4 points [-]

What about consequentialism? What if we'd get a benevolent AI as a reward?

We should never fight the hypothetical. If we get undesirable results in a hypothetical, that's important information about our decision algorithm. It's like getting a chance to be falsified and not wanting to face it. We could also just fight Parfit's Hitchhiker or Newcomb's problem. We shouldn't, and neither should we here.

Comment author: Dagon 29 June 2013 09:40:04AM 1 point [-]

Is there a difference between fighting the hypothetical and recognizing that the hypothetical is badly defined and needs so much unpacking that it's not worth the effort? This falls into the latter category IMO.

"Negative impact on happiness" is far too broad a concept, "theism" is a huge cluster of ideas, and the harm/benefit to different individuals over different timescales has to be part of the decision. Separating these out enough to even know what choice you're facing would likely render the exercise pointless.

My gut feel is that if this were unpacked enough to be a scenario that's well-defined enough to really consider, the conundrum would dissolve (or rather, it would be as complicated as the real world but not teach us anything about reality).

Short, speculative, personal answer: there may be individual cases where short-term lies are beneficial to the target in addition to the liar, but they are very unlikely to exist on any subject that has wide-ranging long-term decision impact.

Comment author: ArisKatsaris 28 June 2013 11:39:13PM *  4 points [-]

Would you kill babies if it was intrinsically the right thing to do?

Probably not.

If not, under what other circumstances would you not do the right thing to do?

Obviously, whenever the force of morality on my volition is overcome by the force of other non-moral preferences pulling in the opposite direction. (A mere aesthetic preference against baby-killing might suffice; likewise not wanting to go to jail or be executed.)

Comment author: [deleted] 28 June 2013 11:24:42PM *  2 points [-]

Yes, none, any amount at all for any amount at all...assuming no akrasia, and as long as you don't mean 'right thing to do' in some kind of merely conventional sense. But that's just because, without quotation marks, the right thing to do is the formal object of a decision procedure.

If that's so, then your question is similar to this:

Would you infer that P if P were the consequent of a sound argument? If not, under what other circumstances would you not infer the consequent of a sound argument?

Comment author: kalium 01 July 2013 02:17:14PM 2 points [-]

If you accept the traditional assumptions of Christianity (well, the ones about "what will happen if I do X," not about "is X right?"), killing babies is pretty clearly the right thing. And still almost nobody does it, or has any desire to do it.

A just-baptized infant, as far as I know, is pretty much certain to go to Heaven in the end. Whereas if it has time to grow up it has a fair chance of dying in a state of mortal sin and going to Hell. By killing it young you are very likely saving it from approximately infinite suffering, at the price of sending yourself to Hell and making its parents sad. Since you can only go to Hell once, if you kill more than one or two babies then you're clearly increasing global utility, albeit at great cost to yourself. And yet Christians are not especially likely to kill babies.
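The argument above can be sketched as a toy expected-utility calculation. Everything here is an illustrative assumption invented for the sketch (the probability of dying in mortal sin, the finite stand-ins for "approximately infinite" utilities, the cost to the parents); none of the numbers come from the comment itself.

```python
# Toy expected-utility sketch of kalium's argument.
# All quantities are illustrative assumptions, not claims about doctrine.

HEAVEN = 1e9          # finite stand-in for "approximately infinite" positive utility
HELL = -1e9           # finite stand-in for "approximately infinite" negative utility
P_MORTAL_SIN = 0.3    # assumed chance an adult dies in a state of mortal sin
PARENTS_GRIEF = -100  # assumed finite cost of making the parents sad

def utility_kill_young(n_babies):
    """The killer goes to Hell once; each baby is guaranteed Heaven."""
    return HELL + n_babies * (HEAVEN + PARENTS_GRIEF)

def utility_let_live(n_babies):
    """Each baby grows up, with some assumed chance of ending in Hell."""
    expected_afterlife = (1 - P_MORTAL_SIN) * HEAVEN + P_MORTAL_SIN * HELL
    return n_babies * expected_afterlife

# Under these assumptions, one baby doesn't "pay for" the killer's Hell,
# but two already do -- matching the "more than one or two" claim.
print(utility_kill_young(1) > utility_let_live(1))  # False
print(utility_kill_young(2) > utility_let_live(2))  # True
```

The structure of the argument is just that Hell is a fixed one-time cost to the killer, while each additional baby contributes a large positive term, so global utility crosses over after a small number of babies under any assignment with these signs.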

Comment author: quentin 28 June 2013 11:28:21PM *  1 point [-]

I don't see how this relates to the original post; this strikes me as a response to a claim of objective/intrinsic morality rather than the issue of resolving emotional basilisks vis-a-vis the Litany of Tarski. Are you just saying "it really depends"?

Comment author: DSherron 28 June 2013 11:44:48PM *  0 points [-]

This comment fails to address the post in any way whatsoever. No claim is made of the "right" thing to do; a hypothetical is offered, and the question asked is "what do you do?" It is not even the case that the hypothetical rests on an idea of an intrinsic "right thing" to do, instead asking us to measure how much we value knowing the truth vs happiness/lifespan, and how much we value the same for others. It's not an especially interesting or original question, but it does not make any claims which are relevant to your comment.

EDIT: That does make more sense, although I'd never seen that particular example used as "fighting the hypothetical", more just that "the right thing" is insufficiently defined for that sort of thing. Downvote revoked, but it's still not exactly on point to me. I also don't agree that you need to fight the hypothetical this time, other than to get rid of the particular example.

Comment author: TheAncientGeek 12 May 2015 05:14:02PM *  -1 points [-]

You've brought in moral realism, which isn't relevant.

"Would you do X if it were epistemically rational, but not instrumentally rational?"

"Would you do Y if it were instrumentally rational, but not epistemically rational?"

If two concepts aren't the same under all possible circumstances, they aren't the same concept. Hypotheticals are an appropriate way of determining that.