Lumifer comments on A Dialogue On Doublethink - Less Wrong

52 Post author: BrienneYudkowsky 11 May 2014 07:38PM



Comment author: Lumifer 09 May 2014 12:56:09AM -1 points [-]

If you sanction even one tiny exception, you lose the benefits of purity

What is that "purity" you're talking about? I didn't realize humans could achieve epistemic perfection.

Comment author: So8res 09 May 2014 01:12:23AM 10 points [-]

Keep in mind here that I'm steelmanning someone else's argument, perhaps improperly. I don't want to put words in anyone else's mouth. That said, I used the term 'purity' in loose analogy to a 'pure' programming language, wherein one exception is sufficient to remove much of the possible gains.

Continuing the steelmanning, however, I'd say that while no human can achieve epistemic perfection, there's a large class of epistemic failures that you only recognize if you're striving for perfection. Striving for purity, not purity itself, is what gets you the gains.

Comment author: BrienneYudkowsky 09 May 2014 02:34:10AM 9 points [-]

So8res, you're completely accurate in your interpretation of my argument. I'm going to read some more of your previous posts before responding much to your first comment here.

Comment author: Eugine_Nier 09 May 2014 01:38:07AM 6 points [-]

Yes, as Eliezer put it somewhat dramatically here:

If you once tell a lie, the truth is ever after your enemy.

To expand on this in context: as long as you are striving for the truth, any evidence you come across helps you; but once you choose to believe a lie, you must forever avoid disconfirming evidence.

Comment author: fezziwig 09 May 2014 07:41:33PM 3 points [-]

You've drawn an important distinction, between believing a lie and telling one. Right now we're talking about lying to ourselves so the difference isn't very great, but be very careful with that quote in general.

Comment author: fezziwig 09 May 2014 07:47:45PM 2 points [-]

You've drawn an important distinction, between believing a lie and telling one. Your formulation is correct, but Eliezer's is wrong.

Comment author: Eugine_Nier 12 May 2014 02:02:47AM 7 points [-]

Telling a lie has its own problems, as I discuss here.

Comment author: fezziwig 12 May 2014 08:08:10PM 1 point [-]

Yes, it's pretty much impossible to tell a lie without hurting other people, or at least interfering with them; that's the point of lying, after all. But right now we're talking about the harm one does to oneself by lying; I submit that there needn't be any.

Comment author: Eugine_Nier 12 May 2014 09:57:44PM 1 point [-]

Did you even read the comment I linked to? Its whole point was about the harm you do to yourself and your cause by lying.

Comment author: [deleted] 13 May 2014 01:15:48AM *  0 points [-]

I think you and fezziwig aren't disagreeing. You're saying as an empirical matter that lying can (and maybe often does) harm the liar. He's just saying that it doesn't necessarily harm the liar, and indeed it may well be that lies are often a net benefit. These are compatible claims.

Comment author: Armok_GoB 14 May 2014 12:15:17AM *  1 point [-]

One distinction that may or may not matter, but that many discussions fail to mention at all, is the distinction between telling a lie and maintaining it / keeping the secret. Many of the epistemic arguments seem to disappear if you've previously made it clear you might lie to someone, you intend to tell the truth a few weeks down the line, and, if pressed or questioned, you confess and tell the actual truth rather than trying to cover it with further lies.

Edit: also, have some kind of oath and special circumstance where you will in fact never lie, but precommit to only using it for important things, or give it a cost in some way so you won't be pressed to give it for everything.

Comment author: BrienneYudkowsky 09 May 2014 10:25:49PM 3 points [-]

I can already predict, though, that much of my response will include material from here and here.

Comment author: Lumifer 09 May 2014 01:35:34AM 0 points [-]

in loose analogy to a 'pure' programming language, wherein one exception is sufficient to remove much of the possible gains.

Could you give some examples?

there's a large class of epistemic failures that you only recognize if you're striving for perfection.

I am not sure which class you're talking about... again, can you provide some examples?