
Kawoomba comments on Emotional Basilisks - Less Wrong Discussion

Post author: OrphanWilde | 28 June 2013 09:10PM | -2 points




Comment author: Kawoomba | 29 June 2013 07:03:33AM | 4 points

What about consequentialism? What if we'd get a benevolent AI as a reward?

We should never fight the hypothetical. If a hypothetical yields undesirable results, that is important information about our decision algorithm; refusing to engage is like getting a chance to be falsified and not wanting to face it. We could just as easily fight Parfit's Hitchhiker or Newcomb's problem. We shouldn't, and neither should we here.
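Newcomb's problem is the standard example of a hypothetical whose "undesirable result" is informative. A minimal sketch, assuming the usual payoffs ($1,000,000 opaque box, $1,000 transparent box) and an assumed 99%-accurate predictor, of the expected-value gap that makes the hypothetical worth facing rather than fighting:

```python
# Illustrative sketch of the standard Newcomb's problem payoffs.
# The predictor accuracy of 0.99 is an assumption for illustration;
# the dollar amounts are the conventional ones.

ACCURACY = 0.99       # assumed probability the predictor is correct
BIG = 1_000_000       # opaque box payout if one-boxing was predicted
SMALL = 1_000         # transparent box payout, always available

# Expected value of one-boxing: you get BIG only when the predictor
# correctly foresaw that you would one-box.
ev_one_box = ACCURACY * BIG

# Expected value of two-boxing: you get SMALL for sure, plus BIG only
# when the predictor wrongly predicted one-boxing.
ev_two_box = (1 - ACCURACY) * BIG + SMALL

print(f"one-box: ${ev_one_box:,.0f}")   # one-box: $990,000
print(f"two-box: ${ev_two_box:,.0f}")   # two-box: $11,000
```

A decision algorithm that predictably walks away with the smaller sum here is being handed evidence about itself; declaring the scenario inadmissible throws that evidence away.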

Comment author: Dagon | 29 June 2013 09:40:04AM | 1 point

Is there a difference between fighting the hypothetical and recognizing that the hypothetical is so badly defined that it needs more unpacking than it's worth? This one falls into the latter category, IMO.

"Negative impact on happiness" is far too broad a concept, "theism" is a huge cluster of ideas, and the idea of harm/benefit on different individuals over different timescales has to be part of the decision. Separating these out enough to even know what the choice you're facing is will likely render the excercise pointless.

My gut feeling is that if this were unpacked into a scenario well-defined enough to really consider, the conundrum would dissolve (or rather, it would become as complicated as the real world and teach us nothing about reality).

Short, speculative, personal answer: there may be individual cases where short-term lies benefit the target as well as the liar, but such cases are very unlikely to exist on any subject with wide-ranging, long-term decision impact.