Multiheaded comments on Open Thread, August 16-31, 2012 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Some people might reasonably, and coherently, value valuing incoherent or unreachable values (in, so to say, compartmentalized good faith - that is, you might know that an algorithm is incoherent, prone to Dutch-booking, etc., but it still feels just fine from the inside) - just as some people think that belief in belief might have worth of its own, or are consciously hypocritical, etc.
Therefore, I'm against such one-level optimizing-away of already-held values; if you see that some specific value is a total mess, you might instead just compartmentalize a little, etc.
(I believe I've already mentioned the above to you at some point.)
BTW, a classic example of people valuing an unreachable value: "Love thy enemies". (Once I had an awesome experience meditating on it.)