
Oscar_Cunningham comments on How can humans make precommitments? - Less Wrong Discussion

6 Post author: Incorrect 15 September 2011 01:19AM




Comment author: Oscar_Cunningham 15 September 2011 09:38:48AM 5 points

No, this is a correct use of the LCPW (Least Convenient Possible World). The question asked how keeping to precommitments is rationally possible when the effects of carrying out your threat are bad for you. You took one example and explained why, in that case, retaliating wasn't in fact negative utility. But unless you think that this will always be the case (it isn't), the request for you to move to the LCPW is valid.

Comment author: jhuffman 15 September 2011 06:29:27PM 0 points

Yes, I think that is right. Perhaps the LCPW in this case is one in which retaliation is guaranteed to mean the end of humanity, so a preference for one set of values over another isn't applicable. This is somewhat explicit in a mutually-assured-destruction deterrence strategy, but nonetheless, once the other side pushes the button you have a choice to put an end to humanity or not. It's hard to come up with a utility function that prefers that, even considering a preference for keeping precommitments. It's like the Zeroth Law of Robotics: no utility evaluation can exceed the existence of humanity.
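The argument above can be sketched as a toy calculation (all names and utility numbers here are hypothetical illustrations, not anything from the thread): if the disutility of ending humanity dominates every finite bonus for honoring a precommitment, then no such bonus can flip the post-button-push decision toward retaliation.

```python
# Toy model of the choice after the other side has already fired.
# Assumption (hypothetical numbers): the disutility of ending humanity
# is chosen to dominate any finite bonus for keeping a precommitment.

HUMANITY_ENDS = -10**9    # catastrophic term, dominates everything else
COMMITMENT_BONUS = 100    # finite utility from keeping the precommitment

def utility(retaliate: bool) -> int:
    """Utility of retaliating once deterrence has already failed."""
    if retaliate:
        # The precommitment is kept, but humanity ends.
        return COMMITMENT_BONUS + HUMANITY_ENDS
    # The precommitment is broken; what remains of humanity survives.
    return 0

best = max([True, False], key=utility)
print(best)  # False: no finite commitment bonus outweighs the catastrophe
```

The sketch makes the "Zeroth Law" structure explicit: so long as the catastrophic term dominates, the commitment bonus is irrelevant to the comparison, which is exactly why a credible precommitment to retaliate is hard to reconcile with such a utility function.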