jhuffman comments on How can humans make precommitments? - Less Wrong

6 Post author: Incorrect 15 September 2011 01:19AM




Comment author: jhuffman 15 September 2011 06:29:27PM 0 points

Yes, I think that is right. Perhaps the LCPW (least convenient possible world) in this case is one in which retaliation is guaranteed to mean an end to humanity, so a preference for one set of values over another isn't applicable. This is somewhat explicit in a mutually assured destruction deterrence strategy, but nonetheless, once the other side pushes the button you have a choice to put an end to humanity or not. It's hard to come up with a utility function that prefers that, even considering a preference for honoring pre-commitments. It's like the Zeroth Law of Robotics: no utility evaluation can outweigh the existence of humanity.