Eugine_Nier comments on How can humans make precommitments? - Less Wrong

6 Post author: Incorrect 15 September 2011 01:19AM




Comment author: Eugine_Nier 15 September 2011 03:17:12AM 1 point [-]

You seem to be misunderstanding the purpose of the "least convenient possible world". The idea is that if your interlocutor gives a weak argument and you can think of a way to strengthen it, you should attempt to answer the strengthened version. You should not be invoking "least convenient possible world" to self-sabotage attempts to solve problems in the real world.

Comment author: Oscar_Cunningham 15 September 2011 09:38:48AM 5 points [-]

No, this is a correct use of LCPW. The question asked how keeping to precommitments is rationally possible when the effects of carrying out your threat are bad for you. You took one example and explained why, in that case, retaliating wasn't in fact negative utility. But unless you think this will always be the case (it isn't), the request for you to move to the LCPW is valid.

Comment author: jhuffman 15 September 2011 06:29:27PM 0 points [-]

Yes, I think that is right. Perhaps the LCPW in this case is one in which retaliation is guaranteed to mean an end to humanity, so a preference for one set of values over another isn't applicable. This is somewhat explicit in a mutually assured destruction deterrence strategy, but nonetheless, once the other side pushes the button you have a choice to put an end to humanity or not. It's hard to come up with a utility function that prefers that, even considering a preference for meeting pre-commitments. It's like the 0th law of robotics - no utility evaluation can exceed the existence of humanity.
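The utility comparison above can be sketched numerically. This is a minimal toy model with made-up numbers (none of the values or names come from the thread); it just illustrates why any finite bonus for honoring a precommitment struggles to outweigh an extinction-level penalty:

```python
# Hypothetical toy utilities (assumed for illustration, not from the thread).
U_EXTINCTION = -1e12    # assumed utility of ending humanity
U_SURVIVAL = 0.0        # baseline: humanity survives
COMMITMENT_BONUS = 1e6  # assumed value of keeping one's word

def utility(retaliate: bool) -> float:
    """Utility after the other side has already pushed the button."""
    if retaliate:
        # Retaliating honors the precommitment but ends humanity.
        return U_EXTINCTION + COMMITMENT_BONUS
    # Not retaliating breaks the precommitment but humanity survives.
    return U_SURVIVAL

# Retaliation loses unless the commitment bonus exceeds the extinction penalty.
print(utility(True) < utility(False))
```

Under these assumptions, retaliating is preferred only if `COMMITMENT_BONUS` exceeds the magnitude of `U_EXTINCTION`, which is the jhuffman's "0th law" point in miniature.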