Vladimir_Nesov comments on The Blackmail Equation - Less Wrong

Post author: Stuart_Armstrong 10 March 2010 02:46PM


Comment author: Vladimir_Nesov 10 March 2010 11:13:47PM

learning about the precommitment makes making an exception in just this one case "rational"

If you allow precommitments that are strategies, i.e. that react to what you learn (e.g. about other precommitments), you won't need any exceptions. You'd only have "blank" areas where you haven't yet decided on your strategy.
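
A minimal sketch of this reading, in Python (the toy model, class, and method names are illustrative assumptions, not anything from the thread): a precommitment is a policy from observations to actions, and an observation the policy doesn't cover is a "blank" area, not an occasion for an exception.

```python
from typing import Optional

class StrategyPrecommitment:
    """Toy model: a precommitment that is itself a conditional strategy."""

    def __init__(self) -> None:
        # Maps what the agent has learned to its committed response.
        self.policy: dict[str, str] = {}

    def decide(self, observation: str) -> Optional[str]:
        # A "blank" area returns None; the committed parts already
        # react to what is learned, so no exception is ever needed.
        return self.policy.get(observation)

    def fill_blank(self, observation: str, action: str) -> None:
        # Deciding a blank area extends the strategy; it never
        # overrides an already-committed response.
        self.policy.setdefault(observation, action)

agent = StrategyPrecommitment()
agent.fill_blank("learned of the blackmailer's precommitment", "refuse")
print(agent.decide("learned of the blackmailer's precommitment"))  # refuse
print(agent.decide("some novel situation"))                        # None (blank)
```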

Comment author: FAWS 10 March 2010 11:36:37PM

Have I ever said anything else? I believe I mentioned agents that come into existence precommitted, and my very first post in this thread mentioned such a fully general, indistinguishable-from-strategy precommitment. The case I described is the one where "precommitted first" makes sense, which is also the sort of case in the original post. Obviously the precise timing of a fully general precommitment made before the actors even learn about each other doesn't matter.

Comment author: Vladimir_Nesov 10 March 2010 11:58:35PM

Agreed. (I assume by non-general precommitments -- the timing of which matters -- you refer to specific unconditional strategies that don't take anything into account; obviously you won't want to make such a precommitment too early, or too late. I still think it's a misleading concept, as it suggests that precommitment imposes an additional limitation on one's actions, while, as you agree, it doesn't when that limitation isn't rational -- that is, when you've made a "general precommitment" to avoid it.)

Comment author: FAWS 11 March 2010 12:17:17AM

(I assume by non-general precommitments -- the timing of which matters -- you refer to specific unconditional strategies that don't take anything into account

I meant things like "I commit to one-box in Newcomb's problem" or "I commit not to respond to Baron Chastity's blackmail": specific precommitments you can only make after anticipating that situation. As a human it seems to be a good idea to make such a specific precommitment in addition to the general precommitment, for the psychological effect (this is also more obvious in time-travel scenarios), so I disagree that this is a misleading concept.
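
A hedged sketch of the distinction, under the same toy model as above (the function name and situation labels are hypothetical): a fully general precommitment is one conditional strategy fixed in advance, and each specific precommitment just pins that strategy's value at one anticipated situation.

```python
def general_strategy(situation: str) -> str:
    # Reacts to whatever situation is learned about, so its exact
    # adoption time doesn't matter, provided it is in place before
    # the actors learn about each other.
    committed = {
        "newcombs_problem": "one-box",
        "baron_chastitys_blackmail": "refuse",
    }
    return committed.get(situation, "undecided")

# The specific precommitments above -- makeable only after the
# situation is anticipated -- are restrictions of the general one:
assert general_strategy("newcombs_problem") == "one-box"
assert general_strategy("baron_chastitys_blackmail") == "refuse"
```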

Comment author: Vladimir_Nesov 11 March 2010 12:22:23AM

For humans, it's certainly a useful concept. For rational agents, the exceptions overwhelm it.

Comment author: FAWS 11 March 2010 12:46:28AM

Why should rational agents deliberately sabotage their ability to understand humans? Merely having a concept of something doesn't imply applying it to yourself. Not that I even see any noticeable harm in a rational agent applying the concept of a specific precommitment to itself; it might be useful for, e.g., modeling itself in hypothesis testing.

Comment author: Vladimir_Nesov 11 March 2010 01:04:10AM

Obviously.