Vladimir_Nesov comments on Unpacking the Concept of "Blackmail" - Less Wrong

Post author: Vladimir_Nesov 10 December 2010 12:53AM (25 points)




Comment author: Will_Sawin 10 December 2010 01:59:02PM, 1 point

I'm getting this more clearly figured out. In the language of ambient control, we have: You-program, Mailer-program, World-program, Your utility, Mailer utility

"Mailer" here doesn't mean anything. Anyone could be a mailer.

It is simpler with one mailer, but this can be extended to a multiple-mailer situation.

We write your utility as a function of your actions and the mailer's actions based on ambient control. This allows us to consider what would happen if you changed one action and left everything else constant. If the change would leave you with lower utility, we define the changed action to be a "sacrificial action".
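As a toy sketch of this definition (the action names and payoff numbers below are invented for illustration, not taken from the post): an action is sacrificial if switching to it, with the mailer's action held constant, strictly lowers your utility.

```python
def is_sacrificial(utility, your_action, alt_action, mailer_action):
    """True if playing `your_action` instead of `alt_action`, holding
    the mailer's action fixed, gives you strictly lower utility."""
    return utility(your_action, mailer_action) < utility(alt_action, mailer_action)

# Hypothetical blackmail payoffs: you choose "pay" or "refuse",
# the mailer chooses "threaten" or "leave_alone".
def my_utility(me, mailer):
    payoffs = {
        ("pay", "threaten"): -1,      # give in: lose the payment
        ("refuse", "threaten"): -10,  # stand firm: threat is carried out
        ("pay", "leave_alone"): -1,
        ("refuse", "leave_alone"): 0,
    }
    return payoffs[(me, mailer)]

# Refusing while threatened is sacrificial relative to paying:
print(is_sacrificial(my_utility, "refuse", "pay", "threaten"))  # True
```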

A "policy" is a strategy in which one plays a sacrificial action in a certain class of situation.

A "workable policy" is a policy such that playing it will induce the mailer to model you as an agent that plays that policy in a significant proportion of the times you play together, for either:

  1. causal reasons - they see you play the policy and deduce that you will probably continue to play it, or they see you not play it and deduce that you probably won't; or

  2. acausal reasons - they accurately model you and predict that you will/won't use the policy.

A "beneficial workable policy" is one where this modeling increases your utility.

Depending on the costs and benefits, a beneficial workable policy could be rational or irrational to adopt, as determined by ordinary decision theory. The name people use for it is irrelevant: people have given in to blackmail and stood up against it, given in to terrorism and stood up against it, and have reciprocated help or withheld it.
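The cost/benefit comparison can be sketched with invented numbers (everything here is hypothetical, just to show the shape of the calculation):

```python
def policy_value(p_modeled, u_if_modeled, u_if_not, cost_per_play):
    """Expected utility of committing to a sacrificial policy.

    p_modeled: fraction of interactions in which the mailer models you
               as a policy-player (what makes the policy "workable").
    u_if_modeled / u_if_not: your utility when the mailer does / does not
               treat you as a policy-player.
    cost_per_play: expected cost of actually playing the sacrificial action.
    """
    return p_modeled * u_if_modeled + (1 - p_modeled) * u_if_not - cost_per_play

# Baseline: always give in, paying -1 every time.
baseline = -1.0

# A reliable no-ransom policy deters almost all threats: worth adopting.
print(policy_value(0.95, 0.0, -10.0, 0.2) > baseline)  # True

# A leaky one, where the mailer often fails to model you, is not.
print(policy_value(0.60, 0.0, -10.0, 0.5) > baseline)  # False
```

The same workable policy thus comes out rational or irrational depending only on the numbers, exactly as ordinary decision theory would have it.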

Not responding to blackmail is one specific kind of policy that is frequently workable when dealing with humans. It corresponds to a conceptual category that humans create, one without fundamental decision-theoretic relevance.

Comment author: Vladimir_Nesov 10 December 2010 06:42:32PM, 0 points

> We write your utility as a function of your actions and the mailer's actions based on ambient control. This allows us to consider what would happen if you changed one action and left everything else constant.

It doesn't (at least not by varying one argument of that function), because of explicit dependence bias (this time I'm certain of it). Your action can acausally control the other agent's action, so if you only resolve uncertainty about the parameter of the utility function that corresponds to your own action, you are being logically rude by not taking into account possible inferences about the other agent's action (in the same way that CDT is logically rude in considering only the inferences that align with its definition of physical causality). From this, "sacrificial action" is not well-defined.
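As a toy illustration of the objection (the predictor and payoffs are hypothetical): if the mailer accurately predicts your action, the mailer's move is a function of yours, so holding it fixed while varying your own argument gives the wrong counterfactual.

```python
def mailer_move(predicted_you):
    # An acausal predictor: threatens only those it predicts will pay.
    return "threaten" if predicted_you == "pay" else "leave_alone"

def my_utility(me, mailer):
    return {("pay", "threaten"): -1, ("refuse", "threaten"): -10,
            ("pay", "leave_alone"): -1, ("refuse", "leave_alone"): 0}[(me, mailer)]

# Naive counterfactual (explicit dependence bias): vary your action only,
# holding the mailer's at "threaten" - refusing looks like a pure loss.
naive = my_utility("refuse", "threaten") - my_utility("pay", "threaten")   # -9

# Logical counterfactual: the mailer's move co-varies with the prediction
# of your action - refusing removes the threat and comes out ahead.
logical = (my_utility("refuse", mailer_move("refuse"))
           - my_utility("pay", mailer_move("pay")))                        # +1
print(naive, logical)  # -9 1
```

Whether refusal counts as "sacrificial" flips sign between the two counterfactuals, which is the sense in which the definition is not well-defined.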