FAWS comments on Unpacking the Concept of "Blackmail" - Less Wrong

Post author: Vladimir_Nesov 10 December 2010 12:53AM


Comments (136)


Comment author: Vladimir_Nesov 10 December 2010 04:52:18AM, 0 points

Agent 1 communicates that they will take option A if agent 2 takes option C and will take option B if agent 2 takes option D.

Correction: Retracted, likely wrong.

Explicit dependence bias detected. How agent 1 will decide generally depends on how agent 2 will decide: not just on the actual action, but on the algorithm, that is, on how the action is defined, not just on what is being defined. In multi-agent games this can't be sidestepped, and restating the problem can't sever ambient dependencies.

Comment author: FAWS 10 December 2010 05:20:50AM, 1 point

Bias denied.

First, I make no claims about the outcome of the negotiation, so there is no way that privileging one dependence over another could bias my estimate of it.

Second, I made no claim about any actual dependence, only about communication; and it would certainly be in a would-be blackmailer's interest to frame the dependence in the most inescapable way they can.

Third, agent 2 would need to be able to model communicated dependencies sensibly whether or not it has a concept of blackmail. How it models the dependence internally would bear on whether the blackmail succeeds, but that is a separate problem and should have no influence on whether the agent can recognize the relative utilities.

Comment author: Vladimir_Nesov 10 December 2010 01:57:19PM, 1 point

I wasn't thinking clearly; I no longer see this as an instance of explicit dependence bias, though it could be one. I'll keep working on this question, but no deadlines.