tim comments on The Evil AI Overlord List - Less Wrong

Post author: Stuart_Armstrong 20 November 2012 05:02PM




Comment author: tim 21 November 2012 06:40:10AM 0 points

My argument is more or less as follows:

  1. The act of agent A blackmailing agent B costs agent A more than not blackmailing agent B (at the very least A could use the time spent saying "if you don't do X then I will do Y" on something else).
  2. If A is an always-blackmail-bot then A will continue to incur the costs of futilely blackmailing B (given that B does not give in to blackmail).
  3. If the costs to A of blackmailing B (and/or following through with the threat) are zero or negative, i.e. the act carries no net cost, then A will blackmail B (and/or follow through) regardless of B's position on blackmail. In that case B's strategy cannot influence A's behavior, so B still has no incentive to switch from his or her never-give-in strategy.
  4. If A inspects B and determines that B will never give in to blackmail, then A will not waste resources blackmailing B.
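The payoff structure behind points 2 and 4 can be sketched numerically. The payoff values below (threat cost, payout, concession cost) are illustrative assumptions, not figures from the comment:

```python
# Hypothetical payoffs for one blackmail encounter between agent A (the
# blackmailer) and agent B (the target). All numbers are illustrative.

def blackmail_payoffs(a_blackmails: bool, b_gives_in: bool,
                      threat_cost: float = 1.0, payout: float = 5.0,
                      concession_cost: float = 5.0):
    """Return (A's payoff, B's payoff) for a single encounter."""
    if not a_blackmails:
        return (0.0, 0.0)           # no threat issued, nothing happens
    if b_gives_in:
        return (payout - threat_cost, -concession_cost)
    return (-threat_cost, 0.0)      # futile threat: A pays, B loses nothing

# Point 2: an always-blackmail-bot facing a never-give-in B keeps paying.
a_total = sum(blackmail_payoffs(True, False)[0] for _ in range(10))
print(a_total)  # -10.0: ten futile threats at cost 1 each

# Point 4: if A can inspect B and sees that B never gives in,
# a prudent A declines to blackmail and avoids the cost entirely.
def prudent_a(b_gives_in_to_blackmail: bool) -> bool:
    return b_gives_in_to_blackmail  # only threaten targets that would pay

print(blackmail_payoffs(prudent_a(False), False))  # (0.0, 0.0)
```

Under these assumed payoffs, never giving in weakly dominates for B: refusing costs B nothing whether or not A issues the threat, while A's expected value from blackmailing a committed refuser is strictly negative.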
Comment author: Strange7 14 April 2014 02:01:39AM -1 points

Blackmail, almost definitionally, only happens in conditions of incomplete information.