Less Wrong is a community blog devoted to refining the art of human rationality.

Unpacking the Concept of "Blackmail"

Post author: Vladimir_Nesov 10 December 2010 12:53AM (25 points)

Keep in mind: Controlling Constant Programs, Notion of Preference in Ambient Control.

There is a reasonable game-theoretic heuristic, "don't respond to blackmail" or "don't negotiate with terrorists". But what is actually meant by the word "blackmail" here? Does it have a place as a fundamental decision-theoretic concept, or is it merely an affective category, a class of situations activating a certain psychological adaptation that expresses disapproval of certain decisions and on net protects (benefits) you, like the adaptations that respond to "being rude" or "offense"?

We, as humans, have a concept of a "default", a "do-nothing strategy". Other plans can be compared to the moral value of the default: doing harm is something worse than the default, doing good something better than the default.

Blackmail is then a situation where, by the decision of another agent (the "blackmailer"), you are presented with two options, both of which are harmful to you (worse than the default), and one of which is better for the blackmailer. The alternative (if the blackmailer decides not to blackmail) is the default.

Compare this with the same scenario, but with the "default" action of the other agent being worse for you than the given options. This would be called normal bargaining, as in trade, where both parties benefit from the exchange of goods, but to different extents depending on the price that is set.
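
This payoff test can be sketched as a toy classifier (the function, names, and numbers below are my own illustrative assumptions, not part of the original argument):

```python
def classify(default, options):
    """Label a forced two-option choice by comparing the victim's payoff
    for each option against the payoff of the other agent's "default"
    (do-nothing) action."""
    if all(o < default for o in options):
        return "blackmail"  # every option on the table harms you vs. the default
    if all(o > default for o in options):
        return "trade"      # every option beats the default; you gain either way
    return "other"

# "Pay $1000 or I blow up your car": the default (no interaction) is 0.
print(classify(0, [-1000, -20000]))    # -> blackmail
# Buying food at one of two prices, when the default is going hungry:
print(classify(-10**6, [-50, -80]))    # -> trade
```

The point of the post is precisely that this `default` argument is doing all the work: the payoffs alone, without it, cannot separate the two cases.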

Why is the "default" special here? If bargaining or blackmail did happen, we know that the "default" is impossible. How can we tell the two situations apart, then, from their payoffs (or models of uncertainty about the outcomes) alone? It's necessary to tell these situations apart in order to manage not responding to threats while at the same time cooperating in trade (instead of making things as bad as you can for the trade partner, no matter what it costs you). Otherwise, abstaining from doing harm looks exactly like doing good: a charitable gift of not blowing up your car, and so on.

My hypothesis is that "blackmail" is what your mind's suggestion not to cooperate feels like from the inside: the answer to a difficult problem computed by cognitive algorithms you don't understand, not a simple property of the decision problem itself. By saying "don't respond to blackmail", you are pushing most of the hard work into the intuitive categorization of decision problems into "blackmail" and "trade", with only the correct interpretation of the results of that categorization left as an explicit exercise.

(A possible direction for formalizing these concepts involves introducing some kind of notion of resources, maybe amount of control, and instrumental vs. terminal spending, so that the "default" corresponds to less instrumental spending of controlled resources, but I don't see it clearly.)

(Let's keep on topic and not refer to powerful AIs or FAI in this thread, only discuss the concept of blackmail in itself, in decision-theoretic context.)

Comments (136)

Comment author: Pfft 10 December 2010 01:52:18AM 20 points [-]

I wonder if this question is related to the revulsion many people feel against certain kinds of price discrimination tactics. I mean things like how in the 19th century, train companies would put intentionally uncomfortable benches in the 3rd class carriages in order to encourage people to buy 2nd class tickets, or nowadays software that comes with arbitrary, programmed-in restrictions that can be removed by paying for the "professional" version.

People really don't like that! It seems like there is some folk-ethics norm that "if you can make me better off with no effort on your part, then you have an obligation to do so", which seems like part of a "no blackmail" condition.

Comment author: Tesseract 10 December 2010 11:34:06AM 7 points [-]

That makes sense from a reciprocal altruism perspective. If someone can benefit you at no cost to themself, and doesn't, that probably indicates a lack of intent to cooperate under all circumstances. The natural response is hostility.

Comment author: [deleted] 10 December 2010 03:34:50PM 5 points [-]

In Bombay, the only difference between first- and second-class cars is the price. The second-class cars are more crowded. I've been trying to think of a nice analogy to blackmail but couldn't.

Comment author: Larks 10 December 2010 01:36:59AM 14 points [-]

It seems that we cry blackmail when a Schelling point already exists, and the other agent is threatening to force us below it. The moral outrage functions as a precommitment to punish the clear defection.

In normal human life, 'do nothing' is the Schelling point, because most people don't interact with most people. But sometimes the Schelling point does move, and it seems what constitutes blackmail does too: if a child's drowning in a pond, and I tell you I'll only fish him out if you give me $1,000, it seems like I'm blackmailing you.

Sometimes both sides feel like they're being blackmailed though; like when firefighters go on strike, and both city hall and the union accuse the other of endangering people. Could this be put down to coordination problems?

Comment author: byrnema 10 December 2010 02:22:41AM *  8 points [-]

if a child's drowning in a pond, and I tell you I'll only fish him out if you give me $1,000, it seems like I'm blackmailing you.

Perhaps a borderline case like this is most helpful. Is this extortion? Even though the default in this case isn't 'doing nothing': the default is saving the child, because that is what someone should do.

So maybe the word is difficult to unpack because it has morality behind it. A person shouldn't bomb your car, and shouldn't expose your private secrets. On the other hand, they needn't give you food, so it's OK to ask for money for that.

If I demand money for being faithful to my husband, then that is extortion, because I'm supposed to be faithful. If, however, I want a divorce and would divorce him, I'm allowed to let him pay me for faithfulness. Such gray areas indicate to me that it is indeed about some notion of expected/moral behavior.

Selling food to starving families -- when they have become so poor that you ought to give them food for free -- is also extortion.

So: demanding more compensation when you should do it for less (or demanding any when you should do it for free).

Comment author: Vladimir_Nesov 10 December 2010 01:39:12AM *  3 points [-]

Don't even get me started on how ill-defined and far from being formally understood the concept of "Schelling point" is. It's very useful in informal game theory of course.

Comment author: Larks 10 December 2010 01:43:38AM *  1 point [-]

Yeah, I'm reading The Strategy of Conflict at the moment. Still, it seems that working out Schelling points would give us blackmail, whilst understanding blackmail some other way wouldn't give us Schelling points (as the latter can exist without communication, etc.)

Comment author: nazgulnarsil 10 December 2010 02:03:43AM 1 point [-]

Schelling.

Comment author: Larks 10 December 2010 02:24:40AM 0 points [-]

fixed, cheers

Comment author: Will_Sawin 10 December 2010 02:04:33AM *  0 points [-]

A Schelling point is a kind of Nash equilibrium, right? It's the kind of equilibrium that an understanding of human psychology and the details of the situation says you should expect.

The union-firefighter dispute looks like a variant on the hawk-dove/chicken game. If the default is (Dove, Dove), which isn't an equilibrium, Hawk can be seen as a blackmail action, since it makes you worse off than the default. So at (Hawk, Hawk) everyone is, in fact, being blackmailed, and this is, essentially, a coordination problem.
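
The chicken-game reading can be checked mechanically. A sketch with illustrative payoffs (the numbers are an assumption, chosen only so that mutual escalation is worst and backing down against a Hawk still beats crashing):

```python
from itertools import product

# (row action, col action) -> (row payoff, col payoff); illustrative numbers.
payoffs = {
    ("Dove", "Dove"): (2, 2),   # the default: nobody escalates
    ("Dove", "Hawk"): (1, 3),
    ("Hawk", "Dove"): (3, 1),
    ("Hawk", "Hawk"): (0, 0),   # strike meets lockout: everyone loses
}

def is_pure_nash(row, col):
    """A profile is a pure Nash equilibrium if neither side gains by deviating."""
    r, c = payoffs[(row, col)]
    row_ok = all(payoffs[(alt, col)][0] <= r for alt in ("Dove", "Hawk"))
    col_ok = all(payoffs[(row, alt)][1] <= c for alt in ("Dove", "Hawk"))
    return row_ok and col_ok

equilibria = [p for p in product(("Dove", "Hawk"), repeat=2) if is_pure_nash(*p)]
print(equilibria)  # (Dove, Dove) is absent: either side gains by playing Hawk
```

So the intuitive default (Dove, Dove) is exactly the outcome that is not stable, which is why each side can frame the other's Hawk move as blackmail.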

Comment author: SilasBarta 10 December 2010 04:21:38PM *  9 points [-]

This is a clever idea, but I don't think it works: you need to unpack the question of why a decision algorithm would deem cooperation non-optimal, and see if it coincides with a special class of problems where cooperation is generally non-optimal.

So I think what gets an offer labeled as blackmail is the recognition that cooperation would lead the other party to repeatedly use their discretion to force my next remaining options to be even worse. So blackmail and trade differ in that:

  • If I cooperate with a blackmailer, they are more likely to spend resources "digging up dirt" on me, kidnapping my loved ones, etc. I don't want to be in that position, regardless of what I decide to do then.
  • If I trade with a trade-offerer, they are more likely to spend resources acquiring goods that I may want to trade for. I do want to be in the position where others make things available to me that I want (except where I'd be competing with them in that process).

And yes, these two situations are equivalent, except for what I want the offerer to do, which I think is what yields the distinction, not the concept of a baseline in the initial offer.

You can phrase blackmail as a sort of addiction situation where dynamic inconsistency potentially leaves me vulnerable to exploitation. My preferences at any time t are:

1) Not have an addiction.
2) Have an addiction, and take some more of the drug.
3) Have an addiction, and not take the drug.

where I'm addicted at time t, and taking the drug will make me addicted at time t+1 (and I otherwise won't be addicted at t+1).

In this light, one can view the classification of something as blackmail as any feeling or mechanism that makes me choose 3) over 2): "2 looks appealing, but I feel a strong compulsion to do 3." Agents with such a mechanism gain a resistance to dynamic inconsistency.

In contrast, if "addiction" were good, and the item in 1) were moved below 3) in my preference ranking, then I wouldn't benefit from a mechanism that makes me choose 3 over 2. That would feel like trade.
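
A minimal simulation of the addiction framing (per-step utilities are my own illustrative numbers, ordered as in the preference list above): the agent that takes the locally preferred option 2 every step ends up behind the agent with the "compulsion" to pick 3 once.

```python
# Per-step utility, matching the ranking 1) > 2) > 3) above (numbers assumed).
U = {"free": 3, "take": 2, "abstain": 1}

def total_utility(policy_when_addicted, steps=10):
    """Sum utility over repeated play; start out addicted."""
    addicted, total = True, 0
    for _ in range(steps):
        if not addicted:
            total += U["free"]                # option 1: no addiction
        else:
            total += U[policy_when_addicted]  # option 2 or 3
            # taking the drug at time t keeps you addicted at t+1
            addicted = (policy_when_addicted == "take")
    return total

print(total_utility("take"), total_utility("abstain"))  # -> 20 28
```

Abstaining once costs a point in the first step but unlocks the best state for every step after, which is the dynamic-inconsistency trap in miniature.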

Comment author: Vladimir_Nesov 10 December 2010 04:37:07PM *  1 point [-]

And yes, these two situations are equivalent, except for what I want the offerer to do, which I think is what yields the distinction, not the concept of a baseline in the initial offer.

Yes, the distinction is in the way you prefer to acausally observation-counterfactually influence the other player. Not being offered a trade shouldn't be considered irrelevant by your decision algorithm, even if given the observations you have it is impossible. Like in Counterfactual Mugging, but with the other player instead of a fair coin. Newcomb's with transparent boxes is also relevant.

Comment author: SilasBarta 10 December 2010 04:53:39PM *  1 point [-]

Like in Counterfactual Mugging, but with the other player instead of a fair coin. Newcomb's with transparent boxes is also relevant.

Exactly, which is why I consider the hazing problem to be isomorphic to CM, and akrasia to be a special case of the hazing problem.

Comment author: Will_Sawin 10 December 2010 05:09:41PM 0 points [-]

Time-inconsistency seems unrelated. It may be a problem in implementing the strategy "don't respond to blackmail", but one can certainly TRY to blackmail a time-consistent person, if one believes them to be irrational or if they have only one blackmail-worthy secret.

Comment author: MBlume 10 December 2010 02:59:08AM *  9 points [-]

I know this isn't quite rigorous, but if I can calculate the counterfactual "what would the other player's strategy be if ze did not model me as an agent capable of responding to incentives," blackmail seems easy to identify by comparison to this.

Perhaps this can be what we mean by 'default'?

I think this ties into Larks' point -- if Larks didn't think I responded to incentives, I think ze'd just help the child, so asking me $1,000 would be blackmail. Clippy would not help the child, and so asking me $1,000 is trade.

To first order, this means that folks playing decision-theoretic games against me actually have an incentive to self-modify to be all-else-equal sadistic, so that their threats can look like offers. But then I can assume that they would not have so modified in the first place if they hadn't modelled me as responding to incentives, etc. etc.

Comment author: Will_Sawin 10 December 2010 03:07:16AM 5 points [-]

"an agent incapable of responding to incentives" is not a well-defined agent. What do you respond to? A random number generator? Subliminal messages? Pie?

Comment author: Alicorn 10 December 2010 03:34:45AM 6 points [-]

Pie?

I respond to pie. Are you offering pie?

Comment author: TheOtherDave 10 December 2010 03:56:26AM 9 points [-]

Should you find yourself in the greater Boston area, drop me a line and I will give you some pie.

(I suspect that there is a context to this comment, and I might even find it interesting if I were to look it up, but I'm sort of enjoying the comment in isolation. Hopefully it isn't profoundly embarrassing or anything.)

Comment author: rabidchicken 10 December 2010 03:42:27PM 2 points [-]

Can I take you up on that as well? You can never have too much pie.

Comment author: TheOtherDave 10 December 2010 03:49:29PM 3 points [-]

Well, you're certainly free to drop me a line if you're in the area, but I'm far less likely to respond, let alone respond with pie.

Comment author: Vladimir_Nesov 10 December 2010 03:20:44AM 1 point [-]

Which option for you is "not responding", the "default"? Maybe you give away $1000 by default, and since that leads to children not drowning, the better-valued outcome, it looks more like "least effort". How do you measure effort?

Comment author: Kingreaper 10 December 2010 01:16:39AM 6 points [-]

The default is special because it costs the other person time/money/effort to do anything other than the default.

Hence, not blowing up your car is the default, but so is not giving you food.

Comment author: atucker 10 December 2010 01:39:10AM 5 points [-]

I feel like what people call blackmail is largely related to intentionality. The blackmailer goes out of their way to harm you should you not cooperate.

In the trade example, by contrast, if someone offers a trade and you refuse -- even though you need the object -- we don't blame the resulting harm on the other person trying to hurt you.

Comment author: [deleted] 10 December 2010 02:36:10AM 3 points [-]

Re intentionality, everyone knows about Knobe's experiment, right?

http://www.youtube.com/watch?v=sHoyMfHudaE

Comment author: Larks 10 December 2010 01:41:08AM 1 point [-]

Good point -- yet it seems that the costs must ultimately be analysed as opportunity costs: game-theoretically, as the blackmailer reducing their own payoff in order to reduce yours. However, if a crazy person who enjoys blowing up cars tells you to give them $10,000 or they'll blow up your car, it's both the case that 1) you're being blackmailed and 2) they would benefit from (prefer) blowing up your car.

Comment author: Vladimir_Nesov 10 December 2010 01:45:01AM *  0 points [-]

When you are presented with blackmail, or with trade, the "default" is not what actually happens; it's impossible, might as well be logically counterfactual (and you know that, even if the other agent can't). If all we know is that it's counterfactual, then we might as well consider "non-default" everything that has an opportunity cost compared to the equally counterfactual "default" of the Flying Spaghetti Monster granting you $1000.

Comment author: Kingreaper 10 December 2010 08:00:38AM *  5 points [-]

The person blackmailing you doesn't have the option of having the FSM grant them $1000

They do have the option of not blackmailing you.

Just because they are blackmailing you doesn't make their not blackmailing you impossible. If they wanted not to blackmail you, they wouldn't be blackmailing you.

The whole point of precommitting not to give in to blackmail, and not to negotiate with terrorists, is the fact that they have the option to do nothing, and if you're not going to give in, they're better off sticking with that option.

So, you precommit not to give in, and this decreases the chance that you'll be threatened in the first place.
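
That logic can be put into a toy expected-value calculation (the demand, harm, and threat-execution cost figures are invented for illustration):

```python
def blackmailer_threatens(p_give_in, demand=1000, execution_cost=50):
    """The blackmailer only makes a threat if it pays in expectation;
    carrying out an ignored threat costs them something."""
    return p_give_in * demand - (1 - p_give_in) * execution_cost > 0

def victim_expected_loss(p_give_in, demand=1000, harm=5000):
    """Expected loss to the victim, given how likely they are to give in."""
    if not blackmailer_threatens(p_give_in):
        return 0  # a convincingly precommitted refuser is never targeted
    return p_give_in * demand + (1 - p_give_in) * harm

print(victim_expected_loss(0.0))       # -> 0: precommitment deters the threat
print(victim_expected_loss(0.9) > 0)   # a known soft touch pays in expectation
```

The payoff of precommitting comes entirely from the first branch: it changes the blackmailer's decision, not the outcome of any threat actually made.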

Comment author: Vladimir_Nesov 10 December 2010 01:38:39PM *  0 points [-]

The person blackmailing you doesn't have the option of having the FSM grant them $1000

They do have the option of not blackmailing you.

The question is, what's the difference between the two, formally? Neither actually happened, both are counterfactual. (The assumption is that you are already facing a blackmail attempt, trying to decide whether to give in.)

This refers to a significant surprising conclusion in decision theory (at least, UDT-style): which action is correct depends on how you reason about logically impossible situations, so it's important to reason about the logically impossible situations correctly. But it's still not clear where the criteria for correctness of such reasoning should come from.

Comment author: Kingreaper 10 December 2010 01:51:20PM *  2 points [-]

The question is, what's the difference between the two, formally?

One is a case where a precommitment makes a difference, the other isn't.

Had you convincingly precommitted not to give in to blackmail*, you would not have been blackmailed.

Had you convincingly precommitted to getting the FSM to grant your blackmailer $1000, the FSM still wouldn't exist.

*(which is not an impossible counterfactual+ -- it's something that could have happened, with only relatively minor changes to the world.)

+[unless you want to define "impossible" such that anything which doesn't happen was impossible, at which point it's not an unpossible counterfactual, and I'm annoyed :p]

which action is correct depends on how you reason about logically impossible situations

A logically impossible situation is one which couldn't happen in any logically consistent world. There are plenty of logically consistent worlds in which the person blackmailing you instead doesn't.

So, it's definitely not logically impossible. You could call it impossible (though, as above, that non-standard usage would irritate me) but it's not logically impossible.

Comment author: jimrandomh 10 December 2010 02:13:57PM 1 point [-]

Couldn't you also convincingly precommit to accept the corresponding positive-sum trade?

Comment author: Kingreaper 10 December 2010 02:59:16PM *  4 points [-]

Yes. But why would you need to? In the positive-sum trade scenario, you're gaining from the trade, so precommitting to accept it is unnecessary.

If you mean that I could precommit to only accept extremely favourable terms: well, if I do that, they'll choose someone else to trade with, just as the threatener would choose someone else to threaten.

Them choosing to trade with someone else is bad for me. The threatener choosing someone else to threaten is good for me.

That is, in many ways, the most important distinction between the scenarios: I want the threatener to pick someone else; I want the trader to pick me.

Comment author: Vladimir_Nesov 18 December 2010 11:25:06PM *  0 points [-]

One is a case where a precommitment makes a difference, the other isn't.

Obviously, the question is, why, what feature allows you to make that distinction.

Had you convincingly precommitted not to give in to blackmail*, you would not have been blackmailed.

Had you convincingly precommitted to getting the FSM to grant your blackmailer $1000, the FSM still wouldn't exist.

The open question is how to reason about these situations and know to distinguish them in such reasoning.

A logically impossible situation is one which couldn't happen in any logically consistent world.

"Worlds" can't be logically consistent or inconsistent, at least it's not clear what is the referent of the term "inconsistent world", other than "no information".

And again, why would one care about existence of some world where something is possible, if it's not the world one wants to control? If the definition of what you care about is included, the facts that place a situation in contradiction with that definition make the result inconsistent.

Comment author: Kingreaper 19 December 2010 12:24:58AM 0 points [-]

Obviously, the question is, why, what feature allows you to make that distinction.

Well, in one case, there are a set of alterations you could make to your past self's mind that would change the events.

In the other, there aren't.

And again, why would one care about existence of some world where something is possible, if it's not the world one wants to control?

Because it allows you to consistently reason about cause and effect efficiently.

Comment author: Vladimir_Nesov 19 December 2010 12:26:25AM 0 points [-]

Because it allows you to consistently reason about cause and effect efficiently.

If it's not about the effect in the actual world, why is it relevant?

Comment author: Kingreaper 19 December 2010 12:57:19AM -1 points [-]

If it's not about the effect in the actual world, why is it relevant?

If I ask "What will happen if I don't attempt to increase my rationality" I'm reasoning about counterfactuals.

Is that not about cause and effect in the real world?

Counterfactuals ARE about the actual world. They're a way of analysing the chains of cause and effect.

If you can't reason about cause and effect (and with your inability to understand why precommitting can't bring the FSM into existence, I get the impression you're having trouble there) you need tools. Counterfactuals are a tool for reasoning about cause and effect.

Comment author: Manfred 19 December 2010 12:23:53AM 0 points [-]

Only one of possible-to-not-blackmail or his-noodliness-exists is consistent with the evidence, to very high probabilities.

Worlds, in the Tegmark-ey sense of a collection of rules and initial conditions, can quite easily be consistent or inconsistent.

You seem to be beating a confusing retreat here. I bet there's a better tack to take.

Comment author: Strange7 10 December 2010 04:45:54PM 1 point [-]

The question is, what's the difference between the two, formally? Neither actually happened, both are counterfactual. (The assumption is that you are already facing a blackmail attempt, trying to decide whether to give in.)

The blackmailer has the option of backing down at any point, and letting you go for free. It may be unlikely, but it's not logically impossible.

"Give me $1000 or I'll blow up your car!"

"I have a longstanding history of not negotiating with terrorists. In fact, last month someone slashed my tires because I wouldn't give them $20. Check the police blotter if you don't believe me."

"Oh, alright. I'll just take my bomb and go hassle someone more tractable."

Comment author: Vladimir_Nesov 10 December 2010 04:54:40PM *  -1 points [-]

It may be unlikely, but it's not logically impossible.

Assume it is, as part of the problem statement. Only allow agent-consistency (the agent can't prove otherwise) of it being possible for the other player to not blackmail, without allowing actual logical consistency of that event. Also, assume that our agent has actually observed that the other decided to blackmail, and there is no possibility of causal negotiation.

(This helps to remove the wiggle-room in foggy reasoning about decision-making.)

Comment author: saturn 10 December 2010 07:27:19PM 4 points [-]

It seems to me the relevant difference is that in blackmail one or both parties end up worse off. So a group of individuals who blackmail each other tend to get poorer over time, compared to a group that successfully deters blackmail.
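
A toy simulation of this point (all numbers invented): every blackmail attempt burns some value in effort or executed threats, so a population where attempts keep happening ends up poorer than one that credibly deters them.

```python
import random

def group_wealth(attempt_rate, agents=10, rounds=1000, seed=0):
    """Total wealth after repeated interactions. Each round, with probability
    attempt_rate, a random blackmailer extracts 10 from a random victim but
    burns 2 in effort, so every attempt is net-negative for the group."""
    rng = random.Random(seed)
    wealth = [100.0] * agents
    for _ in range(rounds):
        if rng.random() < attempt_rate:
            blackmailer, victim = rng.sample(range(agents), 2)
            wealth[victim] -= 10
            wealth[blackmailer] += 10 - 2   # demand collected, minus effort
    return sum(wealth)

print(group_wealth(0.0))                      # deterrence: wealth preserved
print(group_wealth(0.8) < group_wealth(0.0))  # blackmail-ridden group is poorer
```

The comparison only goes through because attempts destroy value on net; a pure lossless transfer would leave total wealth unchanged, which matches the observation that "one or both parties end up worse off."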

Comment author: FormallyknownasRoko 10 December 2010 11:55:46PM *  3 points [-]

Suppose that Blackmail is

merely an affective category, a class of situations activating a certain psychological adaptation

-- then we should ask what features of the ancestral environment caused us to evolve it. We might understand it better in that case.

I suspect that the ancestral environment came with a very strong notion of a default outcome for a given human, in the absence of there being any particular negotiation, and also came with a clear notion of negative interaction (stabbing, hitting, kicking) versus positive interaction (giving fish, teaching how to hunt better, etc).

Comment author: cousin_it 12 December 2010 05:49:28PM *  0 points [-]

Uh, spending effort on hurting people is negative-sum and most likely lose-lose, while teaching someone to hunt is positive-sum lose-win. Or maybe you see some deeper mystery here that I'm not seeing?

Comment author: FormallyknownasRoko 12 December 2010 05:51:58PM 1 point [-]

The problem with "lose-lose" is that it relies upon there being a "default outcome given no interaction". Vladimir is trying to taboo this concept, at least in general. So I am going to focus on a relevant special case, namely specific interactions available in the ancestral environment.

Comment author: PhilGoetz 13 December 2010 08:48:53PM 0 points [-]

Uh, spending effort on hurting people is negative-sum and most likely lose-lose

What ancestral environment are you thinking of?

Comment author: Vladimir_Nesov 10 December 2010 05:07:21PM *  5 points [-]

Current guess.

Blackmail is a class of situations similar to Counterfactual Mugging, where you are willing to sacrifice utility in the actual world in order to lower its probability, so that the counterfactual worlds (which have higher utility) gain as much probability as possible and thus improve the overall expected utility, even as the utility of the actual world becomes lower.

Or, simply, you are being blackmailed when you wish this wouldn't be happening, and the correct actions are those that make the reality as improbable as possible.

(In Counterfactual Mugging, you are sacrificing utility in the actual world in order to improve utility of the counterfactual world, while in blackmailing, you are doing the same in order to improve its probability.)
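
For contrast, the standard Counterfactual Mugging numbers (a fair coin; on tails you are asked for $100, on heads you are paid $10,000 iff you would have paid) make the "sacrifice actual utility for counterfactual utility" trade-off explicit. This is the usual formulation of the thought experiment, not anything new from the comment:

```python
def cm_expected_value(would_pay):
    """Expected value, over the coin flip, of being the kind of agent
    that pays (or refuses) in Counterfactual Mugging."""
    heads = 10000 if would_pay else 0   # Omega rewards the paying disposition
    tails = -100 if would_pay else 0    # the actual-world sacrifice on tails
    return 0.5 * heads + 0.5 * tails

print(cm_expected_value(True), cm_expected_value(False))  # -> 4950.0 0.0
```

In the tails world the payer is strictly worse off, which is exactly the structure the comment generalizes: giving up actual-world utility for the sake of the counterfactual branch.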

Comment author: Stuart_Armstrong 10 December 2010 06:33:01PM 1 point [-]

This definition is too broad. It fits the person doing the blackmailing (in a world where you reject my threat, I will act against my local best interest and blow up the bombs) just as well as the person being blackmailed (in a world where you have precommitted to bomb me, I will act against my local self-interest and defy you). It fits many types of negotiation over deals and such.

Comment author: Vladimir_Nesov 10 December 2010 08:14:41PM 0 points [-]

It fits the person doing the blackmailing (in a world where you reject my threat, I will act against my local best interest and blow up the bombs), just as well as the person being blackmailed (in a world where you have precommitted to bomb me, I will act against my local self-interest and defy you).

You omit some counterfactuals by framing them as located outside the scope of the game. If you return them, the pattern no longer fits. For example, the blackmailer can decide to not blackmail on both sides of victim's decision to give in, so the utility of counterfactuals outside the situation where blackmailer decided to blackmail and the victim didn't give in is still under blackmailer's control, which it shouldn't be according to the pattern I proposed.

Comment author: Stuart_Armstrong 10 December 2010 08:30:38PM *  0 points [-]

I don't quite see your point. If you take a nuclear blackmailer, then it follows the same pattern: he is committing to a locally negative course (blowing up nukes that will doom them both) so that the probability of that world is diminished, and the probability of the world where his victim gives in goes up. How does this not follow your pattern?

Comment author: Vladimir_Nesov 10 December 2010 08:52:55PM -1 points [-]

You assume causal screening-off, but humans think acausally, with no regard for observational impossibility, which is much more apparent in games. If, after you're in the situation of having unsuccessfully blackmailed the other, you can still consider not blackmailing (in particular, if blackmail probably doesn't work), then you get a decision that changes the utility of the collection of counterfactuals outside the current observations, which the blackmailed (by my definition) are not granted. The blackmailed must only be able to manipulate the probability of counterfactuals, not their utility. (That's my guess as to why our brains label this situation "not getting blackmailed".)

Comment author: Stuart_Armstrong 11 December 2010 10:13:53AM 1 point [-]

I need examples to get any further in understanding. Can you give a toy model that is certainly blackmail according to your definition, so that I can contrast it with other situations?

Comment author: Vladimir_Nesov 11 December 2010 12:29:15PM -1 points [-]

Can you give a toy model that is certainly blackmail according to your definition, so that I can contrast it with other situations?

I don't understand. Simple blackmail is certainly blackmail. The problem here seemed to be with games that are bigger than that, why do you ask about simple blackmail, which you certainly already understood from my first description?

Comment author: Will_Sawin 10 December 2010 01:58:22AM 5 points [-]

Isn't it because you want to incentivize people to bargain with you but incentivize them not to blackmail you?

Comment author: Vladimir_Nesov 10 December 2010 02:11:43AM 1 point [-]

Isn't what?

Comment author: Will_Sawin 10 December 2010 02:12:19AM 1 point [-]

Why you shouldn't respond to blackmail.

Comment author: Vladimir_Nesov 10 December 2010 02:17:45AM 1 point [-]

This doesn't help with answering the question of what "blackmail" means, and how useful it is for decision theory. Or alternatively, expresses the hypothesis that "blackmail" category is a trivial restating of the decision, not a property of the decision problem.

Comment author: Will_Sawin 10 December 2010 02:23:50AM 2 points [-]

No, it expresses the hypothesis that blackmail occurs when:

a. Someone has harmed you relative to the default.
b. You now have a choice between something worse for them and something better for them.
c. In the short term, you'd prefer what is better for them.

If "a" fails, either they did what was good for you and you should reward them, or they played default and you shouldn't try to punish them, as 1) you can't send a clear signal that you punish people like that and 2) you're likely to get punished yourself.

If "b" fails, not a whole lot you can do about it.

If "c" fails, it's not "failing to respond to blackmail", it's "not being stupid."

The trick with extortion & terrorism is that you give someone a sufficient direct incentive to help you. The incentive being "or else I'll reveal your secrets/blow up your building/punch your baby/whatever." The reason the advice is given is because it's nonobvious whether you should negotiate or stick to a policy of not negotiating.

Sometimes people do give in to blackmail. For example, some countries paid off the Somali pirates, while the US fought them off. It's a strategic choice with a nonobvious answer. This is because "don't give in to blackmail" is not universally applicable advice.

Comment author: Vladimir_Nesov 10 December 2010 02:27:32AM 0 points [-]

Remember, we don't know what "default" is.

Comment author: ShardPhoenix 10 December 2010 09:41:31AM 4 points [-]

Isn't the default just what would happen if the other person never communicated with you?

Comment author: Vladimir_Nesov 10 December 2010 01:45:33PM -1 points [-]

But they did communicate with you, as a result of a somewhat deterministic decision process, and not by random choice. How should you reason about this counterfactual? Why doesn't the "false" assumption of their never communicating with you imply that the Moon is made out of cheese?

Comment author: ShardPhoenix 11 December 2010 05:50:17AM 3 points [-]

People engage in this kind of counterfactual reasoning all the time without declaring the moon to be made of cheese; I'm not sure why you're questioning it here. If it makes it any easier, think of it as being about the change in expected value immediately after the communication vs. the expected value immediately before the communication - in other words, whether the communication is a positive or negative surprise.
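
This "surprise" test can be written down directly (payoff numbers are invented for illustration): compare the best you can do after the message with what you expected just before it.

```python
def surprise(ev_before, options_after):
    """Change in expected value caused by the communication. A positive
    surprise suggests a trade offer; a negative one suggests blackmail."""
    return max(options_after) - ev_before

# "Pay $1000 or I blow up your car": you expected 0, now every option is worse.
print(surprise(0, [-1000, -20000]))  # -> -1000: negative surprise, blackmail
# An offer to sell you something at $30 below your valuation; refusing is free.
print(surprise(0, [0, 30]))          # -> 30: positive surprise, trade
```

Note this sidesteps the "default" problem only by moving it into `ev_before`, which is exactly where Nesov's question about how to compute the counterfactual reappears.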

Comment author: Vladimir_Nesov 11 December 2010 12:25:38PM 0 points [-]

People engage in this kind of counterfactual reasoning all the time without declaring the moon to be made of cheese

Indeed. How do they manage that? That's one fascinating question.

Comment author: Kingreaper 10 December 2010 02:07:33PM *  3 points [-]

When reasoning about counterfactuals a good principle is never to reach for a more distant* world than necessary.

*(less similar)

If you were to simulate the universe as it was before they contacted you, and make 1 single alteration (tapping their brain so they decide not to contact you) would the simulation's moon be made of green cheese?

That universe is pretty much the closest possible universe to ours where they don't contact you.

Comment author: Vladimir_Nesov 18 December 2010 11:20:21PM -1 points [-]

Why should merely similar worlds be relevant at all? There could be ways of approximate reasoning about the complicated definition of the actual world you care about, but actually normatively caring about worlds that you know not to be actual (i.e. not the one you actually care about) is a contradiction in terms.

Comment author: Will_Sawin 10 December 2010 01:59:02PM *  1 point [-]

I'm getting this more clearly figured out. In the language of ambient control, we have: You-program, Mailer-program, World-program, Your utility, Mailer utility

"Mailer" here doesn't mean anything. Anyone could be a mailer.

It is simpler with one mailer but this can be extended to a multiple-mailer situation.

We write your utility as a function of your actions and the mailer's actions based on ambient control. This allows us to consider what would happen if you changed one action and left everything else constant. If changing it would give you a lower utility, we define that action to be a "sacrificial action".

A "policy" is a strategy in which one plays a sacrificial action in a certain class of situation.

A "workable policy" is a policy where playing it will induce the mailer to model you as an agent that plays that policy for a significant proportion of the times you play together, either for:

  1. causal reasons - they see you play the policy and deduce you will probably continue to play it, or they see you not play it and deduce that you probably won't

  2. acausal reasons - they accurately model you and predict that you will/won't use the policy.

A "beneficial workable policy" is when this modeling will increase your utility.

Depending on the costs/benefits, a beneficial workable policy could be rational or irrational, determined using normal decision theory. The name people use for it is unrelated - people have given in to and stood up against blackmail, they have given in to and stood up against terrorism, they have helped those who helped them or not helped them.

Not responding to blackmail is a specific kind of policy that is frequently, when dealing with humans, workable. It deals with a conceptual category that humans create without fundamental decision-theoretic relevance.

Comment author: Vladimir_Nesov 10 December 2010 06:42:32PM 0 points [-]

We write your utility as a function of your actions and the mailer's actions based on ambient control. This allows us to consider what would happen if you changed one action and left everything else constant.

It doesn't (at least not by varying one argument of that function), because of explicit dependence bias (this time I'm certain of it). Your action can acausally control the other agent's action, so if you only resolve uncertainty about the parameter of the utility function that corresponds to your action, you are being logically rude by not taking into account possible inferences about the other agent's actions (the same way CDT is logically rude in only considering the inferences that align with the definition of physical causality). From this, "sacrificial action" is not well-defined.

Comment author: benelliott 10 December 2010 05:35:29PM 0 points [-]

I think you're mostly right. This suggests that a better policy than 'don't respond to blackmail' is 'don't respond to blackmail if and only if you believe the blackmailer to be someone who is capable of accurately modelling you'.

Unfortunately this only works if you have perfect knowledge of blackmailers and cannot be fooled by one who pretends to be less intelligent than they actually are.

This also suggests a possible meta-strategy for blackmailers, namely "don't allow considerations of whether someone will pay to affect your decision of whether to blackmail them", since if blackmailers were known to do this then "don't pay blackmailers" would no longer work.

I would also suggest that while blackmail works with some agents and not others, it isn't human-specific. For example, poison arrow frogs seem like a good example of evolution using a similar strategy, having an adaptation that is in no way directly beneficial (and presumably is at least a little costly) that exists purely to minimize the utility of animals which do not do what it wants.

Comment author: Will_Sawin 10 December 2010 02:36:18AM 0 points [-]

Reducing the problem to a smaller problem, or another, already-existing problem, in a way that seems nonobvious to fellow lesswrongers (and therefore possibly wrong) is useful.

For example, my way resolves, or mostly resolves the blackmail/bargain distinction. Blackmail is when the pre-made choice is bad for you relative to the most reasonable other option, bargain is when it's good for you.

Maybe I can explain what's going on game-theoretically when I say "default" in this context.

You're trying to establish a Nash equilibrium of, for actions in that category X:

You don't do X// I punish you for doing X

Now the Schelling situation is that you may not be able to reach this equilibrium, if X is a strange and bizarre category, for instance, or if we'd prefer to prevent you from punishing us by locking you up instead.

So it may be that there is no one general category here. I could give in to terrorism but not blackmail, for instance. It's about clusters in harmful-action-space.

Comment author: Alicorn 10 December 2010 01:31:37AM 8 points [-]

I really wish "blackmail" were not used to mean extortion.

Comment author: Perplexed 10 December 2010 01:43:51AM 6 points [-]

I had the same reaction, thinking blackmail is a special form of extortion in which the threat is a threat of exposure. But when I sought support from the dictionary, I was disappointed.

Comment author: shokwave 10 December 2010 05:42:24PM *  2 points [-]

Dictionaries are histories of usage, not arbiters of meaning. If they were, language would not change in meaning (only add new words) from the moment the first dictionaries were made.

See here

Comment author: wedrifid 10 December 2010 04:55:42AM 2 points [-]

That is surprising. It seems that using 'blackmail' to refer to extortion isn't even a corruption of the original use.

Comment author: jfm 10 December 2010 06:38:36PM 1 point [-]

Indeed, we have this account of the etymology from George MacDonald Fraser's The Steel Bonnets:

Deprived of the protection of law, neglected by his superiors, and too weak to resist his despoilers, the ordinary man's only course was the payment of blackmail. This practice is probably as old as time, but the expression itself was coined on the Borders, and meant something different from blackmail today. Its literal meaning is "black rent" -- in other words, illegal rent -- and its exact modern equivalence is the protection racket.

Blackmail was paid by the tenant or farmer to a "superior" who might be a powerful reiver, or even an outlaw, and in return the reiver not only left him alone, but was also obliged to protect him from other raiders and to recover his goods if they were carried off.

Note that he does consider the modern meaning to be more specialized.

Comment author: Alicorn 10 December 2010 01:49:44AM 2 points [-]

They are certainly used synonymously often enough to get into the dictionary that way. I didn't say it was wrong, I said I wish it weren't used that way.

Comment author: Vladimir_Nesov 10 December 2010 01:37:43AM 3 points [-]

Sorry, it's already prevailing terminology.

Comment author: shokwave 10 December 2010 05:45:29PM *  0 points [-]

If you have a case for why it is bad for 'blackmail' to mean 'extortion' (i.e. you can demonstrate that precision is desirable or something) then make the case. If it's a good case (I expect it will be; 4 karma points on a new-ish article at time of this comment suggests it is widely recognised) then people - most definitely me included - will start making the distinction you wish for.

(This is how language - prevailing terminology - changes! Ain't it cool?)

Comment author: Alicorn 10 December 2010 06:09:19PM 13 points [-]

In general, I think synonyms are bad. It's a waste of vocabulary to have two words that mean the same thing in the same language unless there is something meaningfully different about them (connotation, scope, flavor, nuance, something). When "blackmail" just means "extortion", and not a kind of extortion (the threat to reveal incriminating information), the words become synonyms, instead of one of them being a special case of the other.

Comment author: ciphergoth 13 December 2010 06:08:14AM 5 points [-]

Yes, I have a similar rule. "Disinterested" has been used to mean "uninterested" for all of its history IIRC, but I support efforts to stop using it that way and keep it for its distinct meaning of "with no stake in the outcome" because synonyms are wasteful.

Comment author: Alicorn 13 December 2010 01:09:20PM *  0 points [-]

I agree in principle, but in practice I fudge this when the meaning is clear from context, because I hate the rhythm of "uninterested". (I use "not interested" instead when I can, but sometimes it sounds more graceful to use "disinterested", and sometimes I do it. Maybe I should try harder to stop.)

Comment author: shokwave 10 December 2010 06:36:53PM 4 points [-]

Agreed. From now on I will use blackmail to refer to extortion involving the threat to reveal incriminations, and if I encounter confusion, I will either direct them to this discussion or use rhetoric / appeal to my own authority to convince them of the truth of my position, depending on which I judge to have the better chance of actually convincing them.

Sorry to be so formal and spell it all out, but I just recently worked this unconscious process out and I am bursting with enthusiasm to share it!

(Note that the field of linguistics uses the phrase 'perfect synonym' to refer to what you mean by synonym, and when they say synonym they allow possible variances of nuance. Note also that I think their definitions are not in touch with the definitions for 'synonym' that people actually use, so more fool them.)

Comment author: TheOtherDave 10 December 2010 06:39:49PM 5 points [-]

So "synonym" in common usage is a perfect synonym for "perfect synonym"?

Comment author: shokwave 10 December 2010 07:00:16PM 2 points [-]

Hahahahaha - yes!

Comment author: Alicorn 10 December 2010 06:50:56PM 3 points [-]

Sorry to be so formal and spell it all out, but I just recently worked this unconscious process out and I am bursting with enthusiasm to share it!

Not at all, it's nifty. I'm sort of tickled to have discovered someone who will use words how I want them if I explain why they should.

Comment author: katydee 10 December 2010 08:03:22PM *  1 point [-]

Do most people not do that? In my experience if I tell people not to do certain things (as long as the things aren't too ridiculous-- I have no expectation that anyone would stop breathing because KATYDEE COMMANDS IT), they stop doing those things, or at least stop doing them around me. There are some irritating exceptions-- the number of people who respond "Why?" to "Be quiet" or "Don't talk to me" is staggeringly high-- but by and large people tend to respect such preferences in my experience.

Comment author: Alicorn 10 December 2010 08:16:03PM 3 points [-]

I wouldn't have been uncommonly impressed if shokwave had agreed to use "blackmail" and "extortion" as I prefer while talking to me (although the local context makes that sort of acquiescence less likely than it would be in most social groups, I think). But the great-grandparent seems to indicate a commitment to use the words the way I like them in all contexts and to go so far as to evangelize my linguistic beliefs.

Comment author: SilasBarta 10 December 2010 08:36:23PM *  0 points [-]

Do most people not do that?

Most people will indeed adopt different terminology, given a good reason; it's just that some people have extensive experience of others not complying with such requests because the reasons are ridiculous, and then infer such rejection to be a more general phenomenon.

Example:

A: [Activity X] will tend to make you more sexually attractive to [group Y] because of [mechanism Z].
B: You shouldn't say that because it's offensive to Ys and treats them like non-persons mindlessly responding to X, and I don't like that. And I don't like X, either.
C: Are you insane? I can't ignore real-world social phenomena that affect my life like what A described, just because it offends you and you have unusual preferences. Try to think about how others might feel.
B: Bah! Blast these terrorists who won't listen to the voice of reason! Where can I find less defective people?

Comment author: [deleted] 04 February 2012 05:35:32PM 0 points [-]

Note that the field of linguistics uses the phrase 'perfect synonym' to refer to what you mean by synonym, and when they say synonym they allow possible variances of nuance.

Anyway, it depends on how much variance of nuance you want to allow. (Does the fact that extortion is Latinate and blackmail is Germanic count for anything?) I've seen a claim that no language has truly perfect synonyms (i.e. two words such that P(X|someone says word1) = P(X|someone says word2) for all X in all circumstances), which might well be true, but which would make the phrase perfect synonym useless.

Comment author: FAWS 10 December 2010 04:07:09AM *  4 points [-]

Agent 1 negotiates with agent 2. Agent 1 can take option A or B, while agent 2 can take option C or D. Agent 1 communicates that they will take option A if agent 2 takes option C and will take option B if agent 2 takes option D.

If utilities are such that for

  • agent 1: A > B, C < D, A+C < B+D

and for

  • agent 2: A < B, C > D, A+C < B+D

or

  • agent 1: A < B, C > D, A+C > B+D
  • agent 2: A > B, C < D, A+C > B+D

this is an offer.

If

  • agent 1: A < B, C < D, A+C < B+D
  • agent 2: A < B, C > D, A+C < B+D

or

  • agent 1: A > B, C > D, A+C > B+D
  • agent 2: A > B, C < D, A+C > B+D

this is blackmail by agent 1.

If

  • agent 1: A > B, C < D, A+C < B+D
  • agent 2: A < B, C < D, A+C < B+D

or

  • agent 1: A < B, C > D, A+C > B+D
  • agent 2: A > B, C > D, A+C > B+D

this is agent 1 giving in to agent 2's blackmail.

I don't think I mentioned anything about any "default" anywhere?

(Unless I overlooked something, in the other cases there is either no reason to negotiate, no prospect of success in negotiating, or at least one party acting irrationally. It is implicitly assumed that preferences between combinations of the options depend only on the preferences between the individual options.)
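FAWS's taxonomy is mechanical enough to check in code. Below is a minimal sketch (the thread itself has no code; the concrete utility numbers are invented to satisfy the first "offer" case and the first "blackmail" case respectively):

```python
import operator

OPS = {'>': operator.gt, '<': operator.lt}

def case(u, ab, cd, sums):
    """One taxonomy line for one agent.

    case(u, '>', '<', '<') means: A > B, C < D, A+C < B+D,
    where u maps each option to that agent's utility for it.
    """
    return (OPS[ab](u['A'], u['B'])
            and OPS[cd](u['C'], u['D'])
            and OPS[sums](u['A'] + u['C'], u['B'] + u['D']))

def is_offer(u1, u2):
    return ((case(u1, '>', '<', '<') and case(u2, '<', '>', '<')) or
            (case(u1, '<', '>', '>') and case(u2, '>', '<', '>')))

def is_blackmail(u1, u2):
    # Blackmail by agent 1.
    return ((case(u1, '<', '<', '<') and case(u2, '<', '>', '<')) or
            (case(u1, '>', '>', '>') and case(u2, '>', '<', '>')))

# Invented numbers satisfying the first "offer" case:
trade_u1 = {'A': 1, 'B': 0, 'C': 0, 'D': 2}
trade_u2 = {'A': 0, 'B': 2, 'C': 1, 'D': 0}

# Invented numbers satisfying the first "blackmail" case:
threat_u1 = {'A': 0, 'B': 1, 'C': 0, 'D': 2}
threat_u2 = {'A': 0, 'B': 5, 'C': 1, 'D': 0}
```

With these numbers, `is_offer(trade_u1, trade_u2)` and `is_blackmail(threat_u1, threat_u2)` both hold, and neither pair satisfies the other predicate, so the two categories are at least disjoint on these examples.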

Comment author: Eugine_Nier 10 December 2010 05:54:58AM 4 points [-]

Notice that under this definition punishing someone for a crime is a form of blackmail.

Comment author: FAWS 10 December 2010 06:18:19AM 3 points [-]

I'm not sure that's a problem.

Or maybe: change "blackmail" in the above to "threat", and define blackmail as a threat not legitimized by social conventions.

Comment author: Eugine_Nier 10 December 2010 06:23:33AM 3 points [-]

Well, at least we've unpacked the concept of "default" into the concept of social conventions.

Comment author: HughRistik 10 December 2010 08:54:33PM 1 point [-]

Or into a concept of ethics. Blackmail involves a threat of unethical punishment.

Comment author: rwallace 10 December 2010 11:36:45AM 0 points [-]

I think we can do better than that. In cases where the law is morally justified, punishing someone for a crime is retaliation. I think part of the intent of the concept of blackmail is that the threatened harm be unprovoked.

Comment author: Vladimir_Nesov 10 December 2010 04:52:18AM *  0 points [-]

Agent 1 communicates that they will take option A if agent 2 takes option C and will take option B if agent 2 takes option D.

Correction: Retracted, likely wrong.

Explicit dependence bias detected. How agent 1 will decide generally depends on how agent 2 will decide (not just on the actual action, but on the algorithm; that is, on how the action is defined, not just on what is being defined). In multi-agent games, this can't be sidestepped. And restatement of the problem can't sever ambient dependencies.

Comment author: nshepperd 10 December 2010 05:19:57AM 2 points [-]

I don't see how that's relevant. "I will release the child iff you give me the money, otherwise kill them" still looks like blackmail in a way "I will give you the money iff you give me the car, otherwise go shopping somewhere else" does not, even once the agents decided for whatever reason to make their dependencies explicit.

Comment author: FAWS 10 December 2010 05:20:50AM *  1 point [-]

Bias denied.

First I make no claims about the outcome of the negotiation so there is no way privileging any dependence over any other could bias my estimation thereof.

Second, I didn't make any claim about any actual dependence, merely about communication, and it would certainly be in the interest of a would-be blackmailer to frame the dependence in the most inescapable way they can.

Third, agent 2 needs to be able to model communicated dependencies sensibly whether or not it has a concept of blackmail. How it models the dependence internally would have a bearing on whether the blackmail succeeds, but that's a separate problem and should have no influence on whether the agent can recognize the relative utilities.

Comment author: Vladimir_Nesov 10 December 2010 01:57:19PM 1 point [-]

I wasn't thinking clearly; I don't understand this as an instance of explicit dependence bias now, though it could be. I'll be working on this question, but no deadlines.

Comment author: Stuart_Armstrong 10 December 2010 03:20:56PM 2 points [-]

Why is the "default" special here?

Because in a blackmail, I do not wish the trade to happen at all. Let the "default" outcome for a trade T be one where the trade doesn't happen. Assume that my partner (the Baron) gets to decide whether T happens or not.

If T is a blackmail, then every option is worse than not-T. So, if I can commit to ensuring that T is also negative for the Baron, then the Baron won't let T happen. This gives a definition for blackmail: a trade T where every option is worse than not-T, but where I can commit to actions that ensure that T is negative for the person that decides whether T happens or not.

Let's contrast this with another trade T, with no blackmail elements to it, where I am a monopolist or monopsonist. It is still to my advantage to credibly commit to rejecting everything if I don't get 99% of the profit. However, I am limited by the fact that I want the trade to happen; I can't commit to any option that is actually harmful to the Baron. He will trade with me as long as he doesn't lose; his 'default' ensures that I have to give him something.

Finally, most trades are not monopolist or monopsonist. In this case, it is not to my advantage to precommit to taking more than "my fair (market) share" of the profit, as that will cause the trade to fail; the Baron's default is higher (he can trade with others) so I have to offer him at least that.

Now, I don't want to go down the rabbit hole of dueling pre-commitments, or the proper decision-theoretic way of resolving the issue (blackmailing someone or precommitting to avoid blackmail are very similar processes). But it does show why you would want to precommit to a particular action in blackmail situations, but not in others: you do not control if the trade happens, and blackmails are trades that you do not want to see happen. You can call not-trading the 'default' if you wish, but the salient fact is that it is better for you, not that it is default.

Comment author: Vladimir_Nesov 10 December 2010 03:47:44PM *  0 points [-]

Because in a blackmail, I do not wish the trade to happen at all.

Something has to happen, and you must choose from the options you're dealt. Maybe I don't wish to pay for my Internet connection, and would rather have the Flying Spaghetti Monster provide it to me free of charge, and also grant me $1000 as a bonus? This seems to qualify as not wishing I had to choose a provider at all. But in reality, I have to choose, and FSM is not available as an option, just as not being blackmailed is not available as an option (by assumption; the agent doesn't need to know that, only the problem statement that logically implies that).

Comment author: benelliott 10 December 2010 04:55:18PM 2 points [-]

The difference is that since blackmail is costly, there is no incentive to blackmail someone who will not give into it, which makes people who won't give in better off than people who will. On the other hand, there is no incentive for a company to offer free services to someone who refuses to 'give in' and pay money.

I think the logic is along the lines of "make the decision which, if the other party knew you were going to make it, would maximise your expected utility".

Comment author: Will_Sawin 10 December 2010 05:12:25PM 0 points [-]

Which shows exactly why the rule is not universally applicable - the other party does not, in general, know what decision you're going to make (though they can predict it to some level of accuracy), and so there's a cost/benefit situation.

I am going to try and save my attempted solution ( http://lesswrong.com/lw/39a/unpacking_the_concept_of_blackmail/342c?c=1 ) from being stuck at the bottom of the thread. This might be inappropriate behavior, and if so please inform me.

Comment author: Stuart_Armstrong 10 December 2010 05:49:27PM 0 points [-]

I think you've answered your own question in your comment.

Comment author: Vladimir_Nesov 10 December 2010 05:57:31PM 0 points [-]

Yes, I was not entirely straightforward in my questions and wished to elicit some clarity from others. Here, the key is the difference between observational (logical) impossibility and agent-provable impossibility.

Comment author: wedrifid 10 December 2010 05:11:09AM 2 points [-]

It's an interesting question. My thoughts follow a similar path and I like the way you described it here:

My hypothesis is that "blackmail" is what the suggestion of your mind to not cooperate feels like from the inside, the answer to a difficult problem computed by cognitive algorithms you don't understand, and not a simple property of the decision problem itself.

Taking a step back from the internal viewpoint, we can also give a workable description in social terms. Which way people will tend to think of the decision offered is primarily determined by social dominance. Apart from the relative status of the actors themselves, the status of the threatened negative action makes a difference to whether someone thinks 'extortion'. Given an approximately equivalent payoff matrix in terms of utility, two different scenarios could be categorised differently with respect to extortion because the decision maker instinctively associates them with different levels of 'legitimacy'.

Comment author: TheOtherDave 10 December 2010 03:21:01PM *  2 points [-]

My take: what we call "extortion" or "blackmail" is where agent A1 offers A2 a choice between X and Y, both of which are harmful to A2, and where A1 has selected X to be less harmful to A2 than Y with the intention of causing A2 to choose X.

"Not responding to blackmail" comprises A2 choosing Y over X whenever A2 suspects this is going on.

A1 can still get A2 to choose X over Y, even if A2 has a policy of not responding to blackmail, by not appearing to have selected X... that is, by not appearing to be blackmailing A2.

For example, if instead of "I will hurt you if you don't give me money" A1 says "I've just discovered that A3 is planning to hurt you! I can prevent it by taking certain steps on your behalf, but those steps are expensive, and I have other commitments for my money that are more important to me than averting your pain. But if you give me the money, I can take those steps, and you won't get hurt," A2 may not recognize this as blackmail, in which case A1 can finesse A2's policy.

Of course, any reasonably sophisticated human will recognize that as likely blackmail, so a kind of social arms race ensues. Real-world blackmail attempts can be very subtle. (ETA: That extortion is illegal also contributes to this, of course... subtle extortion attempts can reduce A1's legal liability, even when they don't actually fool anyone.)

(Indeed, in some cases A1 can fool themselves, which brings into question whether it's still blackmail. IMHO, the best way to think about cases like that is to stop treating people fooling themselves as unified agents, but that's way off-topic.)

Comment author: shokwave 11 December 2010 04:31:50PM *  1 point [-]

Hmm. Bear with me.

Consider this decision tree

The decision that maximises your gain, as Red, is for Blue to pick 5, -10. The decision that minimises your loss, as Blue, is to pick 5, -10. And so it is that extortionism is rediscovered by all agents like Red. But Blues sometimes pick -1, -20, more often than ‘some people are purely irrational’ would predict.

In a society that deeply understood game theory, they could recognise that this extortion scenario would be iterated. In an iterated series of extortion games, if a lunatic Blue completely ignored their own disutility to punish Red for extorting them, they could reduce Red’s utility below 0. At this point, Red would prefer not to have extorted the lunatic. If a Blue can convincingly signal their lunacy to potential extorter Reds, Reds will predictably leave said lunatic Blues alone. Therefore, the true decision tree, for both agents, looks like this.

Note that this is restricted to the long view of an iterated extortion game; understanding that precommitting to lunacy would have stopped Red will not magically undo the current situation, so if Blue knew that this case was unlikely to be part of an iterated series, they would simply capitulate. (Incidentally, this is why I do not extort people. Anyone worth extorting would be smart enough to figure out this line of reasoning, capitulate to me, and then expend some effort ensuring I was unable to iterate the series. Such as having me arrested or murdered.)

Whence the moral condemnation of extortion, then? Our morals and intuitions don’t exactly suggest all these options to us. Well, to a naive outsider looking in, the outcome of a game-theory society is that they usually refuse to cooperate with extortionists, they occasionally capitulate, and extortionists are hated. The naive outsider doesn’t see that extortionists are hated for trying to instantiate iterated negative-sum games, or the reasons why refusal is common and capitulation rare. All they see is that the game-theory society doesn’t have a problem with extortion. So they imitate the game-theory society’s actions, not reasons, get diminished but still impressive benefits, and expend far less energy working out these problems and far more energy copulating (a not entirely undesirable path).

There may not have been a game-theory society. In that case, producing animals that irrationally have rough concepts of extortion and act in roughly the right way was an earlier solution than producing game-theory animals.

Extortion, then, is the label for events that roughly match the characteristics of negative-outcome trade from the perspective of the victim - that is, the victim would prefer the situation not to have taken place to either capitulating or refusing. The event only has to pattern-match this situation for people to think extortion.

If you would prefer the situation to not have happened, and you have reason to believe that there is a long enough iteration, and you can cause disutility for the agent that had a choice in causing the situation to happen, then suffering disutility to change the causal agent’s decision is the rational choice. Our intuition only shows us the individual slice, though, and some individual slices look like patently bizarre behaviour.

Of course, we would immensely prefer that this entire scenario of iterated extortion games resulting in disutility for both sides is simply a counterfactual in the mind of the potential extortionist.
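The iterated story above can be sketched numerically, using the (5, -10) capitulate and (-1, -20) refuse payoffs from the comment. The rule that Red stops extorting after a single refusal is a simplifying assumption of mine, not something the comment specifies:

```python
def iterated_extortion(blue_refuses, rounds=10):
    """Red extorts Blue each round until Red comes to expect a refusal."""
    red_u = blue_u = 0
    red_expects_refusal = False
    for _ in range(rounds):
        if red_expects_refusal:
            break  # Reds predictably leave "lunatic" Blues alone
        if blue_refuses:
            red_u += -1    # Red pays the cost of a carried-out threat
            blue_u += -20  # Blue eats the punishment to build a reputation
            red_expects_refusal = True
        else:
            red_u += 5     # Blue capitulates, Red profits
            blue_u += -10
    return red_u, blue_u

capitulator = iterated_extortion(blue_refuses=False)  # (50, -100)
lunatic = iterated_extortion(blue_refuses=True)       # (-1, -20)
```

Over ten rounds the "lunatic" Blue ends at -20 rather than -100, and Red ends below 0, i.e. Red would prefer not to have extorted at all, which is the point of the convincingly-signalled lunacy.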

Comment author: JamesAndrix 10 December 2010 06:30:52PM 1 point [-]

Why is the "default" special here? If bargaining or blackmail did happen, we know that "default" is impossible.

This seems like you're setting up a game scenario, and then telling us at the beginning of the second turn that we should ignore what the other player did on the first turn.

Initiating negotiations is an in-game choice. The blackmailer's choice imposes a cost on me, me pre-committing makes the blackmailer's payoff 0-0.

And in most cases it takes extra work to attempt blackmail. On the other hand, you should be open to blackmail by your enemies, since it would benefit them to harm you.

Comment author: cousin_it 10 December 2010 03:27:30PM *  1 point [-]

Couldn't parse what FAWS's comment was saying, but the following may be similar in spirit:

Assume we have a two-player game with a unique Nash equilibrium. Now allow player 1 to make a credible threat ("if you do X then I'll do Y"), and then player 2 must react to it "optimally" without making any prior precommitments of his own. If the resulting play makes player 1 better off and player 2 worse off than they would've fared under the Nash equilibrium, then let's say player 1 is being "too pushy".

For example, take the PD and additionally give player 1 the option to trigger a nuke and kill everyone. The resulting game still has a unique Nash equilibrium (defect,defect), but player 1 can threaten to trigger the nuke unless player 2 cooperates, then defect and get higher utility. Looks legit?

No idea what to say about games with multiple equilibria (e.g. bargaining games) or about the decision-theoretic context, which is way more tricky.
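The PD-plus-nuke game can be checked by brute force over pure strategies. The payoff numbers below are my own illustrative choice (standard PD values plus a mutually ruinous nuke), not taken from the comment:

```python
# Prisoner's Dilemma payoffs (player 1, player 2), plus a nuke action N
# for player 1 that kills everyone regardless of player 2's move.
payoffs = {
    ('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
    ('D', 'C'): (3, 0), ('D', 'D'): (1, 1),
    ('N', 'C'): (-10, -10), ('N', 'D'): (-10, -10),
}

def pure_nash(p):
    """Return all pure-strategy Nash equilibria of a two-player game."""
    acts1 = {a for a, _ in p}
    acts2 = {b for _, b in p}
    eqs = []
    for (a, b), (u1, u2) in p.items():
        if (all(p[(a2, b)][0] <= u1 for a2 in acts1)
                and all(p[(a, b2)][1] <= u2 for b2 in acts2)):
            eqs.append((a, b))
    return eqs

equilibria = pure_nash(payoffs)        # [('D', 'D')] -- the nuke adds nothing
threat_outcome = payoffs[('D', 'C')]   # (3, 0): what "cooperate or I nuke" yields
```

The nuke action creates no new equilibrium (it is strictly dominated for player 1), but the threatened play (D, C) gives player 1 a payoff of 3 > 1 and player 2 a payoff of 0 < 1 relative to the Nash outcome, matching the "too pushy" definition.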

Comment author: JGWeissman 10 December 2010 01:17:33AM 1 point [-]

It seems that if we could explain what we mean by a "default" action, the explanation of blackmail in terms of the "default" action would work.

Comment author: Vladimir_Nesov 10 December 2010 01:18:32AM *  0 points [-]

Indeed, but since it's not clear what that means, "blackmail" remains a mystery as well.

Comment author: Armok_GoB 17 December 2010 11:25:29PM 0 points [-]

Does this mean we shouldn't care about the law? Regardless of the exact definition, I'm pretty sure that building a prison and putting me in it is not the default.

It still seems like a bad idea to not follow the law, even if that'd lead to a society that doesn't have laws, so where does my thinking go wrong? My current heuristic of "don't submit to blackmail, except when done by this or this or this organization" seems very ugly, especially as it isn't clear what organizations should be in that whitelist.

Comment author: Perplexed 10 December 2010 01:45:41AM 0 points [-]

I made this comment on the thread where this question was originally raised.

Comment author: nshepperd 10 December 2010 02:37:58AM -1 points [-]

The decision to blackmail is made in the hope of gaining something by your responding. If the blackmailer knows that you won't respond, their expected gain is drastically lowered, and they will probably decide to do something else instead ("something that takes less effort"). I suggest that maybe this is what we could call the "default". So if your expected utility is lower in the "default" case it's bargaining; otherwise, blackmail.

However this definition would call an offer "bargaining" if the "default" was to do something else (more) harmful to you, which I'm unsure is correct. But it does make "don't respond to blackmail" rather tautologically true.

Comment author: [deleted] 13 June 2014 06:08:44AM -2 points [-]

"Blackmail consists of two things, each indisputably legal on their own; yet, when combined in a single act, the result is considered a crime. What are the two things? First, there is either a threat or an offer. In the former case, it is, typically, to publicise an embarrassing secret; in the latter, it is to remain silent about this information. Second, there is a demand or a request for funds or other valuable considerations. When put together, there is a threat that, unless paid off, the secret will be told." -Walter Block