Why is the "default" special here?
Because in a blackmail, I do not wish the trade to happen at all. Let the "default" outcome for a trade T be one where the trade doesn't happen. Assume that my partner (the Baron) gets to decide whether T happens or not.
If T is a blackmail, then every option is worse than not-T. So, if I can commit to ensuring that T is also negative for the Baron, then the Baron won't let T happen. This gives a definition for blackmail: a trade T where every option is worse than not-T, but where I can commit to actions that ensure that T is negative for the person who decides whether T happens or not.
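As a rough sketch, this definition can be checked against a toy payoff table. All the numbers and option names below are invented for illustration, with the "default" outcome not-T normalized to zero for both parties:

```python
# Outcomes of trade T as (my_payoff, baron_payoff) pairs, measured
# relative to the default outcome not-T, which scores (0, 0).
# These numbers are illustrative assumptions, not taken from the text.
outcomes_T = {
    "pay_the_blackmailer": (-10, +5),
    "refuse_and_take_the_hit": (-20, -3),
}

def is_blackmail(outcomes):
    """T is a blackmail (on the above definition) if every option under
    T is worse for me than not-T, yet I can commit to some option that
    makes T negative for the Baron, who decides whether T happens."""
    all_worse_for_me = all(mine < 0 for mine, _ in outcomes.values())
    can_hurt_baron = any(baron < 0 for _, baron in outcomes.values())
    return all_worse_for_me and can_hurt_baron

print(is_blackmail(outcomes_T))  # -> True: by committing to "refuse",
                                 # I make T negative for the Baron,
                                 # so he won't let T happen.
```

An ordinary mutually beneficial trade, say `{"sell": (5, 5)}`, fails the first condition and is not classified as blackmail.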
Let's contrast this with another trade T, with no blackmail elements to it, where I am a monopolist or monopsonist. It is still to my advantage to credibly commit to rejecting everything if I don't get 99% of the profit. However, I am limited by the fact that I want the trade to happen; I can't commit to any option that is actually harmful to the Baron. He will trade with me as long as he doesn't lose; his 'default' ensures that I have to give him something.
Finally, most trades are not monopolist or monopsonist. In this case, it is not to my advantage to precommit to taking more than "my fair (market) share" of the profit, as that will cause the trade to fail; the Baron's default is higher (he can trade with others) so I have to offer him at least that.
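The constraint in both trade cases can be sketched with made-up numbers: the Baron's "default" (his outside option) puts a floor under what I must offer him, whether that floor is zero (monopoly) or the going market rate (competition):

```python
# Toy bargaining sketch; all figures are hypothetical.
# surplus: total profit the trade creates.
# baron_outside: what the Baron gets from his default of not trading
# with me (0 under monopoly, his market alternative otherwise).

def best_credible_demand(surplus, baron_outside, epsilon=1.0):
    """The most I can credibly commit to demanding: since I want the
    trade to happen, I can't commit to options that harm the Baron,
    so I must leave him at least his outside option plus a sliver."""
    return surplus - baron_outside - epsilon

# Monopolist: the Baron's only alternative is not trading (worth 0),
# so I can claim nearly the whole surplus.
print(best_credible_demand(100, 0))   # -> 99.0

# Competitive market: the Baron can get 60 by trading with others,
# so his default caps what I can demand.
print(best_credible_demand(100, 60))  # -> 39.0
```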
Now, I don't want to go down the rabbit hole of dueling pre-commitments, or the proper decision-theoretic way of resolving the issue (blackmailing someone and precommitting to avoid blackmail are very similar processes). But it does show why you would want to precommit to a particular action in blackmail situations, but not in others: you do not control whether the trade happens, and blackmails are trades that you do not want to see happen. You can call not-trading the 'default' if you wish, but the salient fact is that it is better for you, not that it is default.
Because in a blackmail, I do not wish the trade to happen at all.
Something has to happen, and you must choose from the options you're dealt. Maybe I don't wish to pay for my Internet connection, and would rather have the Flying Spaghetti Monster provide it to me free of charge, and also grant me $1000 as a bonus? This seems to qualify as not wishing I had to choose a provider at all. But in reality, I have to choose, and FSM is not available as an option, just as not being blackmailed is not available as an option (by assumption; the agent doesn't need...
Keep in mind: Controlling Constant Programs, Notion of Preference in Ambient Control.
There is a reasonable game-theoretic heuristic, "don't respond to blackmail" or "don't negotiate with terrorists". But what is actually meant by the word "blackmail" here? Does it have a place as a fundamental decision-theoretic concept, or is it merely an affective category, a class of situations activating a certain psychological adaptation that expresses disapproval of certain decisions and on net protects (benefits) you, like the adaptations that respond to "being rude" or "offense"?
We, as humans, have a concept of a "default", a "do-nothing strategy". Other plans can be compared to the moral value of this default: doing harm would be something worse than the default, doing good something better than the default.
Blackmail is then a situation where by decision of another agent ("blackmailer"), you are presented with two options, both of which are harmful to you (worse than the default), and one of which is better for the blackmailer. The alternative (if the blackmailer decides not to blackmail) is the default.
Compare this with the same scenario, but with the "default" action of the other agent being worse for you than the given options. This would be called normal bargaining, as in trade, where both parties benefit from the exchange of goods, but to different extents depending on the price that is settled on.
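On these two definitions, the cases differ only in where the other agent's "default" sits relative to the options on offer. A minimal sketch, with all payoffs hypothetical and measured relative to my own default of 0:

```python
def classify(my_payoffs_for_options, my_payoff_if_other_defaults):
    """my_payoffs_for_options: my payoffs for the choices the other
    agent's decision presents me with.
    my_payoff_if_other_defaults: my payoff if the other agent takes
    their 'default' action instead.
    Blackmail: every option presented is worse than my default, while
    the other agent's default (not blackmailing) leaves me at mine.
    Bargaining: the other agent's default (no trade) is itself worse
    for me than the options on offer."""
    if (all(p < 0 for p in my_payoffs_for_options)
            and my_payoff_if_other_defaults == 0):
        return "blackmail"
    if my_payoff_if_other_defaults < min(my_payoffs_for_options):
        return "bargaining"
    return "unclear"

print(classify([-10, -20], 0))  # -> blackmail: both options harm me
print(classify([5, 2], -8))     # -> bargaining: no-trade is worst for me
```

Note that this sketch classifies the situation only by reference to the counterfactual default, which is exactly what the next paragraph calls into question.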
Why is the "default" special here? If bargaining or blackmail did happen, we know that "default" is impossible. How can we tell two situations apart then, from their payoffs (or models of uncertainty about the outcomes) alone? It's necessary to tell these situations apart to manage not responding to threats, but at the same time cooperating in trade (instead of making things as bad as you can for the trade partner, no matter what it costs you). Otherwise, abstaining from doing harm looks exactly like doing good. A charitable gift of not blowing up your car and so on.
My hypothesis is that "blackmail" is what the suggestion of your mind to not cooperate feels like from the inside, the answer to a difficult problem computed by cognitive algorithms you don't understand, and not a simple property of the decision problem itself. By saying "don't respond to blackmail", you are pushing most of the hard work into intuitive categorization of decision problems into "blackmail" and "trade", with only correct interpretation of the results of that categorization left as an explicit exercise.
(A possible direction for formalizing these concepts involves introducing some kind of notion of resources, maybe amount of control, and instrumental vs. terminal spending, so that the "default" corresponds to less instrumental spending of controlled resources, but I don't see it clearly.)
(Let's keep on topic and not refer to powerful AIs or FAI in this thread, only discuss the concept of blackmail in itself, in decision-theoretic context.)