It's an old book, I know, and one that many of us have already read. But if you haven't, you should.
If there's anything in the world that deserves to be called a martial art of rationality, this book is the closest approximation yet. Forget rationalist Judo: this is rationalist eye-gouging, rationalist gang warfare, rationalist nuclear deterrence. Techniques that let you win, but you don't want to look in the mirror afterward.
Imagine you and I have been separately parachuted into an unknown mountainous area. We both have maps and radios, and we know our own positions, but don't know each other's positions. The task is to rendezvous. Normally we'd coordinate by radio and pick a suitable meeting point, but this time you got lucky. So lucky in fact that I want to strangle you: upon landing you discovered that your radio is broken. It can transmit but not receive.
Two days of rock-climbing and stream-crossing later, tired and dirty, I arrive at the hill where you've been sitting all this time smugly enjoying your lack of information.
And after we split the prize and cash our checks, I learn that you broke the radio on purpose.
Schelling's book walks you through numerous conflict situations where an unintuitive and often self-limiting move helps you win, slowly building up to the topic of nuclear deterrence between the US and the Soviets. And it's not idle speculation either: the author worked at the White House at the dawn of the Cold War and his theories eventually found wide military application in deterrence and arms control. Here's a selection of quotes to give you a flavor: the whole book is like this, except interspersed with game theory math.
The use of a professional collecting agency by a business firm for the collection of debts is a means of achieving unilateral rather than bilateral communication with its debtors and of being therefore unavailable to hear pleas or threats from the debtors.
A sufficiently severe and certain penalty on the payment of blackmail can protect a potential victim.
One may have to pay the bribed voter if the election is won, not on how he voted.
I can block your car in the road by placing my car in your way; my deterrent threat is passive, the decision to collide is up to you. If you, however, find me in your way and threaten to collide unless I move, you enjoy no such advantage: the decision to collide is still yours, and I enjoy deterrence. You have to arrange to have to collide unless I move, and that is a degree more complicated.
We have learned that the threat of massive destruction may deter an enemy only if there is a corresponding implicit promise of nondestruction in the event he complies, so that we must consider whether too great a capacity to strike him by surprise may induce him to strike first to avoid being disarmed by a first strike from us.
Leo Szilard has even pointed to the paradox that one might wish to confer immunity on foreign spies rather than subject them to prosecution, since they may be the only means by which the enemy can obtain persuasive evidence of the important truth that we are making no preparations for embarking on a surprise attack.
I sometimes think of game theory as being roughly divided into three parts, like Gaul. There's competitive zero-sum game theory, there's cooperative game theory, and there are games where players compete but also have some shared interest. Except this third part isn't a middle ground. It's actually better thought of as ultra-competitive game theory. Zero-sum settings are relatively harmless: you minimax and that's it. It's the variable-sum games that make you nuke your neighbour.
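To make the "you minimax and that's it" claim concrete, here's a minimal sketch (the payoff matrix and names are mine, not from the book): in a zero-sum game each player can compute a security level straight from the matrix, and when the two levels coincide the game is effectively solved, leaving no room for threats or commitment tricks.

```python
# A minimal sketch of minimax in a zero-sum game; the payoff numbers are illustrative.
# Entries are the row player's winnings (and therefore the column player's losses).
payoffs = [
    [4, 1, 3],
    [5, 2, 6],
    [1, 0, 7],
]

# Row player's security level: the best payoff they can guarantee regardless of the opponent.
maximin = max(min(row) for row in payoffs)

# Column player's security level: the most they can be forced to concede.
minimax = min(max(row[j] for row in payoffs) for j in range(len(payoffs[0])))

print(maximin, minimax)  # 2 2 -- a saddle point: both players just play their safe strategy
```

In a variable-sum game like Chicken there is no such closed-form escape hatch, which is exactly where Schelling's commitment and threat tactics come in.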
Some time ago, in my wild and reckless youth that hopefully isn't over yet, a certain ex-girlfriend took to harassing me with suicide threats. (So keeping her alive was presumably our common interest in this variable-sum game.) As soon as I got around to looking at the situation through Schelling goggles, it became clear that ignoring the threats would just lead to escalation. The correct solution was making myself unavailable for threats: blacklist the phone number, block the email, spend a lot of time away from home, and if any messages got through, pretend I didn't receive them anyway. It worked. It felt kinda bad, but it worked.
I was rather intemperate, and on a different day maybe I would have been less so; or maybe I wouldn't. I am sorry that I offended Wei Dai.
But then, Wei Dai's posting was intemperate, as is your comment. I mention this not to excuse mine, just to point out how easily this happens. This may be partly the dynamics of the online medium, but in the present case I think it is also because we are dealing in fantasy here, and fantasy always has to be more extreme than reality, to make up for its own unreality.
You compare the problem to Eliezer's TORTURE vs SPECKS, but there is an important difference between them. TORTURE vs SPECKS is fiction, while Wei Dai spoke of an actual juncture in history within living memory, and of actions that actually could have been taken.
What is the TORTURE vs SPECKS problem? The formulation of the problem is at that link, but what sort of thing is this problem? Given the follow-up posting the very next day, it seems likely to me that the intention was to manifest people's reactions to the problem. Perhaps it is also a touchstone, to see who has and who has not learned the material on which it stands. What it is not is a genuine problem which anyone needs to solve as anything but a thought experiment. TORTURE vs SPECKS is not going to happen. Other tradeoffs between great evil to one and small evils to many do happen; this one never will. While 50 years of torture is, regrettably, conceivably possible here and now in the real world, and may be happening to someone, somewhere, right now, there is no possibility of 3^^^3 specks. Why 3^^^3? Because that is intended to be a number large enough to produce the desired conclusion. Anyone whose objection is that it isn't a big enough number, besides manifesting a poor grasp of its magnitude, can simply add another up-arrow. The problem is a fictional one, and as such exhibits the reverse meta-causality characteristic of fiction: 3^^^3 is in the problem because the point of the problem is for the solution to be TORTURE; that TORTURE is the solution is not caused by an actual possibility of 3^^^3 specks.
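For a sense of the magnitude involved, here is a small sketch of Knuth's up-arrow notation, which the ^^^ abbreviates (the recursive definition and code are mine, not from the original posts):

```python
def up(a, n, b):
    """Knuth's up-arrow: a, then n arrows, then b. One arrow is plain exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3  = 27
print(up(3, 2, 3))  # 3^^3 = 3**3**3 = 7,625,597,484,987
# up(3, 3, 3) is 3^^^3: a power tower of 3s some 7.6 trillion levels tall.
# It cannot be evaluated, and adding another up-arrow makes it unimaginably larger still.
```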
In another posting a year later, Eliezer speaks of ethical rules of the sort that you just don't break, as safety rails on a cliff he didn't see. This does not sit well with the TORTURE vs SPECKS material, but it doesn't have to: TORTURE vs SPECKS is fiction and the later posting is about real (though unspecified) actions.
So, the Cold War. Wei Dai would have the US after WWII threatening to nuke any country attempting to develop or test nuclear weapons. To the scenario of later discovering that (for example) the UK has a well-developed covert nuclear program, he responds:
It should, should it? And that, in Wei's mind, is adequate justification for pressing the button to kill millions of people for not doing what he told them to do. Is this rationality, or the politics of two-year-olds with nukes?
I seem to be getting intemperate again.
It's a poor sort of rationality that only works against people rational enough to lose. Or perhaps they can be superrational and precommit to developing their programme regardless of what threats you make? Then rationally, you must see that it would therefore be futile to make such threats. And so on. How's TDT/UDT with self-modifying agents modelling themselves and each other coming along?
This is fantasy masquerading as rationality. I stand by what I said back then:
To make these threats, you must be willing to actually do what you have said you will do if your enemy does not surrender. The moment you think "but rationally he has to surrender, so I won't have to do this", you are making an excuse for yourself not to carry it out. Whatever belief you can muster that you would carry it out will evaporate like dew in the desert when the time comes.
How are you going to launch those nukes, anyway?
Using the word "intemperate" in this way is a remarkable dodge. Wei Dai's comment was entirely within the scope of the (admittedly extreme) hypothetical under discussion. Your comment contained a paragraph composed solely of vile personal insult and slanted misrepresentation of Wei Dai's statements. The tone of my response was deliberate and quite restrained relative to how I felt.