It's an old book, I know, and one that many of us have already read. But if you haven't, you should.

If there's anything in the world that deserves to be called a martial art of rationality, this book is the closest approximation yet. Forget rationalist Judo: this is rationalist eye-gouging, rationalist gang warfare, rationalist nuclear deterrence. Techniques that let you win, but you don't want to look in the mirror afterward.

Imagine you and I have been separately parachuted into an unknown mountainous area. We both have maps and radios, and we know our own positions, but don't know each other's positions. The task is to rendezvous. Normally we'd coordinate by radio and pick a suitable meeting point, but this time you got lucky. So lucky in fact that I want to strangle you: upon landing you discovered that your radio is broken. It can transmit but not receive.

Two days of rock-climbing and stream-crossing later, tired and dirty, I arrive at the hill where you've been sitting all this time smugly enjoying your lack of information.

And after we split the prize and cash our checks I learn that you broke the radio on purpose.

Schelling's book walks you through numerous conflict situations where an unintuitive and often self-limiting move helps you win, slowly building up to the topic of nuclear deterrence between the US and the Soviets. And it's not idle speculation either: the author worked at the White House at the dawn of the Cold War and his theories eventually found wide military application in deterrence and arms control. Here's a selection of quotes to give you a flavor: the whole book is like this, except interspersed with game theory math.

The use of a professional collecting agency by a business firm for the collection of debts is a means of achieving unilateral rather than bilateral communication with its debtors and of being therefore unavailable to hear pleas or threats from the debtors.

A sufficiently severe and certain penalty on the payment of blackmail can protect a potential victim.

One may have to pay the bribed voter if the election is won, not on how he voted.

I can block your car in the road by placing my car in your way; my deterrent threat is passive, the decision to collide is up to you. If you, however, find me in your way and threaten to collide unless I move, you enjoy no such advantage: the decision to collide is still yours, and I enjoy deterrence. You have to arrange to have to collide unless I move, and that is a degree more complicated.

We have learned that the threat of massive destruction may deter an enemy only if there is a corresponding implicit promise of nondestruction in the event he complies, so that we must consider whether too great a capacity to strike him by surprise may induce him to strike first to avoid being disarmed by a first strike from us.

Leo Szilard has even pointed to the paradox that one might wish to confer immunity on foreign spies rather than subject them to prosecution, since they may be the only means by which the enemy can obtain persuasive evidence of the important truth that we are making no preparations for embarking on a surprise attack.

I sometimes think of game theory as being roughly divided in three parts, like Gaul. There's competitive zero-sum game theory, there's cooperative game theory, and there are games where players compete but also have some shared interest. Except this third part isn't a middle ground. It's actually better thought of as ultra-competitive game theory. Zero-sum settings are relatively harmless: you minimax and that's it. It's the variable-sum games that make you nuke your neighbour.

Some time ago, in my wild and reckless youth that hopefully isn't over yet, a certain ex-girlfriend took to harassing me with suicide threats. (So making her stay alive was presumably our common interest in this variable-sum game.) As soon as I got around to looking at the situation through Schelling goggles, it became clear that ignoring the threats just leads to escalation. The correct solution was making myself unavailable for threats. Blacklist the phone number, block the email, spend a lot of time away from home. If any messages get through, pretend I didn't receive them anyway. It worked. It felt kinda bad, but it worked.

Hopefully you can also find something that works.

A good reference, but it's worth remembering that if I tried the radio sabotage trick in real life, either I'd accidentally break the transmit capability as well as the receive, or I'd be there until the deadline had come and gone, happily blabbering about how I'm on the hill that looks like a pointy hat, while you were 20 miles away on a different hill that also looked like a pointy hat, cursing me, my radio and my inadequate directions.

In other words, like most things that are counterintuitive, these findings are counterintuitive precisely because their applicability in real life is the exception rather than the rule; by all means let's recognize the exceptions, but without forgetting what they are.

In the post I tried pretty hard to show the applicability of the techniques to real life, and so did Schelling. Apparently we haven't succeeded. Maybe some more quotes will tip the scales? Something of a more general nature, not ad hoc trickery?

If one is committed to punish a certain type of behavior when it reaches certain limits, but the limits are not carefully and objectively defined, the party threatened will realize that when the time comes to decide whether the threat must be enforced or not, his interest and that of the threatening party will coincide in an attempt to avoid the mutually unpleasant consequences.

Or what do you say to this:

Among the legal privileges of corporations, two that are mentioned in textbooks are the right to sue and the "right" to be sued. Who wants to be sued! But the right to be sued is the power to make a promise: to borrow money, to enter a contract, to do business with someone who might be damaged. If suit does arise, the "right" seems a liability in retrospect; beforehand it was a prerequisite to doing business.

Or this:

If each party agrees to send a million dollars to the Red Cross on condition the other does, each may

...
rwallace (+6, 15y)
Thanks, those are better examples.

In other words, like most things that are counterintuitive, these findings are counterintuitive precisely because their applicability in real life is the exception rather than the rule; by all means let's recognize the exceptions, but without forgetting what they are.

The examples in the original post are not exceptions. It just takes a while to recognise them under the veneers of social norms and instinctual behaviours.

The broken radio, for example, is exactly what I see when attempting to communicate with those who would present themselves as higher status. Blatant stupidity (broken receiver) is often a signal, not a weakness. (And I can incorporate this understanding when dealing with said people, which I find incredibly useful.)

rwallace (+9, 15y)
Good point, though the results of this are frequently as disastrous as in my observation about the broken radio trick. (Much of Dilbert can be seen as examples thereof.)
wedrifid (+1, 15y)
I think you're right. It does seem to me that in the current environment the 'signal status through incomprehension' move gives real losses to people rather frequently, as is the case with PHBs. I wonder, though, how much my observations of the phenomenon are biased by selection. Perhaps I am far more likely to notice this sort of silliness when it is quite obvious that the signaller is going against his own self-interest. That is certainly when it gets on my nerves the most!
[anonymous] (0, 12y)
wedrifid (+3, 12y)
Not quite. There is an element of cooperation involved but the payoff structure is qualitatively different, as is the timing. If you defect in the PD then the other person is better off defecting as well. If you break your radio the other guy is best off not breaking his. The PD is simultaneous while the radio is not. (So if you break your radio the other guy is able to hunt you down and bitch slap you.)
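A minimal sketch of that difference in best responses, using standard illustrative payoffs (the numbers are assumptions; only the orderings matter):

```python
# "D" = defect (PD) / break your radio (Chicken); "C" = cooperate / keep it.
# Payoff tables for the row player; the numbers are illustrative only.
PD      = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
CHICKEN = {("C", "C"): 2, ("C", "D"): 1, ("D", "C"): 3, ("D", "D"): 0}

def best_response(table, their_move):
    """My payoff-maximizing reply to the opponent's fixed move."""
    return max("CD", key=lambda mine: table[(mine, their_move)])

print(best_response(PD, "D"))       # 'D': if you defect in the PD, I defect too
print(best_response(CHICKEN, "D"))  # 'C': if you break your radio, I keep mine
```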
[anonymous] (0, 12y)
Ah, yeah. Somehow only the notion that "if you don't cooperate, something undesirable will happen (to someone)" remained salient in my mind.

Slightly related (talking about game theory): one of the most bizarre things was the 1994 football/soccer match between Grenada and Barbados, in which both teams tried to win the game by deliberately scoring against themselves (with the opponents trying to prevent that).

A search through the comments on this article turns up exactly zero instances of the term "Vietnam".

Taking a hard look at what Schelling tried when faced with the real-world 'game' in Vietnam is enlightening as to the ups and downs of actually putting his theories -- or game theory in general -- into practice.

Fred Kaplan's piece in Slate from when Schelling won the Nobel is a good start:

http://www.slate.com/id/2127862/

Paul Crowley (+9, 13y)
Terrible article in many ways - this is a very silly thing to say:

BTW, after a conversation with Eliezer at the weekend, I have just asked my employers to buy this book.
Richard_Kennaway (+2, 13y)
What do your employers do, that the book is relevant there? What they (assuming the CV on your web site is up to date) say about themselves on their web site is curiously unspecific.
Paul Crowley (+9, 13y)
I work for a computer consultancy; we do all sorts of things. The book is relevant because while we generally enjoy excellent relations with all our clients, it can sometimes happen that they muck us about, for example on rates.
Richard_Kennaway (+7, 15y)
Thanks for that extra light. I have the 1980 edition of "The strategy of conflict" from the library at the moment. It's a reissue of the 1960 edition with an added preface by Schelling. Despite the Slate article closing by saying "Tom Schelling didn't write much about war after that [the Vietnam War]. He'd learned the limitations of his craft.", in his 1980 preface he judges the book's content as still "mostly all right".

Having read only a portion of the book so far (thanks for the pdf cousin_it and Alicorn!), I've noticed that the techniques and strategies Schelling goes over are applicable to my struggles with akrasia.

I'm sure it's been said before on lesswrong that when there's a conflict between immediate and delayed gratification, you can think of yourself as two agents: one rational, one emotional; one thinking in the present, one able to plan future moves and regret mistakes. These agents are obviously having a conflict, and I often find Rational Me (RM) losing ground to Irrational Me (IM) in situations that this book describes perfectly.

Say RM wants to work, and IM wants to watch TV online. If RM settles on "some" TV, IM can exploit the vagueness of that non-natural settling point and watch an entire season of a show. The two most stable negotiating points seem to be "no TV" and "unlimited amounts of TV".

Other techniques people use to avoid akrasia map really well onto Schelling's conflict strategies, like breaking up commitments into small chunks ("fifteen minutes of work, then I can have a small reward") and forming a commitment with a third party to force your hand (like using stickk.com or working with friends or classmates).

So what happens in the broken radio example if both persons have already read Schelling's book? Nobody gets the prize? I mean, how is such a situation resolved? If everybody perfects the art of rationality, who wins and who loses?

cousin_it (+8, 15y)
If it's common knowledge that both have read Schelling's book, the game is isomorphic to Chicken, which has been extensively studied.
ajayjetti (+1, 15y)
So rationality doesn't always mean "win-win"? In a Chicken situation, the best thing for "both" the persons is to remain alive, which can be done by one of them (or both) "swerving", right? There is a good chance that one of them is called chicken.

Neither actual human rationality nor its best available game-theoretic formalizations (today) necessarily lead to win-win.

Technologos (+7, 15y)
Indeed, the difference between Winning and "win-win" is important. Rationality wouldn't be much of a martial art if we limited the acceptable results to those in which all parties win.
Linch (+2, 10y)
Hi! First post here. You might be interested in knowing that not only is the broken radio example isomorphic to "Chicken," but there's a real-life solution to the Chicken game that is very close to "destroying your receiver." That is, you can set up a "commitment" that you will, in fact, not swerve. Of course, standard game theory tells us that this is not a credible threat (since dying is bad). Thus, you must make your commitment binding, e.g., by ripping out the steering wheel.
TheOtherDave (+1, 10y)
And it helps to do it first. Being the second player to rip out the steering wheel is a whole other matter.
JJ10DMAN (+2, 12y)
The example was just to make an illustration, and I wouldn't read into it too much. It has a lot of assumptions like, "I would rather sit around doing absolutely nothing than take stroll in the wilderness," and, "I have no possible landing position I can claim in order to make my preferred meeting point seem like a fair compromise, and therefore I must break my radio."
[anonymous] (0, 15y)
I should've asked you to work it out for yourself, 'cause if you can't do that you really have no business commenting here, but... okay. If it's common knowledge that both have read Schelling's book, the game has a Nash equilibrium in mixed strategies. You break your radio with a certain probability and your buddy does the same.
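A minimal sketch of that mixed equilibrium. The payoff numbers are made up (3 = your claim sticks, 2 = both radios work and you coordinate normally, 1 = you hike to the other's hill, 0 = mutual standoff); only their ordering comes from the game:

```python
# Row player's expected payoffs against an opponent who "breaks" with
# probability p. Payoff numbers are illustrative; any Chicken ordering works.
WIN, COORD, CAPITULATE, STANDOFF = 3, 2, 1, 0

def eu_break(p):   # I break my radio too
    return p * STANDOFF + (1 - p) * WIN

def eu_keep(p):    # I keep mine working and go along
    return p * CAPITULATE + (1 - p) * COORD

# In the mixed equilibrium each player breaks with the probability p* that
# makes the other indifferent: solve eu_break(p) == eu_keep(p), which gives
# p* = (WIN - COORD) / ((WIN - COORD) + (CAPITULATE - STANDOFF)).
p_star = (WIN - COORD) / ((WIN - COORD) + (CAPITULATE - STANDOFF))
print(p_star)  # 0.5 for these numbers
assert abs(eu_break(p_star) - eu_keep(p_star)) < 1e-9
```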

*adds book to list of books to read*

This sounds... ruthless, but in an extremely cool way.

Schelling was actually the less ruthless of the pioneers of game theory. The other pioneer was von Neumann, who advocated a unilateral nuclear attack on the USSR before they developed their own nuclear weapons.

By contrast, Schelling invented the red hotline between the US and USSR, since more communication meant less chance of WW3.

Basically he was about ruthlessness for the good of humanity.

Schelling was actually the less ruthless of the pioneers of game theory. The other pioneer was von Neumann, who advocated a unilateral nuclear attack on the USSR before they developed their own nuclear weapons.

One thing I don't understand: why didn't the US announce at the end of World War II that it would nuke any country that attempts to develop a nuclear weapon or conducts a nuclear bomb test? If it had done that, then there would have been no need to actually nuke anyone. Was game theory invented too late?

You are the President of the US. You make this announcement. Two years later, your spies tell you that the UK has a well-advanced nuclear bomb research programme. The world is, nevertheless, as peaceful on the whole as in fact it was in the real timeline.

Do you nuke London?

Wei Dai (+6, 15y)
I'd give the following announcement: "People of the UK, please vote your government out of office and shut down your nuclear program. If you fail to do so, we will start nuking the following sites in sequence, one per day, starting [some date]." Well, I'd go through some secret diplomacy first, but that would be my endgame if all else failed. Some backward induction should convince the UK government not to start the nuclear program in the first place.

I can think, straight away, of four or five reasons why this would have been very much the wrong thing to do.

  • You make an enemy of your biggest allies. Nukes or no, the US has never been more powerful than the rest of the world put together.
  • You don't react to coming out of one Cold War by initiating another.
  • This strategy is pointless unless you plan to follow through. The regime that laid down that threat would either be strung up when they launched, or voted straight out when they didn't.
  • Mutually assured destruction was what stopped nuclear war happening. Setting one country up as the Guardian of the Nukes is stupid, even if you are that country. I'm not a yank, but I believe this sort of idea is pretty big in the constitution.
  • Attacking London is a shortcut to getting a pounding. This one's just conjecture.

Basically he was about ruthlessness for the good of humanity.

Yeah, I think the clue is in there. Better to be about the good of humanity, and ruthless if that's what's called for. Setting yourself up as 'the guy who has the balls to make the tough decisions' usually denotes you as a nutjob. Case in point: von Neumann suggesting launching was the right strategy. I don't think anyone would argue today that he was right, though back then the decision must have seemed pretty much impossible to make.

Case in point: von Neumann suggesting launching was the right strategy. I don't think anyone would argue today that he was right, though back then the decision must have seemed pretty much impossible to make.

Survivorship bias. There were some very near misses (Cuban Missile Crisis, Stanislav Petrov, etc.), and it seems reasonable to conclude that a substantial fraction of the Everett branches that came out of our 1946 included a global thermonuclear war.

I'm not willing to conclude that von Neumann was right, but the fact that we avoided nuclear war isn't clear proof he was wrong.

Vladimir_Nesov (+1, 15y)
If the allies are rational, they should agree that it's in their interest to establish this strategy. The enemy of everyone is all-out nuclear war.

This strikes me as a variant of the ultimatum game. The allies would have to accept a large asymmetry of power. If even one of them rejects the ultimatum you're stuck with the prospect of giving up your strategy (having burned most or all of your political capital with other nations), or committing mass murder.

When you add in the inability of governments to make binding commitments, this doesn't strike me as a viable strategy.

Vladimir_Nesov (+8, 15y)
Links in the Markdown syntax are written like this: [link text](http://example.com)
Larks (+9, 13y)
The UK bomb was developed with the express purpose of providing independence from the US. If the US could keep the USSR nuke-free there'd be less need for a UK bomb. Also, it's possible that the US could tone down its anti-imperialist rhetoric/covert funding so as to not threaten the Empire.
Kaj_Sotala (+4, 15y)
I think that, by the time you've reached the point where you're about to kill millions for the sake of the greater good, you'd do well to consider all the ethical injunctions this violates. (Especially given all the different ways this could go wrong that UnholySmoke could come up with off the top of his head.)

Kaj, I was discussing a hypothetical nuclear strategy. We can't discuss any such strategy without involving the possibility of killing millions. Do the ethical injunctions imply that such discussions shouldn't occur?

Recall that MAD required that the US commit itself to destroy the Soviet Union if it detected that the USSR launched their nuclear missiles. Does MAD also violate ethical injunctions? Should it also not have been discussed? (How many different ways could things have gone wrong with MAD?)

Kaj_Sotala (+2, 15y)
Of course not. I'm not saying the strategy shouldn't be discussed, I'm saying that you seem to be expressing greater certainty of your proposed approach being correct than would be warranted. (I wouldn't object to people discussing math, but I would object if somebody thought 2 + 2 = 5.)
handoflixue (0, 13y)
And the world as we know it is still around because Stanislav Petrov ignored that order and insisted the US couldn't possibly be stupid enough to actually launch that sort of attack. I would pray that the US operators were equally sensible, but maybe they just got lucky and never had a technical glitch threaten the existence of humanity.
Richard_Kennaway (0, 15y)
The entire civilised world (which at this point does not include anyone who is still a member of the US government) is in uproar. Your attempts at secret diplomacy are leaked immediately. The people of the UK make tea in your general direction. Protesters march on the White House. When do you push the button, and how will you keep order in your own country afterwards?

What I'm really getting at here is that your bland willingness to murder millions of non-combatants of a friendly power in peacetime because they do not accede to your empire-building unfits you for inclusion in the human race.

Also, that it's easy to win these games in your imagination. You just have to think, I will do this, and then my opponent must rationally do that. You have a completely watertight argument. Then your opponent goes and does something else. It does not matter that you followed the rules of the logical system if the system itself is inconsistent.

So says the man from his comfy perch in an Everett branch that survived the cold war.

What I'm really getting at here is that [a comment you made on LW] unfits you for inclusion in the human race.

Downvoted for being one of the most awful statements I have ever seen on this site, far and away the most awful to receive so many upvotes. What the fuck, people.

shokwave (+9, 13y)
I doubt RichardKennaway believes Wei_Dai is unfit for inclusion in the human race. What he was saying, and what he received upvotes for, is that anyone who's blandly willing to murder millions of non-combatants of a friendly power in peacetime because they do not accede to empire-building is unfit for inclusion in the human race - and he's right, that sort of person should not be fit for inclusion in the human race. A comment on LW is not the same as that bland willingness to slaughter, and you do yourself no favours by incorrectly paraphrasing it as such.

anyone who's blandly willing to murder millions of non-combatants of a friendly power in peacetime because they do not accede to empire-building is unfit for inclusion in the human race

You do realize that the point of my proposed strategy was to prevent the destruction of Earth (from a potential nuclear war between the US and USSR), and not "empire building"?

I don't understand why Richard and you consider MAD acceptable, but my proposal beyond the pale. Both of you use the words "friendly power in peacetime", which must be relevant somehow but I don't see how. Why would it be ok (i.e., fit for inclusion in the human race) to commit to murdering millions of non-combatants of an enemy power in wartime in order to prevent nuclear war, but not ok to commit to murdering millions of non-combatants of a friendly power in peacetime in service of the same goal?

A comment on LW is not the same as that bland willingness to slaughter, and you do yourself no favours by incorrectly paraphrasing it as such.

I also took Richard's comment personally (he did say "your bland willingness", emphasis added), which is probably why I didn't respond to it.

The issue seems to be that nuking a friendly power in peacetime feels to people pretty much like a trolley problem where you need to shove the fat person. In this particular case, since it isn't a hypothetical, the situation has been made all the more complicated by actual discussion of the historical and current geopolitics surrounding the situation (which essentially amounts to trying to find a clever solution to a trolley problem, or arguing that the fat person wouldn't weigh enough). The reaction is against your apparent strong consequentialism, along with the fact that your strategy wouldn't actually work given the geopolitical situation. It might be interesting to pose an explicitly hypothetical geopolitical situation where this would work and then see how people respond.

shokwave (-1, 13y)
Well, this is evidence against using second-person pronouns to avoid "he/she".
JoshuaZ (+1, 13y)
He could easily have said "bland willingness to" rather than "your bland willingness" so that doesn't seem to be an example where a pronoun is necessary.
shokwave (0, 13y)
No, it's an example where using "you" has caused someone to take something personally. Given that the "he/she" problem is that some people take it personally, I haven't solved the problem, I've just shifted it onto a different group of people.

I was commenting on what he said, not guessing at his beliefs.

I don't think you've made a good case (any case) for your assertion concerning who is and is not to be included in our race. And it's not at all obvious to me that Wei Dai is wrong. I do hope that my lack of conviction on this point doesn't render me unfit for existence.

Anyone willing to deploy a nuclear weapon has a "bland willingness to slaughter". Anyone employing MAD has a "bland willingness to destroy the entire human race".

I suspect that you have no compelling proof that Wei Dai's hypothetical nuclear strategy is in fact wrong, let alone one compelling enough to justify the type of personal attack leveled by RichardKennaway. Would you also accuse Eliezer of displaying a "bland willingness to torture someone for 50 years" and sentence him to exclusion from humanity?

shokwave (+2, 13y)
What I was saying was that a horrendous act is not the same as a comment advising a horrendous act in a hypothetical situation. You conflated the two in paraphrasing RichardKennaway's comment as "comment advising horrendous act in hypothetical situation unfits you for inclusion in the human race" when what he was saying was "horrendous act unfits you for inclusion in the human race".
Richard_Kennaway (-5, 13y)
Vladimir_Nesov (+4, 15y)
Fail.
Viliam_Bur (+2, 13y)
A model of reality, which assumes that an opponent must be rational, is an incorrect model. At best, it is a good approximation that could luckily return a correct answer in some situations. I think this is a frequent bias for smart people -- assuming that (1) my reasoning is flawless, and (2) my opponent is on the same rationality level as me, therefore (3) my opponent must have the same model of situation as me, therefore (4) if I rationally predict that it is best for my opponent to do X, my opponent will really do X. And then my opponent does non-X, and I am like: WTF?!
[anonymous] (0, 15y)
Richard, I'm with Nesov on this one. Don't attack the person making the argument.
BronecianFlyreme (+1, 10y)
Interestingly, it seems to me like the most convenient solution to this problem would be to find some way to make yourself incapable of not nuking anyone who built a nuke. I don't think it's really feasible, but I thought it was worth mentioning just because it matches the article so closely.
Richard_Kennaway (-1, 10y)
I'm sure all extortionists would find it very convenient to be able to say to their victims while breaking their legs, "It's you that's doing this, not me!" And to have the courts accept that as a valid defence, and jail the victim for committing assault on themselves. But the fact is, we cannot conduct brain surgery on ourselves to excise our responsibility. Is it an ability to be desired?
Lumifer (+3, 10y)
You probably thought you were kidding. Not.
eirenicon (+9, 15y)
At the end of WWII, the US's nuclear arsenal was still small and limited. The declaration of such a threat would have made it worth the risk for the USSR to dramatically ramp up their nuclear weapons research, which had been ongoing since 1942. The Soviets tested their first nuke in 1949; at that point or any time earlier, it would have been too late for the US to follow through. They would've had to drop the Marshall Plan and risk starting another "hot war". With their European allies, especially the UK, still struggling economically, the outcome would have been far from assured.
CronoDAS (+6, 15y)
As a practical matter, this would not have been possible. At the end of World War II, the U.S. didn't have enough nuclear weapons to do much more than threaten to blow up a city or two. Furthermore, intercontinental ballistic missiles didn't exist yet; the only way to get a nuclear bomb to its target was to put it in an airplane and hope the airplane doesn't get shot down before it gets to its destination.

According to this book, in May 1949 (months before the Soviets' first bomb test), the US had 133 nuclear bombs and a plan (in case of war) to bomb 70 Soviet cities, but concluded that this was probably insufficient to "bring about capitulation". The book also mentions that the US panicked and sped up the production of nuclear bombs after the Soviet bomb test, so if it had done that earlier, perhaps it would have had enough bombs to deter the Soviets from developing them.

Also, according to this article, the idea of using nuclear weapons to deter the development/testing of fusion weapons was actually proposed, by I. I. Rabi and Enrico Fermi:

They believed that any nation that violated such a prohibition would have to test a prototype weapon; this would be detected by the US and retaliation using the world’s largest stock of atomic bombs should follow. Their proposal gained no traction.

thomblake (-1, 15y)
At the end of the war the US had developed cybernetic anti-aircraft guns to fight the Pacific War, but the Russians did not have them. They had little chance of shooting down our planes using manual sighting.
irarseil (+8, 12y)
I think you should be aware that lesswrong is read in countries other than the USA, and writing about "our planes" in a forum where not everyone is American to mean "American planes" can lead to misunderstandings or can discourage others from taking part in the conversation.
cousin_it (+5, 15y)
How would the US detect attempts to develop nuclear weapons before any tests took place? Should they have nuked the USSR on a well-founded suspicion?
Wei Dai (+8, 15y)
I think from a rational perspective, the answer must be yes. Under this hypothetical policy, if the USSR didn't want to be nuked, then it would have done whatever was necessary to dispel the US's suspicion (which of course it would have voiced first). Do you really prefer the alternative that actually happened? That is, allow the USSR and many other countries to develop nuclear weapons and then depend on MAD and luck to prevent world destruction? Even if you personally do prefer this, it's hard to see how that was a rational choice for the US.

BTW, please stop editing so much! You're making me waste all my good retorts. :)

It seems equally rational for the US to have renounced its own nuclear program, thereby rendering it immune to the nuclear attacks of other nations. That is what you're saying, right? The only way for the USSR to be immune from nuclear attack would be to prove to the US that it didn't have a program. Ergo, the US could be immune to nuclear attack if it proved to the USSR that it didn't have a program.

Of course, that wouldn't ever deter the nuclear power from nuking the non-nuclear power. If the US prevented the USSR from developing nukes, it could hang the threat of nuclear war over them for as long as it liked in order to get what it wanted. Developing nuclear weapons was the only option the USSR had if it wanted to preserve its sovereignty. Therefore, threatening to nuke the USSR if it developed nukes would guarantee that you would nuke it if they didn't (i.e. use the nuke threat in every scenario, because why not?), which would force the USSR to develop nukes.

Expecting the USSR, a country every inch as nationalistic as the US, a country that just won a war against far worse odds than the US ever faced, to bend the knee is simply unrealistic.

Also, what would the long-term outcome be? Either the US rules the world through fear, or it nukes every country that ever inches toward nuclear weaponry and turns the planet into a smoky craphole. I'll take MAD any day; despite its obvious risks, it proved pretty stable.

Wei Dai (+2, 15y)
I think there is an equilibrium where the US promises not to use the threat of nukes for anything other than enforcing the no-nuclear-development policy and for obvious cases of self-defense, and it keeps this promise because to not do so would be to force other countries to start developing nukes. Also, I note that many countries do not have nukes today nor enjoy protection by a nuclear power, and the US does not use the threat of nuclear war against them in every scenario.

I think that proposed equilibrium would have been extremely unlikely under circumstances where the US (a) had abandoned their pre-war isolationist policies and (b) were about to embark on a mission of bending other nations, often through military force, to their will. Nukes had just been used to end a war with Japan. Why wouldn't the US use them to end the Korean war, for example? Or even to pre-empt it? Or to pre-empt any other conflict it had an interest in? The US acted incredibly aggressively when a single misstep could have sent Soviet missiles in their direction. How aggressive might it have been if there was no such danger? I think you underestimate how much of a show stopper nuclear weapons were in the 40s and 50s. There was no international terrorism or domestic activism that could exact punitive measures on those who threatened to use or used nukes.

Even though the cold war is long over, I am still disturbed by how many nuclear weapons there are in the world. Even so, I would much rather live in this climate than one in which only a single nation - a nation with a long history of interfering with other sovereign countries, a nation that is currently engaged in two wars of aggression - was the only nuclear power around.

Vladimir_Nesov (+6, 15y)
Given that there is a nontrivial chance that the policy won't be implemented reliably, and partially because of that the other side will fail to fear it properly, the expected utility of trying to implement this policy seems hideously negative (that is, there is a good chance a city will be nuked as a result, after which the policy crumbles under the public pressure, and after that everyone develops the technology).
Wei Dai (+1, 15y)
Ok, granted, but was the expected utility less than that of allowing everyone to develop nuclear weapons and then using a policy of MAD? Clearly MAD has a much lower utility if the policy fails, so the only way it could have been superior is if it was considered much more reliable. But why should that be the case? It seems to me that MAD is not very reliable at all, because the chance of error in launch detection is high (as illustrated by historical incidents) and the time to react is much shorter.
Vladimir_Nesov (+2, 15y)
The part you didn't quote addressed that: once this policy doesn't work out as planned, it crumbles and the development of nukes by everyone interested goes on as before. It isn't an alternative to MAD, because it won't actually work.
Wei Dai (+9, 15y)
Well, you said that it had a "good chance" of failing. I see your point if by "good chance" you meant probability close to 1. But if "good chance" is more like 50%, then it would still have been worth it. Let's say MAD had a 10% chance of failing:

  • EU(MAD) = 0.1 × U(world destruction)
  • EU(NH) = 0.5 × U(one city destroyed) + 0.05 × U(world destruction)

Then EU(MAD) < EU(NH) if U(world destruction) < 10 × U(one city destroyed).
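As a sanity check, the same arithmetic in a minimal sketch with made-up utilities (the -1 and -50 are arbitrary assumptions; only their ratio matters):

```python
# Hypothetical disutilities; only the ratio U_WORLD / U_CITY matters.
U_CITY = -1.0        # disutility of one city destroyed (assumption)
U_WORLD = -50.0      # disutility of world destruction (assumption)

P_MAD_FAIL = 0.10    # assumed chance MAD fails outright
P_NH_FAIL = 0.50     # assumed chance the nuclear-hegemony policy fails

eu_mad = P_MAD_FAIL * U_WORLD
# If the NH policy fails, one city is destroyed and we fall back on MAD.
eu_nh = P_NH_FAIL * U_CITY + P_NH_FAIL * P_MAD_FAIL * U_WORLD

print(f"EU(MAD) = {eu_mad:.2f}, EU(NH) = {eu_nh:.2f}")
# EU(MAD) < EU(NH) exactly when U_WORLD < 10 * U_CITY, i.e. when world
# destruction is more than ten times as bad as losing one city.
print("NH preferred" if eu_nh > eu_mad else "MAD preferred")
```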
UnholySmoke (+4, 15y)
Utility function fail?
cousin_it (+2, 15y)
I'm not sure everything would have happened as you describe, and thus not sure I prefer the alternative that actually happened. But your questions make me curious: do you also think the US was game-theoretically right to attack Iraq and will be right to attack Iran because those countries didn't do "whatever was necessary" to convince you they aren't developing WMDs?
Wei Dai (+7, 15y)
My understanding is that the Iraq invasion was done mainly to test the "spread democracy" strategy, which the Bush administration believed in, and WMDs were more or less an excuse. Since that didn't work out so well, there seems to be little chance that Iran will be attacked in a similar way. Game theoretically, physically invading a country to stop WMDs is much too costly, and not a credible threat, especially since lots of countries have already developed WMDs without being invaded.
khafra (+1, 11y)
I think that, if we stay out of the least convenient possible world, this is impractical because of the uncertainty of intel. In a world where there was genuine uncertainty whether Saddam Hussein was building WMD, it seems like it would be difficult to gain enough certainty to launch against another country in peacetime. At least, until that other country announced "we have 20 experimental nuclear missiles targeted at major US cities, and we're going to go ahead with our first full-scale test of a nuclear warhead. Your move." Today, we see this problem with attribution for computer network exploitation from (presumably) state actors. It's a reasonably good parallel to MAD, because we have offensive ability, but little defensive ability. In this environment, we haven't really seen computer network attacks used to control the development of intrusion/exploitation capabilities by state or even private actors (at least, as far as I know of).
Nornagest (+3, 11y)
ICBMs didn't exist at the time -- intercontinental capability didn't arrive until the Soviet R-7 missile in 1957, eight years after the first successful Russian nuclear test, and no missiles were tested with nuclear warheads until 1958 -- making the strategic picture dependent at least as much on air superiority as on the state of nuclear tech. Between geography and military focus, that would probably have given the United States a significant advantage if they'd chosen to pursue this avenue in the mid-to-late 1940s. On the other hand, intelligence services were pretty crude in some ways, too; my understanding is that the Russian atomic program was unknown to the American spook establishment until it was nearing completion.
[anonymous] (+1, 15y)
Sounds like Machiavelli: The Prince, Chapter 8
Dustin (0, 15y)
Agreed. Unfortunately, my time is short and my book list is long.
gjm (+7, 15y)
Read it anyway.

There seems to be a free online version of Schelling's book at http://www.questiaschool.com/read/94434630.

roland (+6, 15y)
It is free for a few pages... then you have to sign up, which is not free.
cousin_it (+3, 15y)
I have a .djvu version that I found ages ago in some torrent, if anyone's interested PM me and I will email it to you (no webserver handy ATM, sorry).
Alicorn (+5, 15y)
Send it to me, I'll host it on my site (for the time being, at least). alicorn@elcenia.com
cousin_it (+3, 15y)
Sent. Did you receive it?

I can't open the file myself, apparently, but I've uploaded it here.

Gordon Seidoh Worley (+7, 15y)
I have converted it to plain text and PDF for everyone's convenience. I don't care much for DjVu, even though it is a better format, because I have a much nicer viewer that I use that lets me quickly annotate a PDF. For now you can download it from here (this probably won't stay up forever, though):

  • edited to remove
  • edited to remove

If you don't want to leave them up: text and PDF. Thanks!

Gordon Seidoh Worley (+1, 15y)
Thanks. I have a monthly transfer quota and I already come close. A big PDF might have pushed me over.
saturn (+3, 15y)
Here's a free djvu viewer for most popular operating systems.
Alicorn (+2, 15y)
It's possible I just fail at downloading things, but the Mac version seems to be not there.
saturn (+1, 15y)
I've mirrored the latest mac and windows versions of the djvu viewer, along with the original and converted versions of the book, here
Bo102010 (+1, 15y)
It's because of Sourceforge's long spiral into uselessness. Google for the filename and you'll find a mirror.
roland (+1, 15y)
Drop us a note with a link when it's online.
Vladimir_Nesov (+4, 15y)
It's available on the Kad p2p network, as are most of the sufficiently popular technical books.
roryokane (+1, 11y)
I was able to download a copy from http://www.manyebooks.org/download/The_Strategy_of_Conflict.html, which links to http://www.en8848.com.cn/d/file/soft/Nonfiction/Obooks/201012/8bbb35724dcac415a5ecd74a62b2ba97.rar as the actual download link. That version is a 4.8 MB PDF file. It has equivalent image quality to the 17.4 MB PDF file hosted by Alicorn and is a smaller file.
[anonymous] (+1, 15y)
That link appears to lead to a copy of Nathaniel Hawthorne's "The Scarlet Letter".

The radio example is strangely apt, given that the most blatant manipulation of this sort I've experienced has involved people texting to say 'I'm already at [my preferred pub] for the evening: meet here? Sorry, but will be out of reception', or people emailing to ask you to deal with something and then their out-of-office reply appearing in response to yours.

RandomThinker (+2, 11y)
It's amazing how good humans are at this sort of thing, by instinct. I'm reading the book Hierarchy in the Forest, which is about tribal bands of humans up to 100k years ago. Without law and social structure, they basically solved all of their social equality problems by game theory. And depending on when precisely you think they evolved this social dynamic, they may have had hundreds of thousands of years to perfect it before we became hierarchical again. http://www.amazon.com/Hierarchy-Forest-Evolution-Egalitarian-Behavior/dp/0674006917

If you look at rationality on a spectrum, this type of game theory isn't the most enlightened/sophisticated form of it. Thugs, bullies, despots and drama queens are very good at this sort of manipulation. Rather, it's basically the most primitive, instinctive part of human reasoning. However, that's not to say it doesn't work. The original post's description of not wanting to look yourself in the mirror afterwards is very apt.

"Anyone, no matter how crazy, who you utterly and completely ignore will eventually stop bothering you." quote from memory from Spider Robinson, context was working in a mental hospital so escalation to violence wasn't a risk.

In the radio example, there is no way for me to convince you that the receive capability is truly broken. Given that, there is no reason for me to actually break the receive ability, and you should distrust any claim on my part that the receive ability has been broken.

But Schelling must have been able to follow this reasoning, so what point was he trying to illustrate with the radio example?

It can be difficult to pretend to be unable to hear someone on the other end of a two way communication. The impulse not to interrupt is strong enough to cause detectable irregularities in speech. Actually breaking, or at least turning off, the receive capability might be essential to maintaining the impression on the other end that it's broken.

Jonathan_Graehl (+3, 15y)
A banal observation: everyone is assuming that the radio speaker is disabled while I transmit (or that I use an earpiece that the microphone can't overhear). I'm guessing the first is actually the case with handheld radios.
wedrifid (+3, 15y)
It is difficult to consciously pretend. That's why our brains don't leave this particular gambit up to our consciousness. It does seem that this, as you say, involves genuinely breaking the receive capability, but evidently the actual cost in terms of information wasted is worth the price.
Technologos (+9, 15y)
Even if I distrust that you have a broken radio, as long as I prefer going to meet you (accepting the additional cost therein entailed) to never meeting you or meeting after an indefinitely long time, I will still go to wherever you say you are. If both people's radios are unbroken after the crash, whoever transmits the "receiver broken" signal probably gets the easier time of it. This game is essentially the (repeated?) game of chicken, as long as "claim broken receiver and other person capitulates" > "both players admit unbroken" > "capitulate to other person's claim" > "neither player capitulates while both claim broken receivers". Conveniently, this appears to be the broader point Schelling was trying to make. Flamboyant disabling of one's options often puts one in a better negotiating position. Hence, the American garrison in West Berlin.
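One way to check that this ordering really gives Chicken is to plug in any payoffs respecting it and look for pure-strategy equilibria; a minimal sketch (the specific numbers are my assumptions, only the ordering matters):

```python
from itertools import product

# Strategies: C = "claim broken receiver" (hold firm), S = "capitulate" (swerve).
# Payoffs encode the stated ordering with arbitrary numbers:
# win (3) > both admit unbroken (2) > capitulate (1) > mutual standoff (0).
payoff = {
    ("C", "C"): (0, 0),  # neither capitulates: worst for both
    ("C", "S"): (3, 1),  # my claim sticks, you hike to my hill
    ("S", "C"): (1, 3),
    ("S", "S"): (2, 2),  # both admit unbroken radios and coordinate
}

def pure_nash(payoff):
    """Return strategy profiles where neither player gains by deviating."""
    eqs = []
    for a, b in product("CS", repeat=2):
        best_a = all(payoff[(a, b)][0] >= payoff[(x, b)][0] for x in "CS")
        best_b = all(payoff[(a, b)][1] >= payoff[(a, y)][1] for y in "CS")
        if best_a and best_b:
            eqs.append((a, b))
    return eqs

print(pure_nash(payoff))  # [('C', 'S'), ('S', 'C')] -- the Chicken signature
```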
cousin_it (+5, 15y)
Actually, the book describes a psychological experiment conducted similarly to the situation I described, but there the subjects were told outright by the trustworthy experimenter that their partner couldn't learn their whereabouts. But I still think that average-rational humans would often fall for the radio trick. Expected utility suggests you don't have to believe your partner 100%; a small initial doubt reinforced over a day could suffice. And yep, the problem of making your precommitments trustworthy is also discussed in much detail in the book.
Wei Dai (+8, 15y)
There may be an interesting connection between this example and AIs knowing each other's source code. The idea is, if one AI can unilaterally prove its source code to another without the receiver being able to credibly deny receipt of the proof, then it should change its source code to commit to an unfair agreement that favors itself, then prove this. If it succeeds in being the first to do so, the other side then has no choice but to accept. So, Freaky Fairness seems to depend on the details of the proof process in some way.
orthonormal (+9, 15y)
This presumes that the other side obeys standard causal decision theory; in fact, it's an illustration of why causal decision theory is vulnerable to exploitation if precommitment is available, and suggests that two selfish rational CDT agents who each have precommitment options will generally wind up sabotaging each other. This is a reason to reject CDT as the basis for instrumental rationality, even if you're not worried that Omega is lurking around the corner.
Wei Dai (+4, 15y)
You can reject CDT but what are you going to replace it with? Until Eliezer publishes his decision theory and I have a chance to review it, I'm sticking with CDT. I thought cousin_it's result was really interesting because it seems to show that agents using standard CDT can nevertheless convert any game into a cooperative game, as long as they have some way to prove their source code to each other. My comment was made in that context, pointing out that the mechanism for proving source code needs to have a subtle property, which I termed "consensual".

One obvious "upgrade" to any decision theory that has such problems is to discard all of your knowledge (data, observations) before making any decisions (save for some structural knowledge to leave the decision algorithm nontrivial). For each decision that you make (using given decision algorithm) while knowing X, you can make a conditional decision (using the same decision algorithm) that says "If X, then A else B", and then recall whether X is actually true. This, for example, mends the particular failure of not being able to precommit (you remember that you are on the losing branch only after you've made the decision to do a certain disadvantageous action if you are on the losing branch).

Wei Dai (+3, 15y)
You can claim that you are using such a decision theory and hence that I should find your precommitments credible, but if you have no way of proving this, then I shouldn't believe you, since it is to your advantage to have me believe you are using such a decision theory without actually using it. From your earlier writings I think you might be assuming that AIs would be intelligent enough to just know what decision algorithms others are using, without any explicit proof procedure. I think that's an interesting possibility to consider, but not a very likely one. But maybe I'm missing something. If you wrote down any arguments in favor of this assumption, I'd be interested to see them.
Vladimir_Nesov (+4, 15y)
That was an answer for your question about what should you replace CDT with. If you won't be able to convince other agents that you now run on timeless CDT, you gain a little smaller advantage than otherwise, but that's a separate problem. If you know that your claims of precommitment won't be believed, you don't precommit, it's that easy. But sometimes, you'll find a better solution than if you only lived in a moment. Also note that even if you do convince other agents about the abstract fact that your decision theory is now timeless, it won't help you very much, since it doesn't prove that you'll precommit in a specific situation. You only precommit in a given situation if you know that this action makes the situation better for you, which in case of cooperation means that the other side will be able to tell whether you actually precommited, and this is not at all the same as being able to tell what decision theory you use. Since using a decision theory with precommitment is almost always an advantage, it's easy to assume that a sufficiently intelligent agent always uses something of the sort, but that doesn't allow you to know more about their actions -- in fact, you know less, since such agent has more options now.
Wei Dai (+4, 15y)
Yes, I see that your decision theory (is it the same as Eliezer's?) gives better solutions in the following circumstances:

  • dealing with Omega
  • dealing with copies of oneself
  • cooperating with a counterpart in another possible world

Do you think it gives better solutions in the case of AIs (who don't initially think they're copies of each other) trying to cooperate? If so, can you give a specific scenario and show how the solution is derived?
Eliezer Yudkowsky (+5, 15y)
Unless, of course, you already know that most AIs will go ahead and "suicidally" deny the unfair agreement.
cousin_it (+1, 15y)
Yes. In the original setting of FF the tournament setup enforces that everyone's true source code is common knowledge. Most likely the problem is hard to solve without at least a little common knowledge.
Wei Dai (+1, 15y)
Hmm, I'm not seeing what common knowledge has to do with it. Instead, what seems necessary is that the source code proving process must be consensual rather than unilateral. (The former has to exist, and the latter cannot, in order for FF to work.)

A model for a unilateral proof process would be a trustworthy device that accepts a string from the prover and then sends that string along with the message "1" to the receiver if the string is the prover's source code, and "0" otherwise.

A model for a consensual proof process would be a trustworthy device that accepts from the prover and verifier each a string, and sends a message "1" to both parties if the two strings are identical and represent the prover's source code, and "0" otherwise.
cousin_it (0, 15y)
In your second case one party can still cheat by being out of town when the "1" message arrives. It seems to me that the whole endeavor hinges on the success of the exchange being common knowledge.
Wei Dai (0, 15y)
I'm not getting you. Can you elaborate on which party can cheat, and how. And by "second case" do you mean the "unilateral" one or the "consensual" one?
cousin_it (0, 15y)
The "consensual" one. For a rigorous demonstration, imagine this: while preparing to play the Freaky Fairness game, I managed to install a subtle bug into the tournament code that will slightly and randomly distort all source code inputs passed to my algorithm. Then I submit some nice regular quining-cooperative program. In the actual game your program will assume I will cooperate, while mine will see you as a defector and play to win. When the game gives players an incentive to misunderstand, even a slight violation of "you know that I know that you know..." can wreak havoc, hence my emphasis on common knowledge.
Wei Dai (0, 15y)
I see what you're saying now, but this seems easy to prevent. Since you have changed your source code to FF, and I know you have, I can simply ask you whether you believe I am a defector, and treat you as a defector if you say "yes". I know your source code so I know you can't lie (specify Freaky Fairness to include this honesty). Doesn't that solve the problem?

ETA: There is still a chance of accidental miscommunication, but you no longer have an incentive to deliberately cheat.
cousin_it (+1, 15y)
In this solution you have an incentive to similarly be out of town when I say "no". Think through it recursively. Related topics: the two generals problem, two-phase commit.
Wei Dai (0, 15y)
Ok, let's say that two FFs can establish a cryptographically secure channel. The two players can each choose to block the channel at any time, but it can't read, inject, delete, or change the order of messages. Is that sufficient to make it arbitrarily unlikely for any player to put the FFs into a state where FF1 will treat FF2 as a cooperator, but FF2 will treat FF1 as a defector?

I think the answer is yes, using the following protocol: FF1 will start by sending a 1 or 0 (chosen randomly) to FF2. After that, each FF will send a 1 or 0 after it receives a 1 or 0 from the other, keeping the number of 1s sent no more than the number of 1s received plus one. If an FF receives N 1s before a time limit is reached, it will treat the other as a cooperator, otherwise as a defector. Now in order to cheat, a player would have to guess when to block the channel, and the probability of guessing the right time goes to 0 as N goes to infinity.

This is not necessarily the most efficient protocol, but it may be good enough as a proof of concept. On the other hand, the "merger by secure joint construction" approach seems to have the advantage of not having to deal with this problem. Or is there an analogous one that I'm not seeing?
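A rough simulation of this protocol, under my own modeling assumptions (a blind cheater who cannot read the encrypted channel and must guess the cut point uniformly, and an arbitrary time limit of 8N messages), suggests how the cheating probability shrinks with N:

```python
import random

def cheat_success(N, trials=20_000):
    """Estimate the chance that blocking the channel at a blindly guessed
    moment leaves FF1 having received N ones (so it treats FF2 as a
    cooperator) while FF2 has not (so it treats FF1 as a defector).
    Assumptions: the cheater can't read the encrypted channel, so the cut
    point is a uniform guess; the time limit is 8*N messages."""
    wins = 0
    horizon = 8 * N
    for _ in range(trials):
        cut = random.randrange(horizon)       # blind guess of when to block
        sent = [0, 0]                         # ones sent by FF1, FF2
        recv = [0, 0]                         # ones received by FF1, FF2
        turn = 0                              # FF1 sends first
        for step in range(horizon):
            if step == cut:
                break                         # channel blocked from here on
            # Send a random bit, keeping ones sent <= ones received + 1.
            bit = random.random() < 0.5 and sent[turn] <= recv[turn]
            sent[turn] += bit
            recv[1 - turn] += bit
            turn = 1 - turn
        if recv[0] >= N and recv[1] < N:      # the asymmetric outcome
            wins += 1
    return wins / trials

for N in (1, 2, 4, 8, 16):
    print(N, round(cheat_success(N), 4))      # shrinks as N grows
```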

I sometimes think of game theory as being roughly divided in three parts, like Gaul. There's competitive zero-sum game theory, there's cooperative game theory, and there are games where players compete but also have some shared interest

Almost all human interactions are the third type. In fact, I think of this not as 3 parts, but as one thing - strategy, which has (at least) 2 special cases that have been studied: zero-sum and cooperative positive-sum. These special cases are interesting not because they occur, but because they illuminate aspects of the whole.

I sometimes think of game theory as being roughly divided in three parts, like Gaul. There's competitive zero-sum game theory, there's cooperative game theory, and there are games where players compete but also have some shared interest.

I'm still trying to figure out a good description of "cooperative game theory". What do you think of this:

Cooperative game theory studies situations where agreements to cooperate can be enforced, and asks which agreements and outcomes will result. This typically involves considerations of individual rationality and fairness.

Alicorn (+2, 15y)
What do you mean by "enforced"?
Wei Dai (+4, 15y)
It means we can assume that once an agreement is made, all the agents will follow it. For example, the agreement may be a contract enforceable by law, or enforced by being provably coded into the agents' decision algorithms, or just by the physics of the situation like in my black hole example.
Jack (0, 15y)
I assume he means punishing defectors.

This seems interesting in the horrifying way I have been considering excising from myself due to the prevalence of hostile metastrategic bashes: that is, people find out you are the kind of person who flat-out welcomes game theory making a monster of em, and then refuse to deal with you; good day, enjoy being a sociopath, and without the charm, to boot.

I haven't read this book, but I can't see how Schelling would convincingly make this argument:

Leo Szilard has even pointed to the paradox that one might wish to confer immunity on foreign spies rather than subject them to prosecution, since they may be the only means by which the enemy can obtain persuasive evidence of the important truth that we are making no preparations for embarking on a surprise attack.

It's true that enemy spies can provide a useful function, in allowing you to credibly signal self-serving information. However, deliberate, public...

I don't quite see how conferring immunity on foreign spies would degrade the information they could access. Deliberately and openly feeding them information is going to be pointless, as they obviously can't trust you. But encouraging foreign spies by not prosecuting them should not negatively affect their ability to obtain and relay information.

SilasBarta (+1, 15y)
I still don't see it. It doesn't seem like any deliberate action could ever credibly signal "information only a spy could get". It's almost a problem of self-contradiction: "I'm trying to tell you something that I'm trying to hide from you." To put it in more concrete terms, what if one day the US lifted all protocols for protecting information at its military bases and defense contractors? Would foreign espionage agencies think, "woot! Motherlode!" Or would they think, "Sure, and it's probably all junk ... where are the real secrets?"
Cyan (+5, 15y)
The signal isn't to the opposing power -- it's to potential spies. You make recruiting easier for the opponent because you want to establish a fact about your plans and goals. The opponent will always have the problem of determining whether or not you're feeding its spies disinformation, but having more independent spies can help with that.
SilasBarta (15y):
So again, take it one step further: what would be wrong with subsidizing foreign spies? Say, pay a stipend to an account of their choice. Wouldn't that make people even more willing to be spies?
Cyan (15y):
That would probably work too, provided you could conclusively demonstrate that the payment system wasn't some kind of trap (to address the concerns of potential spies) or attempt at counter-recruitment (to address the concerns of the opponent). That seems more difficult than simply declaring a policy of immunity and demonstrating it by not prosecuting caught spies. ETA: Oh yeah, you also have to confirm that the people you are paying are actually doing the job you are paying them for, to wit, conveying accurate information to the opponent. It can't just be a "sign up for anonymous cash payments" scheme. I can't think of a way to simultaneously guarantee all these conditions, but if there is a way I'm not imaginative enough to see, then yeah, subsidizing the opponent's spies would work.
eirenicon (15y):
You're not aiding spies in getting information, you're just lowering the risk they take, which encourages more spying. Someone in high position could leak information, only risking being fired, not being shot. This does not change the reliability of the information, which, in spying, is always in question anyway.

I sometimes think of game theory as being roughly divided in three parts, like Gaul. There's competitive zero-sum game theory, there's cooperative game theory, and there are games where players compete but also have some shared interest. Except this third part isn't a middle ground. It's actually better thought of as ultra-competitive game theory. Zero-sum settings are relatively harmless: you minimax and that's it. It's the variable-sum games that make you nuke your neighbour.

Could you clarify that last bit for me? You seem to have a valid point but I...

cousin_it (15y):
Of course some real-world zero-sum games are ruthless too, but I haven't seen anything as bad as the nuke game, and it is variable-sum. Peace benefits everyone, but if one side in an arms race starts getting ahead, both sides know there will be war, which harms both. If the game were zero-sum, war would have happened long ago and with weaker weapons. The book gives an example of both Soviets and Americans expending effort on submarine-detection technologies while both desperately hoping that such technologies don't exist, because undetectable submarines with ICBMs are such a strong retaliation guarantee that no one attacks anyone.
wedrifid (15y):
Thanks, that makes sense. It also brings to mind some key points from Robin's talk on existential risks.
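
A crude way to see cousin_it's point is to compare payoff structures directly. The sketch below is mine, with invented toy numbers, not anything from the book: in the arms-race game, Arm is each side's best response no matter what the other does, so the only equilibrium is (Arm, Arm), even though (Hold, Hold) would be better for both. A zero-sum game cannot have that shape, since one side's gain is exactly the other's loss.

    # Toy payoffs as (row player, column player); numbers invented for illustration.
    arms_race = {
        ('Hold', 'Hold'): (3, 3),
        ('Hold', 'Arm'):  (0, 4),
        ('Arm',  'Hold'): (4, 0),
        ('Arm',  'Arm'):  (1, 1),
    }

    def pure_nash_equilibria(game, moves=('Hold', 'Arm')):
        """Profiles where neither player gains by unilaterally deviating."""
        eqs = []
        for r in moves:
            for c in moves:
                u_row, u_col = game[(r, c)]
                row_ok = all(game[(r2, c)][0] <= u_row for r2 in moves)
                col_ok = all(game[(r, c2)][1] <= u_col for c2 in moves)
                if row_ok and col_ok:
                    eqs.append((r, c))
        return eqs

    print(pure_nash_equilibria(arms_race))
    # [('Arm', 'Arm')] -- the unique equilibrium, though both prefer ('Hold', 'Hold').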
tuli (15y):
Just remember that once you nuke (that is, destroy) something, you have left the bounds of a zero-sum game and quite likely entered a negative-sum game (though you may end up with a positive outcome yourself, the sum is negative).
bentarm (15y):
Well, isn't this exactly the problem cousin_it is referring to when the game is non-zero-sum? It means that I might need to take 1000 utils from you in order to gain 50 utils for myself (or even: I might need to take 1000 utils from you in order to limit my losses to 50 utils).
wedrifid (15y):
It's possible that it will be negative-sum. It is also possible in principle that it has become positive-sum. The sign of the 'sum' doesn't actually seem to be the important part of the quoted context here; what matters is the presence or absence of a shared interest.
christopherj (10y):
In two-player zero-sum games, vengeance (hurting yourself to hurt the other more) is impossible, as are threats and destruction in general, because the total score is always the same. They are ruthless in that to gain score you must take it from the other player (which also eliminates cooperation), but there can be no nuking. If the game is variable-sum (or zero-sum with extra players), you regain the ability to unilaterally and unavoidably lower someone's score: the score can be destroyed in variable-sum games, or transferred to the other players in zero-sum games, allowing for vengeance, punishment, destruction, team cooperation, and so on.
wedrifid (10y):
I have reread the context and I find I concur with wedrifid_2009. Vengeance is impossible and threats are irrelevant, but destruction most certainly is not. Don't confuse the arbitrary constraint "the total score is always the same" with the notion that nothing 'destructive' can occur in such a game. What is prevented (to rational participants) is destruction for the purpose of game-theoretic influence. Consider a spherical cow in (a spaceship with me in a) vacuum. We are stranded and have a fixed reserve of energy. I am going to kill the spherical cow. I will dismember her. I will denature the proteins that make up her flesh. Then I will eat her. Because destroying her means I get to use all the energy and oxygen for myself, including the energy that was in the cow before I destroyed her. It's nothing personal. There was no threat. I was not retaliating. There was neither punishment nor cooperation. Just destruction. I.e., one of these things is not like the other things; one of these things just doesn't belong.
Lumifer (10y):
It may be that you're using a restrictive definition of zero-sum games, but generally speaking that is not true because of the difference between the final outcome and the intermediate score-keeping. Consider e.g. a fight to the death or a computer-game match with a clear winner. The outcome is zero-sum: one player wins, one player loses, the end. But in the process of the fight the score varies and things like hurting self to hurt the other more are perfectly possible and can be rational tactics.
Vaniver (10y):
I think you're mixing levels: in a match with a clear winner, "hurting self" properly means "make my probability of losing higher", not "reduce my in-game resources". I can't reduce my chance of winning in a way that reduces my opponent's chance of winning by more; the net effect would be increasing my chance of winning.
Lumifer (10y):
I am not so much mixing levels as pointing out that different levels exist.
christopherj (10y):
You're confusing yourself because you're mixing scoring systems: first you say that the game is zero-sum, win or lose, then you talk about variable-sum game resources. In a zero-sum game, the total score is always the same; you can steal points or give them away, but you can never destroy them. If the total score changes throughout the game, then you're not talking about a zero-sum game. There are no different levels, though you can play a zero-sum game as a variable-sum game (I won while at full health!).
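
The disagreement above is partly about bookkeeping, so here is a toy sketch of my own (invented numbers and move names, not from the thread) that makes the invariant explicit: a zero-sum move is a pure transfer and leaves the total untouched, while a variable-sum move can destroy score outright, which is what makes vengeance possible.

    # Toy scores; move names and numbers are invented for illustration.
    scores = {'you': 100, 'me': 100}

    def transfer(scores, frm, to, amount):
        """A zero-sum move: whatever one player loses, the other gains."""
        scores[frm] -= amount
        scores[to] += amount

    def nuke(scores, target, damage, self_damage):
        """A variable-sum move: score is destroyed, not transferred."""
        other = 'me' if target == 'you' else 'you'
        scores[target] -= damage
        scores[other] -= self_damage

    transfer(scores, 'you', 'me', 30)
    assert sum(scores.values()) == 200  # zero-sum moves preserve the total

    nuke(scores, 'you', damage=50, self_damage=10)
    assert sum(scores.values()) == 140  # the total itself went down

Whether a given game "is" zero-sum then depends on which layer you declare to be the score, which is the level distinction Lumifer and Vaniver are circling.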

I am in the middle of reading this book, due to this post. I strongly second the recommendation to read it.

Why should it be advantageous to break your receiver? You've been dropped in a wild, mountainous region, with only as many supplies as you can carry down on your parachute. Finding and coordinating with another human is to your advantage, even if you don't get extracted immediately upon meeting the objective. The wilderness is no place to sit down on a hilltop. You need to find food, water, shelter, and protection from predators, and doing this with someone else to help you is immensely easier. We formed tribes in the first place for exactly this reason.

Decius (11y):
Because both people are competent and capable of surviving alone, but it's more work to travel; by having a radio that transmits but doesn't receive, you can broadcast your location and the fact that you can't receive, and the other person is forced to come to you rather than meet you at the point halfway between the two of you.
[anonymous] (14y):

This is where rationality and logic meet. If, upon landing, you knew that the other person had broken their own radio in order to avoid work, you would most likely meet up with them anyway. Given that you are away from civilization and will only be picked up upon rendezvous, it is in your own best interest to meet, and then reveal the other's deception upon pickup. Even if you are not given their share, you still have your own, which was the original goal, and you have lost nothing. Also, the work it would have taken to reach a coordinate acceptable to both persons may equal or exceed the work of going straight to the other person.