Less Wrong is a community blog devoted to refining the art of human rationality.

Wei_Dai comments on Thomas C. Schelling's "Strategy of Conflict" - Less Wrong

76 Post author: cousin_it 28 July 2009 04:08PM

Comments (148)

Comment author: Wei_Dai 29 July 2009 12:32:41PM *  12 points [-]

Schelling was actually the less ruthless of the two pioneers of game theory. The other was John von Neumann, who advocated a unilateral nuclear attack on the USSR before it developed its own nuclear weapons.

One thing I don't understand: why didn't the US announce at the end of World War II that it would nuke any country that attempted to develop a nuclear weapon or conducted a nuclear bomb test? If it had done that, there would have been no need to actually nuke anyone. Was game theory invented too late?

Comment author: RichardKennaway 29 July 2009 01:30:48PM *  13 points [-]

You are the President of the US. You make this announcement. Two years later, your spies tell you that the UK has a well-advanced nuclear bomb research programme. The world is, nevertheless, as peaceful on the whole as in fact it was in the real timeline.

Do you nuke London?

Comment author: Wei_Dai 29 July 2009 02:03:41PM *  2 points [-]

I'd give the following announcement: "People of the UK, please vote your government out of office and shut down your nuclear program. If you fail to do so, we will start nuking the following sites in sequence, one per day, starting [some date]." Well, I'd go through some secret diplomacy first, but that would be my endgame if all else failed. Some backward induction should convince the UK government not to start the nuclear program in the first place.
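The backward-induction claim can be sketched as a toy sequential game. All payoffs below are invented purely for illustration (nothing in the thread supplies numbers): carrying out the threat is assumed worst for the developer, backing down is assumed to reward defiance, and the status quo is neutral.

```python
# Toy extensive-form game: H (the nuclear power) threatens D (a would-be
# developer). D moves first (develop or not); H moves last (nuke or back
# down). Payoff tuples are (H's utility, D's utility) -- assumed values.
NUKE = (-100, -1000)    # H carries out the threat
BACK_DOWN = (-50, 10)   # H blinks; D keeps its program
NO_PROGRAM = (0, 0)     # D never starts a program

def h_reply(committed):
    """H's move after D develops. A credible precommitment forces the
    threat; otherwise H picks whichever leaf is better for H."""
    return NUKE if committed else max(NUKE, BACK_DOWN, key=lambda o: o[0])

def d_choice(committed):
    """D anticipates H's reply (backward induction) and compares
    developing against staying out, by D's own payoff."""
    return max(h_reply(committed), NO_PROGRAM, key=lambda o: o[1])

print(d_choice(committed=False))  # (-50, 10): H would blink, so D develops
print(d_choice(committed=True))   # (0, 0): threat is credible, D stays out
```

Under these assumed payoffs, the threat deters only if it is credible; with no commitment, D correctly predicts that H prefers backing down to launching.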

Comment author: Larks 15 December 2010 11:37:54PM 5 points [-]

The UK bomb was developed with the express purpose of providing independence from the US. If the US could keep the USSR nuke-free, there'd be less need for a UK bomb. Also, it's possible that the US could have toned down its anti-imperialist rhetoric/covert funding so as not to threaten the Empire.

Comment author: UnholySmoke 29 July 2009 02:38:44PM 13 points [-]

I can think, straight away, of four or five reasons why this would have been very much the wrong thing to do.

  • You make an enemy of your biggest allies. Nukes or no, the US has never been more powerful than the rest of the world put together.
  • You don't react to coming out of one Cold War by initiating another.
  • This strategy is pointless unless you plan to follow through. The regime that laid down that threat would either be strung up when they launched, or voted straight out when they didn't.
  • Mutually assured destruction was what stopped nuclear war from happening. Setting one country up as the Guardian of the Nukes is stupid, even if you are that country. I'm not a Yank, but I believe this sort of idea is pretty big in the Constitution.
  • Attacking London is a shortcut to getting a pounding. This one's just conjecture.

Basically he was about ruthlessness for the good of humanity.

Yeah, I think the clue is in there. Better to be about the good of humanity, and ruthless if that's what's called for. Setting yourself up as 'the guy who has the balls to make the tough decisions' usually marks you as a nutjob. Case in point: von Neumann suggesting launching was the right strategy. I don't think anyone would argue today that he was right, though back then the decision must have seemed pretty much impossible to make.

Comment author: orthonormal 29 July 2009 06:41:52PM 20 points [-]

Case in point: von Neumann suggesting launching was the right strategy. I don't think anyone would argue today that he was right, though back then the decision must have seemed pretty much impossible to make.

Survivorship bias. There were some very near misses (Cuban Missile Crisis, Stanislav Petrov, etc.), and it seems reasonable to conclude that a substantial fraction of the Everett branches that came out of our 1946 included a global thermonuclear war.

I'm not willing to conclude that von Neumann was right, but the fact that we avoided nuclear war isn't clear proof he was wrong.

Comment author: Vladimir_Nesov 29 July 2009 03:28:49PM 1 point [-]

You make an enemy of your biggest allies.

If the allies are rational, they should agree that it's in their interest to establish this strategy. Everyone's common enemy is all-out nuclear war.

Comment author: James_K 29 July 2009 10:13:39PM 11 points [-]

This strikes me as a variant of the <a href="http://en.wikipedia.org/wiki/Ultimatum_game">ultimatum game</a>. The allies would have to accept a large asymmetry of power. If even one of them rejects the ultimatum you're stuck with the prospect of giving up your strategy (having burned most or all of your political capital with other nations), or committing mass murder.

When you add in the inability of governments to make binding commitments, this doesn't strike me as a viable strategy.

Comment author: Vladimir_Nesov 29 July 2009 10:41:00PM 5 points [-]

Links in the Markdown syntax are written like this:

[ultimatum game](http://en.wikipedia.org/wiki/Ultimatum_game)

Comment author: RichardKennaway 29 July 2009 02:18:25PM 6 points [-]

The entire civilised world (which at this point does not include anyone who is still a member of the US government) is in uproar. Your attempts at secret diplomacy are leaked immediately. The people of the UK make tea in your general direction. Protesters march on the White House.

When do you push the button, and how will you keep order in your own country afterwards?

What I'm really getting at here is that your bland willingness to murder millions of non-combatants of a friendly power in peacetime because they do not accede to your empire-building unfits you for inclusion in the human race.

Also, that it's easy to win these games in your imagination. You just have to think, I will do this, and then my opponent must rationally do that. You have a completely watertight argument. Then your opponent goes and does something else. It does not matter that you followed the rules of the logical system if the system itself is inconsistent.

Comment author: loqi 26 April 2011 01:34:43AM 11 points [-]

So says the man from his comfy perch in an Everett branch that survived the cold war.

What I'm really getting at here is that [a comment you made on LW] unfits you for inclusion in the human race.

Downvoted for being one of the most awful statements I have ever seen on this site, far and away the most awful to receive so many upvotes. What the fuck, people.

Comment author: shokwave 26 April 2011 02:21:41AM 4 points [-]

I doubt RichardKennaway believes Wei_Dai is unfit for inclusion in the human race. What he was saying, and what he received upvotes for, is that anyone who's blandly willing to murder millions of non-combatants of a friendly power in peacetime because they do not accede to empire-building is unfit for inclusion in the human race - and he's right, that sort of person should not be fit for inclusion in the human race. A comment on LW is not the same as that bland willingness to slaughter, and you do yourself no favours by incorrectly paraphrasing it as such.

Comment author: Wei_Dai 26 April 2011 06:38:36AM 9 points [-]

anyone who's blandly willing to murder millions of non-combatants of a friendly power in peacetime because they do not accede to empire-building is unfit for inclusion in the human race

You do realize that the point of my proposed strategy was to prevent the destruction of Earth (from a potential nuclear war between the US and USSR), and not "empire building"?

I don't understand why Richard and you consider MAD acceptable, but my proposal beyond the pale. Both of you use the words "friendly power in peacetime", which must be relevant somehow but I don't see how. Why would it be ok (i.e., fit for inclusion in the human race) to commit to murdering millions of non-combatants of an enemy power in wartime in order to prevent nuclear war, but not ok to commit to murdering millions of non-combatants of a friendly power in peacetime in service of the same goal?

A comment on LW is not the same as that bland willingness to slaughter, and you do yourself no favours by incorrectly paraphrasing it as such.

I also took Richard's comment personally (he did say "your bland willingness", emphasis added), which is probably why I didn't respond to it.

Comment author: JoshuaZ 28 April 2011 04:16:15AM *  7 points [-]

The issue seems to be that nuking a friendly power in peacetime feels to people pretty much like a trolley problem where you need to shove the fat person. In this particular case, since it isn't a hypothetical, the situation has been made all the more complicated by actual discussion of the historical and current geopolitics surrounding it (which essentially amounts to trying to find a clever solution to the trolley problem, or arguing that the fat person wouldn't weigh enough). The reaction is against your apparent strong consequentialism, along with the fact that your strategy wouldn't actually work given the geopolitical situation. It might be interesting to pose an explicitly hypothetical geopolitical situation where this would work and see how people respond.

Comment author: shokwave 29 April 2011 02:26:29AM 1 point [-]

I also took Richard's comment personally (he did say "your bland willingness", emphasis added), which is probably why I didn't respond to it.

Well, this is evidence against using second-person pronouns to avoid "he/she".

Comment author: JoshuaZ 29 April 2011 02:39:03AM 0 points [-]

He could easily have said "bland willingness to" rather than "your bland willingness" so that doesn't seem to be an example where a pronoun is necessary.

Comment author: shokwave 29 April 2011 02:45:19AM 0 points [-]

No, it's an example where using "you" has caused someone to take something personally. Given that the "he/she" problem is that some people take it personally, I haven't solved the problem, I've just shifted it onto a different group of people.

Comment author: loqi 26 April 2011 06:53:40AM 7 points [-]

I was commenting on what he said, not guessing at his beliefs.

I don't think you've made a good case (any case) for your assertion concerning who is and is not to be included in our race. And it's not at all obvious to me that Wei Dai is wrong. I do hope that my lack of conviction on this point doesn't render me unfit for existence.

Anyone willing to deploy a nuclear weapon has a "bland willingness to slaughter". Anyone employing MAD has a "bland willingness to destroy the entire human race".

I suspect that you have no compelling proof that Wei Dai's hypothetical nuclear strategy is in fact wrong, let alone one compelling enough to justify the type of personal attack leveled by RichardKennaway. Would you also accuse Eliezer of displaying a "bland willingness to torture someone for 50 years" and sentence him to exclusion from humanity?

Comment author: shokwave 29 April 2011 02:23:40AM *  3 points [-]

What I was saying was that a horrendous act is not the same as a comment advising a horrendous act in a hypothetical situation. You conflated the two by paraphrasing RichardKennaway's comment as "comment advising horrendous act in hypothetical situation unfits you for inclusion in the human race" when what he was saying was "horrendous act unfits you for inclusion in the human race".

Comment author: RichardKennaway 27 April 2011 10:00:35PM *  -2 points [-]

I was rather intemperate, and on a different day maybe I would have been less so; or maybe I wouldn't. I am sorry that I offended Wei Dai.

But then, Wei Dai's posting was intemperate, as is your comment. I mention this not to excuse mine, just to point out how easily this happens. This may be partly the dynamics of the online medium, but in the present case I think it is also because we are dealing in fantasy here, and fantasy always has to be more extreme than reality, to make up for its own unreality.

You compare the problem to Eliezer's one of TORTURE vs SPECKS, but there is an important difference between them. TORTURE vs SPECKS is fiction, while Wei Dai spoke of an actual juncture in history in living memory, and actions that actually could have been taken.

What is the TORTURE vs SPECKS problem? The formulation of the problem is at that link, but what sort of thing is this problem? Given the followup posting the very next day, it seems likely to me that the intention was to manifest people's reactions to the problem. Perhaps it is also a touchstone, to see who has and who has not learned the material on which it stands. What it is not is a genuine problem which anyone needs to solve as anything but a thought experiment.

TORTURE vs SPECKS is not going to happen. Other tradeoffs between great evil to one and small evils to many do happen; this one never will. While 50 years of torture is, regrettably, conceivably possible here and now in the real world, and may be happening to someone, somewhere, right now, there is no possibility of 3^^^3 specks. Why 3^^^3? Because that is intended to be a number large enough to produce the desired conclusion. Anyone whose objection is that it isn't a big enough number, besides manifesting a poor grasp of its magnitude, can simply add another uparrow.

The problem is a fictional one, and as such exhibits the reverse meta-causality characteristic of fiction: 3^^^3 is in the problem because the point of the problem is for the solution to be TORTURE; that TORTURE is the solution is not caused by an actual possibility of 3^^^3 specks.

In another posting a year later, Eliezer speaks of ethical rules of the sort that you just don't break, as safety rails on a cliff he didn't see. This does not sit well with the TORTURE vs SPECKS material, but it doesn't have to: TORTURE vs SPECKS is fiction and the later posting is about real (though unspecified) actions.

So, the Cold War. Wei Dai would have the US after WWII threatening to nuke any country attempting to develop or test nuclear weapons. To the scenario of later discovering that (for example) the UK has a well-developed covert nuclear program, he responds:

I'd give the following announcement: "People of the UK, please vote your government out of office and shut down your nuclear program. If you fail to do so, we will start nuking the following sites in sequence, one per day, starting [some date]." Well, I'd go through some secret diplomacy first, but that would be my endgame if all else failed. Some backward induction should convince the UK government not to start the nuclear program in the first place.

It should, should it? And that, in Wei's mind, is adequate justification for pressing the button to kill millions of people for not doing what he told them to do. Is this rationality, or the politics of two-year-olds with nukes?

I seem to be getting intemperate again.

It's a poor sort of rationality that only works against people rational enough to lose. Or perhaps they can be superrational and precommit to developing their programme regardless of what threats you make? Then rationally, you must see that it would therefore be futile to make such threats. And so on. How's TDT/UDT with self-modifying agents modelling themselves and each other coming along?

This is fantasy masquerading as rationality. I stand by this that I said back then:

[I]t's easy to win these games in your imagination. You just have to think, I will do this, and then my opponent must rationally do that. You have a completely watertight argument. Then your opponent goes and does something else. It does not matter that you followed the rules of the logical system if the system itself is inconsistent.

To make these threats, you must be willing to actually do what you have said you will do if your enemy does not surrender. The moment you think "but rationally he has to surrender, so I won't have to do this", you are making an excuse for yourself not to carry it out. Whatever belief you can muster that you would carry it out will evaporate like dew in the desert when the time comes.

How are you going to launch those nukes, anyway?

Comment author: loqi 28 April 2011 04:09:01AM 6 points [-]

But then, Wei Dai's posting was intemperate, as is your comment. I mention this not to excuse mine, just to point out how easily this happens.

Using the word "intemperate" in this way is a remarkable dodge. Wei Dai's comment was entirely within the scope of the (admittedly extreme) hypothetical under discussion. Your comment contained a paragraph composed solely of vile personal insult and slanted misrepresentation of Wei Dai's statements. The tone of my response was deliberate and quite restrained relative to how I felt.

This may be partly the dynamics of the online medium, but in the present case I think it is also because we are dealing in fantasy here, and fantasy always has to be more extreme than reality, to make up for its own unreality.

Huh? You're "not excusing" the extremity of your interpersonal behavior on the grounds that the topic was fictional, and fiction is more extreme than reality? And then go on to explain that you don't behave similarly toward Eliezer with respect to his position on TORTURE vs SPECKS because that topic is even more fictional?

Is this rationality, or the politics of two-year-olds with nukes?

Is this a constructive point, or just more gesturing?

As for the rest of your comment: Thank you! This is the discussion I wanted to be reading all along. Aside from a general feeling that you're still not really trying to be fair, my remaining points are mercifully non-meta. To dampen political distractions, I'll refer to the nuke-holding country as H, and a nuke-developing country as D.

You're very focused on Wei Dai's statement about backward induction, but I think you're missing a key point: His strategy does not depend on D reasoning the way he expects them to, it's just heavily optimized for this outcome. I believe he's right to say that backward induction should convince D to comply, in the sense that it is in their own best interest to do so.

Or perhaps they can be superrational and precommit to developing their programme regardless of what threats you make? Then rationally, you must see that it would therefore be futile to make such threats.

Don't see how this follows. If both countries precommit, D gets bombed until it halts or otherwise cannot continue development. While this is not H's preferred outcome, H's entire strategy is predicated on weighing irreversible nuclear proliferation and its consequences more heavily than the millions of lives lost in the event of a suicidal failure to comply. In other words, D doesn't wield sufficient power in this scenario to affect H's decision, while H holds sufficient power to skew local incentives toward mutually beneficial outcomes.

Speaking of nuclear proliferation and its consequences, you've been pretty silent on this topic considering that preventing proliferation is the entire motivation for Wei Dai's strategy. Talking about "murdering millions" without at least framing it alongside the horror of proliferation is not productive.

How are you going to launch those nukes, anyway?

Practical considerations like this strike me as by far the best arguments against extreme, theory-heavy strategies. Messy real-world noise can easily make a high-stakes gambit more trouble than it's worth.

Comment author: RichardKennaway 28 April 2011 02:26:41PM 3 points [-]

Is this rationality, or the politics of two-year-olds with nukes?

Is this a constructive point, or just more gesturing?

It is a gesture concluding a constructive point.

You're very focused on Wei Dai's statement about backward induction, but I think you're missing a key point: His strategy does not depend on D reasoning the way he expects them to, it's just heavily optimized for this outcome. I believe he's right to say that backward induction should convince D to comply, in the sense that it is in their own best interest to do so.

This is a distinction without a difference. If H bombs D, H has lost (and D has lost more).

If both countries precommit, D gets bombed until it halts or otherwise cannot continue development.

That depends on who precommits "first". That's a problematic concept for rational actors who have plenty of time to model each other's possible strategies in advance of taking action. If H, without even being informed of it by D, considers this possible precommitment strategy of D, is it still rational for H to persist and threaten D anyway? Or perhaps H can precommit to ignoring such a precommitment by D? Or should D already have anticipated H's original threat and backed down in advance of the threat ever having been made? I am reminded of the Forbidden Topic. Counterfactual blackmail isn't just for superintelligences. As I asked before, does the decision theory exist yet to handle self-modifying agents modelling themselves and each other, demonstrating how real actions can arise from this seething mass of virtual possibilities?

Then also, in what you dismiss as "messy real-world noise", there may be a lot of other things D might do, such as fomenting insurrection in H, or sharing their research with every other country besides H (and blaming foreign spies), or assassinating H's leader, or doing any and all of these while overtly appearing to back down.

The moment H makes that threat, the whole world is H's enemy. H has declared a war that it hopes to win by the mere possession of overwhelming force.

Speaking of nuclear proliferation and its consequences, you've been pretty silent on this topic considering that preventing proliferation is the entire motivation for Wei Dai's strategy. Talking about "murdering millions" without at least framing it alongside the horror of proliferation is not productive.

I look around at the world since WWII and fail to see this horror. I look at Wei Dai's strategy and see the horror. loqi remarked about Everett branches, but imagining the measure of the wave function where the Cold War ended with nuclear conflagration fails to convince me of anything.

Comment author: loqi 28 April 2011 10:22:48PM *  4 points [-]

This is a distinction without a difference. If H bombs D, H has lost

This assumption determines (or at least greatly alters) the debate, and you need to make a better case for it. If H really "loses" by bombing D (meaning H considers this outcome less preferable than proliferation), then H's threat is not credible, and the strategy breaks down, no exotic decision theory necessary. Looks like a crucial difference to me.

That depends on who precommits "first". [...]

This entire paragraph depends on the above assumption. If I grant you that assumption and (artificially) hold constant H's intent to precommit, then we've entered the realm of bluffing, and yes, the game tree gets pathological.

loqi remarked about Everett branches, but imagining the measure of the wave function where the Cold War ended with nuclear conflagration fails to convince me of anything.

My mention of Everett branches was an indirect (and counter-productive) way of accusing you of hindsight bias.

Your talk of "convincing you" is distractingly binary. Do you admit that the severity and number of close calls in the Cold War is relevant to this discussion, and that these are positively correlated with the underlying justification for Wei Dai's strategy? (Not necessarily its feasibility!)

I look around at the world since WWII and fail to see this horror. I look at Wei Dai's strategy and see the horror.

Let's set aside scale and comparisons for a moment, because your position looks suspiciously one-sided. You fail to see the horror of nuclear proliferation? If I may ask, what is your estimate for the probability that a nuclear weapon will be deployed in the next 100 years? Did you even ask yourself this question, or are you just selectively attending to the low-probability horrors of Wei Dai's strategy?

Then also, in what you dismiss as "messy real-world noise"

Emphasis mine. You are compromised. Please take a deep breath (really!) and re-read my comment. I was not dismissing your point in the slightest, I was in fact stating my belief that it exemplified a class of particularly effective counter-arguments in this context.

Comment author: Viliam_Bur 05 September 2011 11:11:14AM 2 points [-]

You just have to think, I will do this, and then my opponent must rationally do that. You have a completely watertight argument. Then your opponent goes and does something else.

A model of reality that assumes an opponent must be rational is an incorrect model. At best, it is a good approximation that might luckily return a correct answer in some situations.

I think this is a frequent bias for smart people -- assuming that (1) my reasoning is flawless, and (2) my opponent is on the same rationality level as me, therefore (3) my opponent must have the same model of situation as me, therefore (4) if I rationally predict that it is best for my opponent to do X, my opponent will really do X. And then my opponent does non-X, and I am like: WTF?!

Comment author: Vladimir_Nesov 29 July 2009 02:24:56PM 3 points [-]

because they do not accede to your empire-building

Fail.

Comment author: Kaj_Sotala 29 July 2009 10:20:21PM 3 points [-]

I think that, by the time you've reached the point where you're about to kill millions for the sake of the greater good, you'd do well to consider all the ethical injunctions this violates. (Especially given all the different ways this could go wrong that UnholySmoke could come up with off the top of his head.)

Comment author: Wei_Dai 31 July 2009 07:58:31AM 10 points [-]

Kaj, I was discussing a hypothetical nuclear strategy. We can't discuss any such strategy without involving the possibility of killing millions. Do the ethical injunctions imply that such discussions shouldn't occur?

Recall that MAD required that the US commit itself to destroy the Soviet Union if it detected that the USSR launched their nuclear missiles. Does MAD also violate ethical injunctions? Should it also not have been discussed? (How many different ways could things have gone wrong with MAD?)

Comment author: Kaj_Sotala 02 August 2009 08:14:48PM *  2 points [-]

Do the ethical injunctions imply that such discussions shouldn't occur?

Of course not. I'm not saying the strategy shouldn't be discussed, I'm saying that you seem to be expressing greater certainty of your proposed approach being correct than would be warranted.

(I wouldn't object to people discussing math, but I would object if somebody thought 2 + 2 = 5.)

Comment author: handoflixue 08 August 2011 10:44:19PM 0 points [-]

Recall that MAD required that the US commit itself to destroy the Soviet Union if it detected that the USSR launched their nuclear missiles

And the world as we know it is still around because Stanislav Petrov ignored that order and insisted the US couldn't possibly be stupid enough to actually launch that sort of attack.

I would pray that the US operators were equally sensible, but maybe they just got lucky and never had a technical glitch threaten the existence of humanity.

Comment author: BronecianFlyreme 10 December 2013 05:46:36AM 0 points [-]

Interestingly, it seems to me that the most convenient solution to this problem would be to find some way to make yourself incapable of not nuking anyone who built a nuke. I don't think it's really feasible, but I thought it was worth mentioning because it matches the article so closely.

Comment author: RichardKennaway 10 December 2013 08:12:19AM 0 points [-]

I'm sure all extortionists would find it very convenient to be able to say to their victims while breaking their legs, "It's you that's doing this, not me!" And to have the courts accept that as a valid defence, and jail the victim for committing assault on themselves. But the fact is, we cannot conduct brain surgery on ourselves to excise our responsibility. Is it an ability to be desired?

Comment author: Lumifer 10 December 2013 05:31:57PM 2 points [-]

I'm sure all extortionists would find it very convenient to be able to say to their victims while breaking their legs, "It's you that's doing this, not me!" And to have the courts accept that as a valid defence, and jail the victim for committing assault on themselves.

You probably thought you were kidding. Not.

Comment author: eirenicon 29 July 2009 03:04:06PM *  7 points [-]

At the end of WWII, the US's nuclear arsenal was still small and limited. The declaration of such a threat would have made it worth the risk for the USSR to dramatically ramp up their nuclear weapons research, which had been ongoing since 1942. The Soviets tested their first nuke in 1949; at that point or any time earlier, it would have been too late for the US to follow through. They would've had to drop the Marshall Plan and risk starting another "hot war". With their European allies, especially the UK, still struggling economically, the outcome would have been far from assured.

Comment author: CronoDAS 31 July 2009 12:30:03AM 5 points [-]

As a practical matter, this would not have been possible. At the end of World War II, the U.S. didn't have enough nuclear weapons to do much more than threaten to blow up a city or two. Furthermore, intercontinental ballistic missiles didn't exist yet; the only way to get a nuclear bomb to its target was to put it in an airplane and hope it didn't get shot down before it reached its destination.

Comment author: Wei_Dai 31 July 2009 01:08:27AM *  14 points [-]

According to this book, in May 1949 (months before the Soviets' first bomb test), the US had 133 nuclear bombs and a plan (in case of war) to bomb 70 Soviet cities, but concluded that this was probably insufficient to "bring about capitulation". The book also mentions that the US panicked and sped up the production of nuclear bombs after the Soviet bomb test, so if it had done that earlier, perhaps it would have had enough bombs to deter the Soviets from developing them.

Also, according to this article, the idea of using nuclear weapons to deter the development/testing of fusion weapons was actually proposed, by I. I. Rabi and Enrico Fermi:

They believed that any nation that violated such a prohibition would have to test a prototype weapon; this would be detected by the US and retaliation using the world’s largest stock of atomic bombs should follow. Their proposal gained no traction.

Comment author: thomblake 31 July 2009 03:44:26PM 0 points [-]

But at the end of the war, the US had cybernetic anti-aircraft guns, developed for the Pacific War, which the Russians did not have. They had little chance of shooting down our planes using manual sighting.

Comment author: irarseil 30 June 2012 03:14:23PM 4 points [-]

I think you should be aware that lesswrong is read in countries other than the USA, and writing about "our planes" in a forum where not everyone is American to mean "American planes" can lead to misunderstandings or can discourage others from taking part in the conversation.

Comment author: cousin_it 29 July 2009 12:55:23PM *  3 points [-]

How would the US detect attempts to develop nuclear weapons before any tests took place? Should they have nuked the USSR on a well-founded suspicion?

Comment author: Wei_Dai 29 July 2009 01:33:49PM 8 points [-]

How would the US detect attempts to develop nuclear weapons before any tests took place? Should they have nuked the USSR on a well-founded suspicion?

I think from a rational perspective, the answer must be yes. Under this hypothetical policy, if the USSR didn't want to be nuked, then it would have done whatever was necessary to dispel the US's suspicion (which of course it would have voiced first).

Do you really prefer the alternative that actually happened? That is, allow the USSR and many other countries to develop nuclear weapons and then depend on MAD and luck to prevent world destruction? Even if you personally do prefer this, it's hard to see how that was a rational choice for the US.
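The deterrence argument here is essentially a two-move game solved by backward induction: the USSR anticipates the US's committed response before deciding whether to develop. A minimal sketch, with entirely made-up illustrative payoffs (not from the thread):

```python
# Hypothetical sketch of the backward-induction argument: the USSR moves
# first (develop a bomb or not); the US then carries out its announced
# commitment. Payoff numbers are illustrative assumptions only,
# given as (ussr_payoff, us_payoff).

def us_response(ussr_develops):
    """US follows its announced commitment: nuke iff the USSR develops."""
    if ussr_develops:
        return "nuke", (-100, -10)  # USSR devastated; US pays a political cost
    return "hold", (0, 0)           # status quo

def ussr_best_move():
    """USSR anticipates the US response and picks its best payoff."""
    options = {}
    for develops in (True, False):
        _, (ussr_payoff, _) = us_response(develops)
        options[develops] = ussr_payoff
    return max(options, key=options.get)

print(ussr_best_move())  # False: if the threat is believed, developing is deterred
```

The whole argument, of course, rests on the US's threat being credible, which is exactly what the replies below dispute.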

BTW, please stop editing so much! You're making me waste all my good retorts. :)

Comment author: Vladimir_Nesov 29 July 2009 01:55:47PM *  5 points [-]

Given that there is a nontrivial chance that the policy won't be implemented reliably, and partially because of that the other side will fail to fear it properly, the expected utility of trying to implement this policy seems hideously negative (that is, there is a good chance a city will be nuked as a result, after which the policy crumbles under the public pressure, and after that everyone develops the technology).

Comment author: Wei_Dai 29 July 2009 03:24:00PM 2 points [-]

Given that there is a nontrivial chance that the policy won't be implemented reliably, and partially because of that the other side will fail to fear it properly, the expected utility of trying to implement this policy seems hideously negative

Ok, granted, but was the expected utility less than allowing everyone to develop nuclear weapons and then using a policy of MAD? Clearly MAD has a much lower utility if it fails, so the only way it could have been superior is if it were considered much more reliable. But why should that be the case? It seems to me that MAD is not very reliable at all, because the chance of error in launch detection is high (as illustrated by historical incidents) and the time to react is much shorter.

Comment author: Vladimir_Nesov 29 July 2009 03:32:40PM *  1 point [-]

The part you didn't quote addressed that: once this policy doesn't work out as planned, it crumbles and the development of nukes by everyone interested goes on as before. It isn't an alternative to MAD, because it won't actually work.

Comment author: Wei_Dai 29 July 2009 04:39:02PM *  10 points [-]

Well, you said that it had a "good chance" of failing. I see your point if by "good chance" you meant probability close to 1. But if "good chance" is more like 50%, then it would still have been worth it. Let's say MAD had a 10% chance of failing:

  • EU(MAD) = .1 * U(world destruction)
  • EU(NH) = .5 * U(one city destroyed) + .05 * U(world destruction)

Then EU(MAD) < EU(NH) if U(world destruction) < 10 U(one city destroyed) (both utilities being negative, this says world destruction is more than ten times as bad as losing one city).
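The comparison above can be checked numerically. A small sketch, where NH stands for the proposed nuke-to-deter policy and the utility values are assumptions chosen only to make the 10x threshold visible (both are negative, which is what makes the inequality direction work out):

```python
# Illustrative check of the expected-utility comparison in the comment.
# Utility values are assumed for illustration only.
U_CITY = -1.0     # U(one city destroyed)
U_WORLD = -100.0  # U(world destruction); note -100 < 10 * (-1)

def eu_mad(p_fail=0.1):
    """Expected utility of MAD with a 10% chance of failure."""
    return p_fail * U_WORLD

def eu_nh(p_policy_fails=0.5, p_mad_fails=0.1):
    """Expected utility of the NH policy: a 50% chance one city is nuked,
    after which the policy crumbles and the world falls back on MAD."""
    return p_policy_fails * U_CITY + p_policy_fails * p_mad_fails * U_WORLD

print(eu_mad())  # -10.0
print(eu_nh())   # -5.5 -> EU(MAD) < EU(NH), so NH comes out ahead here
```

With these numbers, world destruction is 100 times as bad as losing one city, which clears the 10x threshold, so the policy beats MAD; setting U_WORLD = -5.0 instead would flip the conclusion.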

Comment author: eirenicon 30 July 2009 02:08:10AM 9 points [-]

It seems equally rational for the US to have renounced its own nuclear program, thereby rendering it immune to the nuclear attacks of other nations. That is what you're saying, right? The only way for the USSR to be immune from nuclear attack would be to prove to the US that it didn't have a program. Ergo, the US could be immune to nuclear attack if it proved to the USSR that it didn't have a program. Of course, that wouldn't ever deter the nuclear power from nuking the non-nuclear power. If the US prevented the USSR from developing nukes, it could hang the threat of nuclear war over them for as long as it liked in order to get what it wanted. Developing nuclear weapons was the only option the USSR had if it wanted to preserve its sovereignty. Therefore, threatening to nuke the USSR if it developed nukes would guarantee that you would hold that threat over it even if it didn't (i.e. use the nuke threat in every scenario, because why not?), which would force the USSR to develop nukes. Expecting the USSR, a country every inch as nationalistic as the US, a country that just won a war against far worse odds than the US ever faced, to bend the knee is simply unrealistic.

Also, what would the long-term outcome be? Either the US rules the world through fear, or it nukes every country that ever inches toward nuclear weaponry and turns the planet into a smoky craphole. I'll take MAD any day; despite its obvious risks, it proved pretty stable.

Comment author: Wei_Dai 30 July 2009 07:58:10PM 3 points [-]

I think there is an equilibrium where the US promises not to use the threat of nukes for anything other than enforcing the no-nuclear-development policy and for obvious cases of self-defense, and it keeps this promise because to not do so would be to force other countries to start developing nukes.

Also, I note that many countries do not have nukes today nor enjoy protection by a nuclear power, and the US does not use the threat of nuclear war against them in every scenario.

Comment author: eirenicon 30 July 2009 08:17:54PM *  9 points [-]

I think that proposed equilibrium would have been extremely unlikely under circumstances where the US (a) had abandoned their pre-war isolationist policies and (b) were about to embark on a mission of bending other nations, often through military force, to their will. Nukes had just been used to end a war with Japan. Why wouldn't the US use them to end the Korean war, for example? Or even to pre-empt it? Or to pre-empt any other conflict it had an interest in? The US acted incredibly aggressively when a single misstep could have sent Soviet missiles in their direction. How aggressive might it have been if there was no such danger? I think you underestimate how much of a show stopper nuclear weapons were in the 40s and 50s. There was no international terrorism or domestic activism that could exact punitive measures on those who threatened to use or used nukes.

Even though the cold war is long over, I am still disturbed by how many nuclear weapons there are in the world. Even so, I would much rather live in this climate than one in which a single nation - a nation with a long history of interfering with other sovereign countries, a nation that is currently engaged in two wars of aggression - was the only nuclear power around.

Comment author: cousin_it 29 July 2009 02:00:02PM *  2 points [-]

I'm not sure everything would have happened as you describe, and thus not sure I prefer the alternative that actually happened. But your questions make me curious: do you also think the US was game-theoretically right to attack Iraq and will be right to attack Iran because those countries didn't do "whatever was necessary" to convince you they aren't developing WMDs?

Comment author: Wei_Dai 29 July 2009 02:52:25PM 5 points [-]

My understanding is that the Iraq invasion was done mainly to test the "spread democracy" strategy, which the Bush administration believed in, and WMDs were more or less an excuse. Since that didn't work out so well, there seems to be little chance that Iran will be attacked in a similar way.

Game theoretically, physically invading a country to stop WMDs is much too costly, and not a credible threat, especially since lots of countries have already developed WMDs without being invaded.

Comment author: UnholySmoke 29 July 2009 02:42:02PM *  2 points [-]

Should they have nuked the USSR on a well-founded suspicion?

I think from a rational perspective, the answer must be yes. [...] Do you really prefer the alternative that actually happened?

Utility function fail?

Comment author: khafra 26 October 2012 07:50:10PM 1 point [-]

I think that, if we stay out of the least convenient possible world, this is impractical because of the uncertainty of intel. In a world where there was genuine uncertainty whether Saddam Hussein was building WMD, it seems like it would be difficult to gain enough certainty to launch against another country in peacetime. At least, until that other country announced "we have 20 experimental nuclear missiles targeted at major US cities, and we're going to go ahead with our first full-scale test of a nuclear warhead. Your move."

Today, we see this problem with attribution for computer network exploitation from (presumably) state actors. It's a reasonably good parallel to MAD, because we have offensive ability, but little defensive ability. In this environment, we haven't really seen computer network attacks used to control the development of intrusion/exploitation capabilities by state or even private actors (at least, as far as I know of).

Comment author: Nornagest 26 October 2012 09:05:42PM *  3 points [-]

ICBMs didn't exist at the time -- intercontinental capability didn't arrive until the Soviet R-7 missile in 1957, eight years after the first successful Russian nuclear test, and no missiles were tested with nuclear warheads until 1958 -- making the strategic picture dependent at least as much on air superiority as on the state of nuclear tech. Between geography and military focus, that would probably have given the United States a significant advantage if they'd chosen to pursue this avenue in the mid-to-late 1940s. On the other hand, intelligence services were pretty crude in some ways, too; my understanding is that the Russian atomic program was unknown to the American spook establishment until it was nearing completion.