Let me tell you a parable of the future. Let’s say, 70 years from now, in a large Western country we’ll call Nacirema.

One day far from now: scientific development has continued apace, and a large government project (with, unsurprisingly, a lot of military funding) has taken the scattered pieces of cutting-edge research and put them together into a single awesome technology, which could revolutionize (or at least, vastly improve) all sectors of the economy. Leading thinkers had long forecast that this area of science’s mysteries would eventually yield to progress, despite theoretical confusion and perhaps-disappointing initial results and the scorn of more conservative types and the incomprehension (or outright disgust, for ‘playing god’) of the general population, and at last - it had! The future was bright.

Unfortunately, it was hurriedly decided to use an early prototype outside the lab in an impoverished foreign country. Whether out of arrogance, bureaucratic inertia, overconfidence on the part of the involved researchers, condescending racism, the need to justify the billions of grant-dollars that had cumulatively gone into the project over the years by showing some use of it - whatever, the reasons no longer mattered after the final order was signed. The technology was used, but the consequences turned out to be horrific: over a brief period of what seemed like mere days, entire cities collapsed and scores - hundreds - of thousands of people died. (Modern economies are extremely interdependent and fragile, and small disruptions can have large consequences; more people died in the chaos of the evacuation of the areas around Fukushima than will die of the radiation.)

An unmitigated disaster. Worse, the technology didn’t even accomplish the assigned goal - that was thanks to a third party’s actions! Ironic. But that’s how life goes: ‘Man Proposes, God Disposes’.

So, what to do with the tech? The positive potential was still there, but no one could doubt anymore that there was a horrific dark side: they had just seen what it could do if misused, even if the authorities (as usual) were spinning the events as furiously as possible to avoid frightening the public. You could put it under heavy government control, and they did.

But what was to stop Nacirema’s rivals from copying the technology and using it domestically or as a weapon against Nacirema? In particular, Nacirema’s enormous furiously-industrializing rival far to the East in Asia, which aspired to regional hegemony, had a long history of being an “oriental despotism” and still had a repressive political system - ruled by an opaque corrupt oligarchy - which abrogated basic human rights such as free speech, and was not a little racist/xenophobic & angry at historical interference in its domestic affairs by Seilla & Nacirema…

The ‘arms race’ was obvious to anyone who thought about the issue. You had to obtain your own tech or be left in the dust. But an arms race was terrifyingly dangerous - one power with the tech was bad enough, but if there were two holders? A dozen? There was no reason to expect all the wishes to be benign once everyone had their own genie-in-a-bottle. It would not be hyperbolic to say that the fate of global civilization was at stake (even if there were survivors off-planet or in Hanson-style ‘disaster refuges’, they could hardly rebuild civilization on their own; not to mention that a lot of resources like hydrocarbons had already been depleted beyond the ability of a small primitive group to exploit) or maybe even the human race itself. If ever an x-risk was a clear and present danger, this was it.

Fortunately, the ‘hard take-off’ scenario did not come to pass, as each time it took years to double the power of the tech; nor was it something you could make in your bedroom, even if you knew the key insights (deducible by a grad student from published papers, as concerned agencies in Nacirema proved). Rather, the experts forecast a slower take-off, on a more human time-scale, where the technology escalated in power over the next two or three decades; importantly, they thought that the Eastern rival’s scientists would not be able to clone the technology for another decade or perhaps longer.

So one of the involved researchers - a bona fide world-renowned genius who had made signal contributions to the design of the computers and software involved and had the utmost credibility - made the obvious suggestion. Don’t let the arms race start. Don’t expose humanity to an unstable equilibrium of the sort which has collapsed many times in human history. Instead, Nacirema should boldly deliver an ultimatum to the rival: submit to examination and verification that they were not developing the tech, or be destroyed. Stop the contagion from spreading and root out the x-risk. Research in the area would be proscribed, as almost all of it was inherently dual-use.

Others disagreed, of course, with many alternative proposals: perhaps researchers could be trusted to self-regulate; or, related research could be regulated by a special UN agency; or the tech could be distributed to all major countries to reach an equilibrium immediately; or, treaties could be signed; or Nacirema could voluntarily abandon the technology, continue to do things the old-fashioned way, and lead by moral authority.

You might think that the politicians would do something, even if they ignored the genius: the prognostications of a few obscure researchers and of short stories published in science fiction had turned out to be true; the dangers had been realized in practice, and there was no uncertainty about what a war with the tech would entail; the logic of the arms race had been well documented by many instances to lead to instability and propel countries into war (consider the battleship arms race leading up to WWI); the proposer had impeccable credentials and deep domain-specific expertise and was far from alone in being deeply concerned about the issue; there were multiple years to cope with the crisis after fair warning had been given, so there was enough time; and so on. If the Nacireman political system were ever to be willing to take major action to prevent an x-risk, this would seem to be the ideal scenario. So did they?

Let's step back a bit. One might have faith in the political elites of this country. Surely, given the years of warning as the tech became more sophisticated, people would see that this time really was different, that this time it was the gravest threat humanity had faced, that the elite scientists’ warnings of doomsday would be taken seriously; surely everyone would see the truth of proposition X, leading them to endorse Y and agree with the ‘extremists’ about policy decision Z (to condense our hopes into one formula); how could we doubt that policy-makers and research funders would begin to respond to the tech safety challenge? After all, we can point to some other instances where policymakers reached good outcomes for minor problems like CFC damage to the atmosphere.

So with all that in mind, in our little future world, did the Nacireman political system respond effectively?

I’m a bit cynical, so let’s say the answer was… No. Of course not. They did not follow his plan.

And it's not that they found a better plan, either. (Let's face it, any plan calling for more war has to be considered a last resort, even if you have a special new tech to help, and is likely to fail.) Nothing meaningful was done. "Man plans, God laughs." The trajectory of events was indistinguishable from bureaucratic inertia and self-serving behavior by various groups - the usual story. After all, what was in it for the politicians? Did such a strategy swell any corporation’s profits? Or offer scope for further taxation & regulation? Or could it be used to appeal to anyone’s emotion-driven ethics by playing on disgust or purity or in-group loyalty? The strategy had no constituency except those who were concerned by an abstract threat in the future (perhaps, as their opponents insinuated, they were neurotic ‘hawks’ hellbent on war). Besides, the Nacireman people were exhausted from long years of war in multiple foreign countries and a large domestic depression whose scars remained. Time passed.

Eventually the experts turned out to be wrong, but in the worst possible way: the rival took half the projected time to develop its own tech, and the window of opportunity snapped shut. The arms race had begun, and humanity would tremble in fear, wondering whether it would live out the century or the unthinkable would happen.

Good luck, you people of the future! I wish you all the best, although I can’t be optimistic; if you survive, it will be by the skin of your teeth, and I suspect that, due to hindsight bias and near-miss bias, you won’t even be able to appreciate afterwards how dire the situation was, and will forget your peril or minimize the danger or reason that the tech couldn’t have been that dangerous since you survived - which would be a sad & pathetic coda indeed.

The End.

(Oh, I’m sorry. Did I write “70 years from now”? I meant: “70 years ago”. The technology is, of course, nuclear fission, which had many potential applications in the civilian economy - if nothing else, every sector benefits from electricity ‘too cheap to meter’; Nacirema is America & the eastern rival is Russia; the genius is John von Neumann, the SF stories were by Heinlein & Cartmill among others - the latter giving rise to the Astounding incident; and we all know how the Cold War led civilization to the brink of thermonuclear war. Why, did you think it was about something else?)

This was written for a planned essay on why computational complexity/diminishing returns doesn't imply AI will be safe, but who knows when I'll finish that, so I thought I'd post it separately.

98 comments

So one of the involved researchers - a bona fide world-renowned genius who had made signal contributions to the design of the computers and software involved and had the utmost credibility - made the obvious suggestion. Don’t let the arms race start. ... Instead, Nacirema should boldly deliver an ultimatum to the rival: submit to examination and verification that they were not developing the tech, or be destroyed.

Damn those politicians! Damn their laziness and greed! If only they'd had the courage to take over the world, then everything would have been fine!

Don't misunderstand; that's what's being proposed here. Hegemony would not have been enough. You need inspectors in all the research institutions, experienced in the local language and culture. You need air inspections of every place a pile might be constructed, quite challenging in 1945. You need to do these things not just to your rival, but to everyone who aspires to become your rival. You need your allies to comply, voluntarily or not. Whenever anyone challenges your reign openly, you have to be willing and able to destroy them utterly. You can't miss even once, because when you do you won't get nuclear ...

gwern · 10y · 6 points
It's not that difficult. Think about the flowchart of materials that go into atomic bombs. You don't need to control everyone everywhere. What you need to do is control the raw uranium ore and its derivatives and the specialty goods useful for things like ultracentrifuges, monitor the rare specialists in shaped explosives and nuclear physics, sample the air for nuclear substances, and so on. There are many natural chokepoints, and many steps are difficult or impossible under even light surveillance: you need a lot of raw uranium ore, thermal diffusion purification requires comically much electricity, centrifuges emit characteristic vibrations, laser purification is impossible to develop without extensive experience, the USA and other nations already routinely fly air-sampling missions to monitor fallout from tests... I won't say that nuclear counterproliferation efforts have been perfect, but I will point out that a fair number of nations have had considerable difficulty getting their nuclear programs working (since he's come up already, how well was Saddam Hussein's nuclear program going when the issue was rendered moot by the US invasion?), and the successful members often have aid from previous members of the nuclear club & no serious interference in the form of embargoes, much less active monitoring and threats from a jealous existing nuclear club member.

You're right, because clearly the status quo is totally a solution 'forever'. Retrospective determinism, eh? 'Because X did not happen, it was inevitable that X would not happen; therefore, inertia was the right choice.' Nor is winning the lottery an argument in favor of playing the lottery. (Not to mention that if inertia had been the wrong choice, we wouldn't be here arguing about it, and so one could justify any policy whatsoever. Reasoning that 'we did Y and we survived! so Y must be a great policy' is not a good way to try to analyze the world.)
fezziwig · 10y · 7 points
I'd like to address your other points, but I think we have to talk about your last paragraph first. You're quite right; that the cold war did not end the world in our particular branch is not proof that the cold war was survivable in more than a tiny handful of possible worlds. But let me remind you in turn that "von Neumann's plan would have been worse than the cold war" is not the same as "the cold war was safe", "the cold war was good", "the cold war doesn't share any of the weaknesses of von Neumann's plan", or even "the cold war was terrible but still the best choice we had". I'm arguing only that narrow thing: that our forefathers were right to reject von Neumann's plan. Fair enough?
gwern · 10y · 5 points
Fair enough, but a lot of the objections here seem to be based on the argument that 'the Cold War was reasonably objectively safe (and we know so for [anthropically biased reasons]), while unilateral strikes or ultimatums are objectively dangerous; hence the Cold War was the better choice', while I think the right version is 'the Cold War was objectively extremely dangerous, while unilateral strikes or ultimatums are [merely] objectively dangerous; hence the Cold War was the worse choice'. I don't think people are directly comparing the scenarios and merely making a relative judgment.
fezziwig · 10y · 0 points
(Though for what it's worth, I actually do agree with your point about AI, insofar as the analogy holds: we could get into a Cold-War-like situation and humanity would probably not enjoy the result. I just don't think world conquest is the answer.)

This was pretty transparent. And I disagree with it.

I'd observe that though the peculiarities of the cold war actually made nuclear peace tougher than it would have been in most time periods, we still made it through; and you can see that the current multi-polar world is substantially safer even though there are more countries with nuclear weapons than ever before.

Also, trying the von Neumann plan (or your description of it) would have been awful, and would almost certainly have triggered conventional war if it had been followed through on. Not only that, but the USSR's development of the bomb did not mark the "closing of the window": the US had more bombs and superior bomb-delivery capabilities for several years after both countries had the bomb, and the US still didn't go to war. And even in retrospect that looks like the right choice, since war would have devastated Europe and eventually resulted in a "parable" essay about how the super-weapon motivated its developing nation to bloodily enforce global hegemony.

James_Miller · 10y · 5 points
I think the opposite, as it would have probably prevented the Korean War. After the U.S. developed hydrogen bombs, John von Neumann helped create the U.S. military strategy that in any war with the Soviet Union we would seek to kill their leaders. Had von Neumann, at the end of WWII, given an atomic ultimatum to Stalin, it would likely have included the threat that if Stalin didn't comply we would do everything possible to kill him. Given that Stalin's primary concern was the welfare of Stalin, this would probably have been enough to get Stalin to officially comply; he certainly would have cheated if he could have gotten away with it, but with a big enough inspection effort the cheating would have been unproductive.
Punoxysm · 10y · 2 points
The US threatened to depose Saddam Hussein unless he allowed inspections. And obviously his own self-interested motives led him to allowing those inspections and a verifiably disarmed Iraq has pretty much not been a foreign policy issue since. So I guess you're right!

I don't think you understand what happened. Saddam thought that his former close sponsor & ally needed him against Iran because without our Sunni man in Iraq and the fear of WMDs the country would become a Shi'a & Persian pawn*. (You remember the whole Iran-Iraq War and 'exporting the revolution' and Hezbollah, right?)

Huh. How about that. Why, it looks like that's what happened under Maliki and that's why the country is currently being torn apart and the Iraqi government is inviting Iranian troops in to help restore order.

It would seem Saddam's mistake was in thinking the USA was run by rational actors, and not run by morons who would sabotage their geopolitical interests in the interests of revenge against a "guy that tried to kill my dad at one time". As my parable points out, one should not expect that sort of rational planning from the USA or indeed large countries in general.

So no, I think your objection does not hold water once one actually knows why the inspections were refused, and does not apply to the hypothetical involving Stalin.

* EDIT: BTW, I will note that this is a classic example of failing to apply the principle of charity, demonizing enemies, and...

Punoxysm · 10y · 4 points
I think you actually illustrate how correct I am. When there's uncertainty about how sincere a threat is, especially because virtually all threats of military action are negative-value for both parties if executed, and when the threat sets a precedent that the threatening party could continually impose its will, it's natural to test the threatening party's commitment. All you're saying is that Saddam called the USA's bluff and was wrong and it was disastrous. That could EASILY have happened with an attempt by the US to demand inspections from Russia.

Think about it further: you are threatened by a nation with a newly developed super-weapon, but only modest stockpiles and uncertain ability to deliver it, to not develop your own version of the super-weapon. The demand is that you submit to thorough inspections, which your enemy would certainly use to spy as extensively as possible on you. Not to mention that it would set a precedent where you'd have to back down for the next demand, and the next; anything's better than being a smear of ash, after all, isn't it? Or you could consider your excellent military position right next to your enemy's allies, along with the amount of safety provided by a combination of secrecy and bunkers, and decide that the best move - the only move if you want to resist the slide towards subjugation - is to call your enemy's bluff.
gwern · 10y · 1 point
Um, no, because the USSR had no reason to think and be correct in thinking it served a useful role for the USA which meant the threats were bluffs that were best ridden out lest it damage both allies' long-term goals.

I'm amazed. You present an example which you think is a great example of irrationality on a dictator's part, I show you are wrong and have no idea why Saddam resisted, and you think you can spin it to support your claims and it actually illustrates how correct you are! What could possibly falsify your criticisms?

How well did that work in the Cold War against non-nuclear nations..? Everyone understands the logic of blackmail and the point of using Schelling fences to avoid sliding down the slippery slope.

And likely lose, with no superweapon of your own, no prospect of developing it soon under the chaos of war (it was already a top-priority program for Stalin; war could only have delayed it), being cut off from one of one's most important trading partners which kept one from economic collapse during WWII; and a self-centered psychopath like Stalin would have to worry about how well bunkers would really protect him against weapons they had no direct experience with and which were increasing in tonnage each year.
ThisSpaceAvailable · 10y · 0 points
Do you mean "Iraq", rather than "USSR"? I don't think Punoxysm is saying that it's an example of irrationality. Punoxysm is saying that it's a reasonable reaction, and it shows that calling the bluff would also be a reasonable response in Stalin's case. You haven't shown that Punoxysm is wrong, you've argued that Punoxysm is wrong. I think the answer to that is rather obvious. If Hussein had allowed the inspections, that would support your position. It's rather odd to be calling someone's position unfalsifiable, simply because they are not accepting your explanations for why evidence falsifying your position is unpersuasive.
gwern · 10y · -2 points
No, I meant USSR. Iraq was in a special position of being both a former close US ally and still in the valuable-to-the-US geopolitical position which made it an ally in the first place, and that is why Saddam engaged in the reasoning he did. The USSR was a former close US ally, yes, but played no such valuable role, and both recognized each other as their principal threat after the Nazis were defeated. I don't know how I can point out he's wrong any more clearly. Saddam had good reason to think the threats were bluffs. Stalin would not have, because those reasons did not apply to the USSR. The situations are not the same.

Yes, but we already know he didn't. So the question is his motivations; Punoxysm has asserted that if he did it for irrational reasons, then it supports his criticism, and when I pointed out that he did it for rational reasons, he then claimed it supported his position! So why did he not say in the first place simply, 'Saddam didn't allow inspections; this is evidence the strategy cannot work'? Obviously, because he felt the irrational qualifier was necessary right up until I produced the references. (It is a basic principle of natural language that you do not use unnecessary restrictions or qualifiers when they are not relevant.)
ThisSpaceAvailable · 10y · 0 points
So, just to be clear: you believe that in the hypothetical world in which the US threatens to attack the USSR if it does not allow inspections, the USSR would have no reason to think this serves a useful purpose, and would be therefore justified in concluding it was a bluff? You are saying that there are reasons for thinking it was a bluff that did not apply to the USSR. That's denying the antecedent. It's not clear to me what you're referring to.
gwern · 10y · -4 points
No, that's like, the opposite of what I mean. I'm baffled you could not understand this (and similarly that you had to ask for clarification about my BMR example in the other comment when I had said clearly that a statement would be evidence against an impending bust). If this is the best you can read what I've written, then I think maybe it's time for me to call this conversation quits. I don't know if you're being deliberately obtuse or think too differently, but either way...

Good thing we're not using deductive logic! Denying the antecedent is, like almost all classical fallacies, a useful piece of Bayesian evidence. By removing one potential way for it to be a bluff, the probability of its being a bluff necessarily falls; by removing the antecedent, the consequent is that much less likely.

'He' is Saddam. Obviously. That was how the comment thread started and what I was objecting to, and I even name Saddam in the same paragraph you claim to be confused by!

----------------------------------------

EDIT: looking back through your comments, you seem to consistently and repeatedly misunderstand what I said and ignore parts where I explained clearly what I meant, in a way well beyond an ordinarily obtuse commenter. I now think you're doing this deliberately, and so I'm going to stop now.
ThisSpaceAvailable · 10y · 0 points
I read "it" in "it served a useful role" as referring to "demanding inspections". And I took "which meant the threats were bluffs" to mean "in the hypothetical involving the USSR, the threats were bluffs", because the prior clause had clearly established that you were talking about the USSR. Maybe instead of accusing me of bad faith, you could actually try to clear up the confusion. I'll be downvoting your posts until you do. It would be nice if you could write your sentences with correct and clear grammar, especially when dealing with complex compound sentences, and if you can't be bothered to do so, then don't complain about people having trouble parsing your sentences. When there's a failure in communication, attributing all of the blame to the other person is a very anti-rationalist position to take.

You didn't merely say that the probability is lower; you presented it as a logical certainty.

You said that Punoxysm asserted that Saddam did it for irrational reasons. I don't think it is entirely clear which statement by Punoxysm you consider to be making that assertion.

If I had been unclear about who you were talking about, I would have said who, rather than what.
[anonymous] · 10y · 3 points
In a world where a possibly-irrational actor is using "do what we tell you or you get nuked" as an instrument of foreign policy against a load of other possibly-irrational actors, how long would it be before something went horribly wrong?
gwern · 10y · 7 points
Is a world in which only one possibly-irrational actor has nukes and can make threats more likely to go wrong or upon going wrong go horribly wrong, than a world in which dozens of possibly-irrational actors have nukes and can make threats?
solipsist · 10y · 1 point
I don't think this is a terribly strong reply. Saddam Hussein's hypothesis about US policy towards him was mistaken. Perhaps his hypothesis was based on a solid conceptual framework about the US acting in its own long-term self-interest. But we've tested Hussein's hypothesis, it was mistaken, and Saddam died.
gwern · 10y · 2 points
It is a strong reply. What the Saddam example shows is that ultimatum givers can be irrational; but that's not what you need in order to show that a USA ultimatum to the USSR would have failed! You need to show that the USSR would have been irrational. The Saddam example doesn't show that. It shows that 'crazy' totalitarian dictators can actually be more rational than liberal Western democracies, which is what is needed for the proposed plan to work. That's why I say that the Saddam example supports the proposed plan rather than undermining it: it establishes the sanity of the only actor who matters once the plan has been put into action - the person receiving the ultimatum.
solipsist · 10y · 1 point
So, all else held constant: if Saddam Hussein had capitulated to US demands and (counterfactually) did not rebuff inspectors, you would count that as evidence against the proposed plan? ETA: In the interests of positive feedback - I like the overall post, and I'm just picking on this individual comment.
gwern · 10y · 7 points
No, that would be evidence for it. I know you are trying to show I am having it both ways, but I am not. Think of the full tree of possibilities: ultimatum/no-ultimatum, bluff/real, rational-refusal/irrational-refusal. If a real ultimatum had been issued and Saddam had then refused for irrational reasons, that would be strong evidence against the plan, because that's the situation which is predicted to go well. And that's the situation Punoxysm thought he'd found, but he hadn't.

(Actually, you're the second person today to think I was doing something like that. I mentioned on IRC I had correctly predicted to Gawker in late 2013 that the black-market BMR would soon be busted by law enforcement - as most of its employees would be within two months or so while setting up the successor Utopia black-market - mentioning that among other warning signs, BMR had never mentioned detecting attempts by law enforcement to infiltrate it; someone quoted at me that surely 'absence of evidence is evidence of absence'? Surely if BMR had claimed to be seeing law enforcement infiltration I would consider that evidence for infiltration, so how could I turn around and argue that lack of BMR claims was also evidence for infiltration? Yes, this is a good criticism - in a binary context. But this was more complex than a binary observation: there were at least 3 possibilities:

1. law enforcement was not trying to infiltrate at all;
2. law enforcement was trying & had failed; or
3. law enforcement was trying & had succeeded.

BMR's silence was evidence they didn't spot any attacks, so this is evidence that law enforcement was not trying, but it was also evidence for the other proposition that law enforcement was trying & succeeding; a priori, the former was massively improbable because BMR was old and notorious and it's inconceivable LE was not actively trying to bust it, while the latter was quite probable & had just been done to Silk Road. Hence, observing BMR silence p
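The three-hypothesis update described above can be sketched numerically. This is only an illustrative toy: the prior and likelihood numbers below are invented for the sketch, not figures from the comment, but they show how a single observation ("the market reports no detected infiltration") can raise the posterior of two hypotheses at once while lowering the third.

```python
# Toy Bayesian update for the "argument from silence" with three hypotheses.
# All numbers are illustrative assumptions, not data.
priors = {
    "no_attempt": 0.02,       # LE never tried to infiltrate (implausible a priori)
    "tried_failed": 0.49,     # LE tried and failed
    "tried_succeeded": 0.49,  # LE tried and quietly succeeded
}

# P(market reports no detected infiltration | hypothesis)
likelihood_silence = {
    "no_attempt": 0.95,       # nothing to detect, so silence is expected
    "tried_failed": 0.30,     # failed attempts often leave traces the market would mention
    "tried_succeeded": 0.80,  # a successful, quiet infiltration goes unnoticed
}

# Bayes' rule: posterior ∝ prior × likelihood, then normalize.
unnormalized = {h: priors[h] * likelihood_silence[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

for hypothesis, prob in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{hypothesis}: prior {priors[hypothesis]:.2f} -> posterior {prob:.3f}")
```

Under these assumed numbers, silence moves probability mass toward both "no attempt" and "tried & succeeded" at the expense of "tried & failed" - which is exactly the non-binary structure the comment argues a simple "absence of evidence" slogan misses.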
Benya · 10y · 1 point
Incidentally, the same argument also applies to Governor Earl Warren's statement quoted in Absence of evidence is evidence of absence: He can be seen as arguing that there are at least three possibilities, (1) there is no fifth column, (2) there is a fifth column and it is supposed to do sabotage independent of an invasion, (3) there is a fifth column and it is supposed to aid a Japanese invasion of the West Coast. In case (2), you would expect to have seen sabotage; in cases (1) and (3), you wouldn't, because if the fifth column were known to exist by the time of the invasion, it would be much less effective. Thus, while observing no sabotage is evidence against the fifth column existing, it is evidence in favor of a fifth column existing and being intended to support an invasion. I recently heard Eliezer claim that this was giving Warren too much credit when someone was pointing out an interpretation similar to this, but I'm pretty sure this argument was represented in Warren's brain (if not in explicit words) when he made his statement, even if it's pretty plausible that his choice of words was influenced by making it sound as if the absence of sabotage was actually supporting the contention that there was a fifth column. In particular, Warren doesn't say that the lack of subversive activity convinces him that there is a fifth column, he says that it convinces him "that the sabotage we are to get, the Fifth Column activities are to get, are timed just like Pearl Harbor was timed". Moreover, in the full transcript, he claims that there are reasons to think (1) very unlikely, namely that, he alleges, the Axis powers all use them everywhere else: I.e., he claims that (1) would be very unique given the Axis powers' behavior elsewhere. On the other hand, he suggests that (3) fits a pattern of surprise attacks: And later, he explicitly argues that you wouldn't expect to have seen sabotage in case (3): So he has the pieces there for a correct Bayesian argument that a
ThisSpaceAvailable · 10y · -2 points
So, if BMR had claimed to be seeing infiltration, would you consider that evidence that BMR is not about to be busted?
gwern · 10y · 0 points
Yes. If a big market one expects to be under attack reports fending off attack, then one would be more optimistic about it: (That said, that only applies to the one particular kind of observation/argument from silence; as I told Chen, there were several reasons to expect BMR to be short-lived on top of the general short-livedness of black-markets, but I think the logic behind those other reasons doesn't need to be explained since they're not tricky or counterintuitive like the argument from silence.)
-2ThisSpaceAvailable10y
Then it seems to me that when responding to "Surely if BMR had claimed to be seeing law enforcement infiltration I would consider that evidence for infiltration, so how could I turn around and argue that lack of BMR claims was also evidence for infiltration?", you should lead off with "I would consider that evidence for infiltration, but against an imminent bust", before launching into all the explanation. That way, it would be more clear whether you are denying the premise ("you'd consider that evidence for your thesis, too"), rather than just the conclusion. And the phrase "If a big market one expects" would be a lot clearer with "that" between "market" and "one".
-1ThisSpaceAvailable10y
If the US had been able to credibly pre-commit to the invasion if inspections were not allowed, then that pre-commitment would not be foolish. And once they had attempted such a pre-commitment, not following through would have harmed their ability to make pre-commitments in the future. A willingness to incur losses to punish others is a vital part of diplomacy. If that's "irrational", you have a very narrow view of rationality, and your version of "rationality" will be absolutely crushed in pretty much any negotiation. So, if I'm following correctly, your position was that the US was foolish for following through ... and Hussein was foolish for not realizing they would follow through. So if everyone is foolish, how can you argue that because X would be in hypothetical Stalin's best interests, it somehow follows that he would do X? Maybe it's the late hour, but I'm having trouble seeing how "The other guy may decide we're bluffing and call us on it" does not apply to hypothetical Stalin.
1gwern10y
A willingness to incur losses is a useful part - if you are seeking useful goals. I may well want to follow through on a threat in order to preserve my credibility for future threats, but if I choose to make threats for stupid self-defeating goals, then precommitting is a horrible irrational thing which destroys me. The USA would have been much better off not invading Iraq and losing some credibility, because the invasion of Iraq would have predictably disastrous consequences for both the USA and Iraq which were far worse than the loss of credibility.

The first rule of strategy: don't pursue stupid goals. If you think that you can rationally pursue any goal unrelated to what you actually want, then you have a very narrow view of rationality and your version of rationality will be absolutely crushed in pretty much any negotiation. You do not want to be able to precommit to shooting yourself in the foot.

The US was foolish for issuing threats to achieve a goal that harmed its actual interests, Saddam was mistaken but reasoning correctly in treating it as a bluff, and the US was even more foolish to carry through on the threat.

Because in that scenario, Stalin would not be thinking that the USA is doing something so stupid it must be a bluff, because the threat wouldn't be so stupid that it is probably a bluff.
-5ThisSpaceAvailable10y

the genius is John von Neumann

Historical note: According to Prisoner's Dilemma by William Poundstone, von Neumann didn't suggest issuing a nuclear ultimatum but instead advocated a surprise first strike against the Soviet Union. Bertrand Russell did suggest a nuclear ultimatum, but with the goal of establishing a world government rather than just non-proliferation.

In a previous related discussion, I noted that I. I. Rabi and Enrico Fermi did propose using the threat of nuclear attack to deter the development of fusion weapons. However, in my online searches, I haven't found any prominent historical figures suggesting the exact thing that you (and I in that previous thread) are suggesting here, of using a nuclear threat just to prevent the proliferation of fission weapons, which is kind of curious...

Nacirema huh? I feel stupid now.

It's a classic! You might enjoy some of the other articles/parables about the Nacirema: https://en.wikipedia.org/wiki/Nacirema

2Sabiola10y
So that's where the N comes from! I was wondering why it was Nacirema instead of Acirema.
8gwern10y
I think at least part of it is that 'Acirema' is way more recognizable as a variant on 'America' than is 'Nacirema'. The 'N' is a major decoy, while 'Acirema' has the same word shape ("The theory holds that a novel bouma shape created by changing the lower-case letters to upper-case hinders a person’s recall ability.")

Well, in reality Americans understood that Stalin would never agree to such a plan, so it meant war. They did not have enough nukes for guaranteed victory (a few cities were acceptable losses for the USSR), did not have any reliable information about Soviet nuclear research, and knew how badly a war with Russia could end.

7James_Miller10y
Remember the firebombings of Dresden and Tokyo. At the time, the U.S. didn't need nuclear weapons to inflict mass damage by air on cities. And since the Soviets would not have been able to have a concentrated mass tank force (since it would have been nuked) our tank forces would have been unstoppable.
3Punoxysm10y
The US took a long time to establish the air superiority necessary to execute those firebombings; and tactical nuclear weapons would have been available in very low numbers and difficult to deploy effectively [anything you nuke, your own forces can't pass; not even because of radiation but because of infrastructure destruction] (and certainly the Soviets could have adapted; they outnumbered Allied forces substantially in Europe). But forget all that; how is a bloody, brutal war immediately after WWII to subjugate the Soviets preferable to the Cold War as it happened? Would it have reduced x-risk in the long term? I doubt that; how long could the US have monopolized nuclear weapons, especially if it immediately used them as threats that would terrify and antagonize every other nation in the world?
1James_Miller10y
Let's assume the many worlds hypothesis is correct and consider all of the branches of the multiverse that share our 1946. In how many of them did the cold war turn hot? For what percentage would it have been better to make the threat? Also, a world in which just the United States has atomic weapons would have many additional benefits such as probably higher world economic growth rates because of lower defense spending.
4Punoxysm10y
Once we get into talking about alternate histories, our ability to have an evidence-based discussion pretty much goes out the window. I'll say the following:

1) The cold war as we know it did come "close" in some sense to going hot; that's bad, that's x-risk in action.

2) All things considered, the last 70 years as they actually happened went a hell of a lot better than the 70 years before, just on a political and military basis alone (so disregarding technology).

3) Ultimatums meant to monopolize the atomic bomb make sense if the goal is enacting a US-led One-World-Government, even if you believe WWIII would have broken out after ultimatums somehow fail to lead to peace.

4) I DO believe WWIII would have broken out.

5) I believe an attempted One-World-Government or other extreme attempt at global hegemony by the US would have been a disaster even without a USA-USSR WWIII.
3Kaj_Sotala10y
Given that a massive amount of quantum-scale randomness would have to go systematically in a different direction for it to have any noticeable macro-scale effect, and that even then most macro-scale effects would be barely even noticeable, isn't the default answer to questions like this always "in the overwhelming majority of branches, history never noticeably diverged from ours"?
2James_Miller10y
Wouldn't quantum effects have some influence on who gets cancer from background radiation, and wouldn't the impact of this ripple in a chaotic way throughout the world so that, say, Petrov isn't the one on duty on 9/26/1983?
3Baughn10y
Absolutely. Human minds are a lot more stable than they feel - a decision that feels "close; 60/40" would still fall on the 60% side >>60% of the time - but chaos will quickly bubble up through other channels.
0Lalartu10y
Compared to destruction done by German forces, American strategic bombing would have been just an annoyance. Also, USA would be unable to achieve air superiority, and their bombers would suffer heavy losses. Using nukes against even heavily concentrated tanks (~50 tanks per kilometer of frontline, as in major tank battles) is just a waste of nukes. In a clash between Soviet and American tank forces Americans would have been curbstomped.
1James_Miller10y
No, at the very least we would have been able to attack Soviet cities from bases in China and Japan that the Germans couldn't hit.

I'm not sure about this, since the goal would be to create a hole for your tanks to exploit so you could encircle the enemy.
2Lalartu10y
No, the main Soviet industrial centers were far beyond the range of any bombers, whether from Europe or from China. Also, bombing a city does far less to reduce military production than capturing it (look at the figures for Germany 1944-1945). So you suggest taking America's nukes (a dozen in 1946) and dropping them on Soviet tanks, from strategic bombers that can hardly hit a target smaller than a city, to gain a modest tactical advantage (bringing in two battalions of tank destroyers would have the same effect)? Using such brilliant plans, the USA would surely have lost WWIII.
0James_Miller10y
I'm far from an expert on tank battles, but my impression is that what you really want to do is encircle the enemy tanks to cut them off from supplies. Being able to punch a small hole in enemy defenses would be extremely helpful. My impression was also that strategic bombers had difficulty hitting targets because of interference from anti air defenses and enemy aircraft, and this wouldn't have been a problem when attacking targets in the field under conditions under which the U.S. had air superiority.
3Lalartu10y
An encirclement operation works on a much bigger scale: the "small hole" here is tens of kilometers wide, through a defence line that is also tens of kilometers deep. Using nukes against tanks makes no sense unless the numbers of nukes and tanks are comparable. The poor accuracy of strategic bombing was because of high altitude; at low altitude these bombers are very easy targets for anti-aircraft artillery (Soviet divisions had lots of it), and dropping a nuke is a suicide mission.

I'm not sure whether you intended this, but I suppose "70 years from now" does mean "70 years ago", in an extremely literal, unidiomatic sense of "from".

2gwern10y
That was actually pointed out to me after I finished the draft, but I decided to leave it be. I think it sounds better, and the unintentional double-meaning is nice.

deducible by a grad student from published papers, as concerned agencies in Nacirema proved

Probably because I've read a few accounts of a grad student doing this, I realized what you were doing by this sentence.

I have to say that nuclear warfare was less of a human-extinction risk than some people tend to think or than is directly suggested by this text. Even a straight all-out war between the United States and Soviet Union using their full arsenals would not have caused human extinction, nor likely have prevented some technological societies from rebuilding if they didn't outright survive. I've seen expert analyses out there, on raw destruction and on factors like subsequent global climate devastation, showing this conclusion for any plausible military contingencies…

9gwern10y
The only one I've personally read is Herman Kahn's On Thermonuclear War, which oversimplifies a lot and generally tries to paint matters as optimistically as possible; as well, people from that era like Samuel Cohen in his memoirs describe Kahn as willing to fudge numbers to make their scenarios look better. Personally, I am not optimistic.

Remember the formulation of existential risk: not just extinction, but also the permanent curtailment of human potential. So if industrialized civilization collapsed permanently, that would be a serious x-risk almost up there with extinction itself. I agree that I don't think nuclear war is likely to immediately cause human extinction, but if it destroys industrialized civilization, then it's setting us up to actually be wiped out over the coming millennia or millions of years by a fluke pandemic or asteroid or any of the usual natural x-risks. Coal, oil, surface metals, and many other resources are effectively impossible to extract with low tech levels like, say, the 1800s. (Imagine trying to frack or deep-sea mine or extract tar with 1800s metallurgy.)

Historically, we see that entire continents can go for millennia on end with little meaningful change economically; much of Africa might as well not be in the same world, for all the good progress has done it. Intellectual traditions and scholarship can become corrupted into meaningless repetition of sacred literature (how much genuine innovation took place in China from AD 0 to AD 1800, compared to its wealth and large intelligentsia? why do all acupuncture trials 'succeed' in China and Japan when it's shown to be a worthless placebo in Western trials?). We still don't know why the Industrial & Scientific Revolutions took place in Western Europe starting around the 1500s, when there had been urbanized civilizations for millennia and China in every way looked better, so how could we be confident that if humanity were reduced to the Dark Ages we'd quickly recover?…
1solaire10y
It feels like we have talked past each other, given this and responses to other comments. I do not think this really addressed a core misconception shaping the debate, or at best it contradicts historical expert analysis. Would you call it "industrial collapse" if, following a full-scale nuclear war, present-day Australia was still standing a month later with little military destruction or human casualties?

I am not directly an expert in the field, and climate science in particular has advanced a lot compared to historical research (on all topics, not just nuclear winter), but I have read some different authors. Also to the point, the sheer volume of expert work, characterized at best by conflicting opinions should you accept the most pessimistic nuclear-warfare predictions, is worth considering. Sagan and Turco and others repeatedly collaborated on several high-profile works, and the state of expert science could accurately be said to have advanced over time. See for example: http://www.atmos.washington.edu/~ackerman/Articles/Turco_Nuclear_Winter_90.pdf This particular paper doesn't discuss, say, military strategy other than very broad consensus (e.g. both sides would favor Northern Hemisphere targets), though see its many cited and other sources.

Even conditionally setting aside, for the purpose of hypothetical consideration, the lower prior probability of certain full-scale military conflicts: direct, targeted destruction of more than about 20% of the world population as a military and strategic outcome just wasn't feasible, ever. This popular misconception might be readily dismissed by those of us here, but recognize that large amounts of past research went into fully trying to understand (admittedly we still don't, completely) the subsequent climate and ecological effects. The latter are the only real x-risk concern from a technological and natural-science standpoint. A few degrees Celsius of temperature change globally and other havoc is not…
2gwern9y
I assume you mean here if Australia escaped any direct attack? Sure.

The lesson of "I, Pencil": no one person (or country) knows how to make a pencil. Australia is heavily integrated into the world economy: to caricature, they mine iron for China, and in exchange they get everything else. Can Australia make an Intel chip fab using only on-island resources? Could it even maintain such a chip fab? Can Australia replace the pharmaceutical factories of the USA and Switzerland using only on-island resources? Where do the trained specialists and rare elements come from? Consider the Great Depression: did Australia escape it? If it cannot escape a simple economic slowdown because it is so highly intertwined, it is not going to escape the disruption and substantial destruction of almost the entire scientific-industrial-technological complex of the Western world. Australia would immediately be thrown into dire poverty and its advanced capabilities would begin decaying. Whether Australia becomes a new Tasmania of technology loss will depend on how badly mauled the rest of the world is, though, I would guess.

An instantaneous loss of 10-20% of population and destruction of major urban centers is pretty much unprecedented. The few examples I can think of with similar levels of population loss, like the Mongols & Iran or the Spanish & the New World, are not promising. But none of those countries were responsible for the Industrial or Scientific Revolutions. Humanity would survive... much as it always has. That's the problem.

I've read this paragraph 3 times and I still don't know what you're talking about. You're being way too vague about what experts or what predictions you're talking about, or what you're responding to, or how it connects to your claims about Australia.
0Luke_A_Somers10y
Well, yes... It persisted until 1453. Rome wasn't the center of the Roman Empire since around 330. The big idea that allows modern civilization is that you don't worship the knowledge, you go out and test it... that's the main thing, and that would persist easily. Knowing about germs is another biggie. That sort of stuff is spread very widely (if thinly), and could allow a rebound. But to get that rebound, it needs to have been there in the first place. Mere users of civilization who haven't become modern themselves would not get this boost. (see: all case-studies in Collapse, if I'm not mistaken) Also, the last I checked on acupuncture, the placement is unimportant, but getting stuck with needles does help with pain. So they're continuing doing something that works, but they haven't removed unimportant details.
8gwern10y
Either way you want to count, from the first Roman conquests to the fall of the West, or from the fall of the West to the fall of the East. Both get you periods comparable to or longer than the entire history of the Industrial & Scientific Revolutions so far.

I don't think 'testing things' is as easy or trivial as you think. It's very easy to 'test' something and get exactly the result you want. Or get a result which means nothing. Cargo cult science & thinking is the default, not the aberration. When science goes bad, it doesn't look like 'we've decided we aren't going to do Science anymore', it looks like this: http://lesswrong.com/lw/kfb/open_thread_30_june_2014_6_july_2014/b1u3 (To use an example from yesterday.) It looks like science, everyone involved thinks it's science, it passes all the traditional rituals like peer review and statistical tests, but it means next to nothing. The same way millennia of theologians or magicians or alchemists thought they were doing something useful and acquiring knowledge. Once the culture of real science is lost, I'm not sure it has a good chance of surviving. How well did the spirit of Greek philosophy survive the Roman Empire? Yes, eventually it came back, but there's anthropic bias there (we probably wouldn't be discussing science if Greek philosophy & logic hadn't survived somehow), and consider the chancy transmission of a lot of it from Greek to Arabic back to Europe.

Knowledge of germs will degenerate almost immediately into classic taboos and miasmas and evil spirits. How well does folk understanding of antibiotics accord with reality? When people routinely discontinue antibiotic treatment because they feel better, are they exhibiting an understanding of germ theory? Or consider the popularity of anti-vax among the most highly educated as we speak...

I was under the impression that sham acupuncture generally performed comparably to 'real' acupuncture: https://en.wikipedia.org/wiki/Acupuncture
0Luke_A_Somers10y
Right. But real science is widespread. There are research universities in Lesotho, and I've met professors from there and they know how science works. They've done it, and continue to do it. Exactly. The sham acupuncture still involves poking people with needles! It's just not aligned.
6gwern10y
And how much of the culture of science has spread through Lesotho? Or would survive the university being shut down? Or survive a single charismatic professor leaving and being replaced by a corrupt leader who demands publishable results? The question isn't whether Science exists in the world, but to what extent it's a delicate flower that lives in a greenhouse and will quickly die or become a shambling parody of itself when conditions change, and whether it can survive something like the collapse of civilization. It does? I thought sham acupuncture involved either needle-less approaches or trick needles where it pokes the patient but retracts rather than breaks the skin.
0Luke_A_Somers10y
Pt 1: I don't know. The core of science is not so very complicated. Empiricism plus skepticism plus math. The hardest part of that is math, and of the three that is the most easily transmitted by book. Of the rest, that's a bit of sociology I can't judge. Lesotho isn't what I'm holding up as 'the most likely source of a rebound in the event of nuclear war' - it's an example of the spread of real science. Pt 2: Sometimes... but even if acupuncture is a really reliable way of inducing a strong placebo effect on people even if they know it 'does nothing', that's useful.
3gwern10y
Then why did it take so long? The tradition of math is the most ancient & universal of the 3 parts you mention. Most regions of the world develop math, sometimes to fairly high levels like in India or China. Is that consistent with it being 'the hardest part'? In contrast, empiricism and skepticism are typically marginal and unpopular on the rare occasions they show up; the Greek Skeptics were one of the more minor traditions, the Carvaka of India were some heretics known from like one surviving text from the early BCs and were never a viable force, and offhand I don't even know of any Chinese philosophical tradition which could reasonably be described as either 'empirical' or 'skeptical'. It's also not what they think they're measuring. Still diseased.
0Luke_A_Somers10y
Because it's psychologically hard and unintuitive, not because it's complicated. Math is complicated and difficult, but it's not psychologically challenging like 'do your best to destroy your own clever explanations and cheer if someone else does'. Acupuncture makes a great example. Here we have folks who are on to something that works. Yay! Case closed. ... except, not. Because they don't have the idea of science, the hard and unintuitive thing that says you should try to find all the times that that thing you rely on doesn't work, they can't find those boundaries.
0gwern9y
...and if science is psychologically hard & unintuitive, all the easier for it to be substituted for something superficially similar but ineffective. And how does that not make science harder than math?
0Luke_A_Somers9y
Skepticism and empiricism are robust ideas, by which I mean there's nothing particularly similar to them. They are also very compact. You can fit them on a post-card. On the other hand, math is this enormous edifice. The 'getting it wrong' that you see all over modern science is a failure, yes, but most of these scientist-failures are failing due to contingent local factors like conflicts of interest and grant proposals and muddy results and competition pressures... they're failing to fulfill the scientific ideal for sure, but it's not because they lack the scientific ideal. They can correctly teach science. If bad scientists were all we had, then science would have bad habits and that would be bad, but it could be solved much more easily than having to redesign the thing without knowing that it was possible, like we did the first time. This is still the case even if all the scientists are under the thumbs of warlords who make them do stupid stuff. The idea is there, the light can spread. Not right away, likely, but we won't need to wait thousands of years for it to re-emerge.

The analogy seems pretty nice. The argument seems to be that, based on the historical record, we're doomed to collective inaction in the face of even extraordinarily dangerous risks. I agree that the case of nukes does provide some evidence for this.

I think you paint things a little too grimly, though. We have done at least a little bit to try to mitigate the risks of this particular technology: there are ongoing efforts to prevent proliferation of nuclear weapons and reduce nuclear stockpiles. And maybe a greater risk really would provoke a more serious response.

Your post prompted me to recall what I read in Military Nanotechnology: Potential Applications and Preventive Arms Control by Jürgen Altmann. It deals mostly with non-molecular nanotech we can expect to see in the next 5-20 years (or already, as it was published in 2006), but it does go over molecular nanotech and it's worth thinking about the commonly mentioned x-risk of a universal molecular assembler in addition to AGI for the elites to handle over the next 70 years.

I think as a small counter to the pessimistic outlook the parable gives, it's worth reme…

[-][anonymous]10y00

every sector benefits from electricity ‘too cheap to meter’

Except that that was never actually in the cards...

Why, did you think it was about something else?

I patternmatched the first half to eugenics.

Well, "impoverished foreign country" doesn't match well to Nazi Germany, but everything else checks out.

0knb10y
I'd be interested to hear how eugenics could kill hundreds of thousands of people in days.

Good story! The bit about Nacirema's rival being 'to the east' and an 'oriental despotism' and 'angry at historical interference in its domestic affairs by Seilla & Nacirema' gave the game away a little early though.

5gwern10y
Really? The wording was supposed to make it look like China. What made you think USSR? I thought relatively few people knew about Western involvement in the Russian Revolution compared to stuff like the Opium Wars etc.
0Larks10y
Ahhh, maybe. I used that, plus the fact that the US probably net aided the Reds in China during their civil war.
0Luke_A_Somers10y
I figured at first that you were blending multiple real-life narratives. But when I saw 'Seilla', that told me 'It's Nukes'. Or rather, 'from now on, it's probably going to be nukes'. After all, very early on you had the passage about hurriedly deciding to use an early prototype outside the lab in an impoverished foreign country, and that's simply not how it went 70 years ago.
4gwern10y
Ah. Hm... I wasn't even referring to the WWII Allies there, I was referring to Western involvement (particularly the USA & Triple Entente) in the Russian Revolution supporting the Whites. Perhaps I could use "Nacirema and Threefold Understanding"?

Yes, it is. The decision was indeed hurried. They were early prototypes. The use was outside the lab. The country was indeed a foreign country. The country was impoverished by our standards, and impoverished by the standards of the day even before, and definitely impoverished after, years of embargo, war, fire-bombing, wasted military expenditures, and destruction of the labor force. The Axis countries were poor even before the war. (If you are under the mistaken impression that Germany, for example, was 'rich' or 'heavily industrialized' or even comparable to England or America per capita, I suggest you read Tooze's The Wages of Destruction.)
0Luke_A_Somers10y
One teeensy problem. The technology you are referring to here is not the same technology that had been alluded to in the previous paragraph.
1gwern10y
The research that went into the bomb helped a lot with reactors. They're both nuclear fission.
[-][anonymous]10y-10

Cute, except nuclear technology and AGI could not be more different.

4gwern10y
Feel free to elaborate on that. As I point out in this parable, nuclear tech looks eerily like slow takeoff and arms race scenarios once you delete the names, and elites failed to deal with it in any meaningful way other than accelerating the arms race & hoping we'd all survive.
3[anonymous]10y
Well, let's see:

1) AGI doesn't require obscure, hard-to-process materials that can be physically controlled.

2) AGI is software and therefore trivially copyable -- you can have the design for a nuclear bomb and the materials, but still need lots of specialists with experience in constructing nuclear bombs in order to build one. An AGI, on the other hand, could be built to run on commodity hardware.

3) AGI is merely instrumental to weaponization in the high-probability risk scenarios. It's a low-cost Manhattan Project in a box. A pariah country would use an AGI to direct their nuclear bomb project, for example, not actually "deploy an AGI in the field", whatever that means. So there's a meta-level difference here: whereas peaceful nuclear technology actually generates weapons-grade material as a side-product, AGI itself doesn't carry that risk.

4) It's hard to analyze what the exact risk is you are predicating this story on. What was the slow-takeoff failure that cost hundreds of thousands of people their lives? It's hard to criticize specifically what you had in mind without knowing what you had in mind, other than that it involved a human-hostile, confrontational hard takeoff in a "properly" regulated project. As a general category of failures, I assign very little probability mass there.

5) I would argue that the first AGI doesn't require a Manhattan-scale project to construct, although I recognize that is a controversial opinion.
7gwern10y
Yes, it does: it requires obscene amounts of computing power, which require enormous, extremely efficient, multi-billion-dollar chip fabs to create, each of which currently costs more than the entire Manhattan Project did and draws upon exotic materials & specialties; see my discussion in http://www.gwern.net/Slowing%20Moore%27s%20Law You also need a lot of specialists to run a supercomputer. Amusingly, supercomputers have always been closely linked to nuclear bomb development, from the Manhattan Project to the national laboratories like Livermore.

Only some nuclear technologies are inherent proliferation risks. Specific kinds of reactors, sure. But lots of other things like cesium for medicine? No way.

Dead is dead. Does it matter if you're dead because an AGI hacked your local dam and drowned you, or piloted a drone, or developed a nuclear bomb? Any AGI is going to carry the risk of being misapplied in all the ways that humans have done harm with their general intelligence throughout history. What are you going to do, install DRM on it?

The slow takeoff was the development of atomic bombs from custom low-kilotonnage bombs which weighed tons and could only be delivered by slow, vulnerable heavy bombers deployed near the target, to mass-produced lightweight megatonnage warheads which could be fit on ICBMs and represented a global threat with no defense. I thought this was straightforward; was I assuming too much knowledge of the Cold War and nuclear politics when I wrote it?

Maybe, maybe not. If it didn't, I think that would support my thesis, by implying that an AGI arms race could be much faster and more volatile than the nuclear arms race was.
2James_Miller10y
I would argue that the high tech world is, mostly unwittingly, currently undertaking a much bigger than Manhattan-scale project to construct AGI. Think of all the resources going into making computers smarter, faster, and cheaper. I don't believe that the Internet is going to wake up and automatically become an AGI, but markets are strongly pushing tech companies towards creating the hardware likely necessary for AGI.
1 [anonymous] 10y
It was very straightforward and transparent. But it was supposed to be an allegory, right? So what's the analog in the AGI interpretation? My point is that this isn't an arms race. The whole Cold War concept doesn't make sense for AGI.
4 gwern 10y
The analog would be an early buggy AGI which is not particularly powerful and is slow, and which it & its developers improve over a few years. (This is different from the hard takeoff scenario, which suggests the AGI improves rapidly at an exponential rate due to the recursiveness of the improvements.) How would it not be an arms race?
1 [anonymous] 10y
How does that lead to hundreds of thousands dying in some impoverished foreign country? Gwern, it's your argument. The onus is on you to show there is any parallel at all. You've asserted there is. Why?
4 gwern 10y
Huh? That was what happened with the first use of nuclear bombs; it's not necessarily what will happen with AGI. We should be so lucky! I think you aren't understanding my point here with the parable. I thought it was clear in the middle, but to repeat myself...

Even with nuclear bombs, which are as textbook a case of x-risk as you could ever hope to find, with as well-established physics endorsed by brainy specialists as possible, with hundreds of thousands of dead bodies due to an early weak version to underscore for even the most moronic possible politician 'yes, this is very real and these weapons are really fucking dangerous' as a 'sputnik moment', politicians still did not take meaningful preventive action. Hence, since AGI will on every dimension be a less clean, simple case (harder to understand, harder to predict the power of, less likely to present a clear signal of danger in time to be useful, more useful in civilian applications) than nuclear weapons were, a fortiori, politicians will not take meaningful preventive action about AGI. Political elites failed an easy x-risk test, and so it is unlikely they will pass a harder x-risk test. This is in direct contrast to what lukeprog seems to believe, and you'll note I allude to his previous posts about how well he thinks elites dealt with past issues.

No, I don't expect the early AGI prototypes to tip their hand and conveniently warn us like that. Life is not a Hollywood movie where the Evil AI character conveniently slaughters a town and then sits around patiently waiting for the heroes to defeat it. I expect AGI either to not be particularly powerful/dangerous & our concerns entirely groundless, or to not look like a major problem until it's too late.

Why do you think there won't be any arms race? If AGIs are militarily powerful and increase in power, that sets up the conditions for an arms race: countries will need to acquire and develop AGI merely to maintain parity, which in turn encourages further development.
0 [anonymous] 10y
It's only obvious to you, apparently. I don't believe AGI will be militarily useful, at least not more so than any other technology. Nor do I believe that AGI will be developed on a long enough time scale for an "arms race". Nor do I think politicians will be involved at all.
0 gwern 10y
Other technologies have sparked arms races, so that seems like an odd position to take. If you're a 'fast takeoff' proponent, I suppose the parallels to nukes aren't of much value and you don't care whether politicians would handle a slow takeoff well or poorly. I don't find fast takeoffs all that plausible, so these are relevant matters to me and to many other people interested in AI safety.
0 [anonymous] 10y
Eh... timescales are relative here. Typically when someone around here says "fast takeoff" I assume they mean something along the lines of That Alien Message -- hard takeoff on the order of a literal blink of an eye, which is pure sci-fi bunk. But I find the other extreme parroted by Luke Muehlhauser and Stuart Armstrong and others -- 50 to 100 years -- equally bogus. From the weak inside view, my best predictions put the entire project on the order of 1-2 decades, and the critical "takeoff" period measured in months or a few years, depending on the underlying architecture. That's not what most people around here mean by a "fast takeoff", but it is still too fast for meaningful political reaction.
0 drethelin 10y
Chernobyl.
0 [anonymous] 10y
I'm asking about AGI technology...