In a very real sense, we did. The US and allies dictated the terms of the post-WWII world order, then did so again financially when they left the Bretton Woods system and moved the world to fiat currencies, then did so again geopolitically when they dictated terms to post-Soviet Russia in the 1990s. Sure, American dominance became less certain once the Soviets also got the atomic bomb, and the outcome was less clear while the USSR was still in place, but by the 1980s it was inevitable that the Soviets had lost, and in 1991 the Soviet Union fell. It's been another 20 years, and during a large part of that time, the leading hypothesis was that history had ended, with the US and the liberal order as the victor.
Also, the US did consider the possibility of waging a preemptive nuclear war on the USSR to prevent it from getting nukes. (von Neumann advocated for this I think?) If the US was more of a warmonger, they might have done it, and then there would have been a more unambiguous world takeover.
In the late 1940s and early 1950s, nuclear weapons did not provide an overwhelming advantage against conventional forces. Being able to drop dozens of ~kiloton-range fission bombs on eastern European battlefields would have been devastating, but not enough by itself to win a war. Only once you had hundreds of silo-launched ICBMs carrying hydrogen bombs could you have gained a true decisive strategic advantage (DSA).
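The scale gap being gestured at here can be made concrete with a back-of-the-envelope total-yield comparison. The numbers below are illustrative round figures I'm assuming for the sake of the sketch, not historical arsenal data:

```python
# Rough comparison of total deliverable yield, using illustrative
# round numbers (assumptions, NOT historical arsenal data):
# ~50 fission bombs at ~20 kt for the late-1940s US, versus
# ~500 thermonuclear warheads at ~1 Mt for the mid-1960s US.

fission_bombs, fission_yield_kt = 50, 20   # late 1940s (assumed)
icbms, h_bomb_yield_kt = 500, 1000         # mid 1960s (assumed)

early_total_kt = fission_bombs * fission_yield_kt  # 1,000 kt = 1 Mt
later_total_kt = icbms * h_bomb_yield_kt           # 500,000 kt = 500 Mt

print(f"Late-1940s total: {early_total_kt / 1000:.0f} Mt")
print(f"Mid-1960s total:  {later_total_kt / 1000:.0f} Mt")
print(f"Ratio: {later_total_kt / early_total_kt:.0f}x")
```

Under these assumed figures the later arsenal delivers on the order of 500x the total yield, before even accounting for delivery reliability (bombers that can be intercepted vs. ICBMs that can't).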
Perhaps. I don't know much about the yields and so forth at the time, nor about the specific plans if any that were made for nuclear combat.
But I'd speculate that dozens of kiloton range fission bombs would have enabled the US and allies to win a war against the USSR. Perhaps by destroying dozens of cities, perhaps by preventing concentrations of defensive force sufficient to stop an armored thrust.
Maybe we have different definitions of DSA: I was thinking of it in terms of 'resistance is futile' and you can dictate whatever terms you want because you have overwhelming advantage, not that you could eventually after a struggle win a difficult war by forcing your opponent to surrender and accept unfavorable terms.
If, say, the US of 1965 were dumped into post-WW2 Earth, it would have the ability to dictate whatever terms it wanted, because it could launch hundreds of ICBMs at enemy cities at will. If the real US of 1949 had started a war against the Soviets, it probably would have been able to cripple an advance into western Europe, but likely wouldn't have been able to get its bombers through to devastate enough of the Soviet homeland with the few bombs it had.
Remember, the Soviets had just lost a huge percentage of their population and industry in WW2 and kept fighting. The fact that it's at all debatable who would have won if WW3 had started in the late 1940s (see e.g. here) makes me think nuclear weapons weren't, at that time, a DSA producer.
I think the most relevant takeaway is that we did end up with an arsenal of weapons that have now put us, at all times, hours away from nuclear winter by a very reasonable metric of counterfactual possibility.
And while nuclear winter in practice probably wouldn’t be quite an extinction-level event from what I hear, it was still a very counterfactually close possibility that a nuke’s surprisingly runaway chain reaction could have been just a little more runaway.
Participants in this kind of dialogue should come in with a healthy respect for the likelihood that a big extinction risk will become salient when research figures out how to harness a new kind of power.
As much as it maybe ruins the fun for me to just point out the message: the major point of the story was that you weren't supposed to condition on us knowing that nuclear weapons are real, and instead ask whether the Gradualist or Catastrophist's arguments actually make sense given what they knew.
That's the situation I think we're in with Fast AI Takeoff. We're trying to interpret what the existence of general intelligences like humans (the Sun) implies for future progress on ML algorithms (normal explosives), without either a clear underlying theory for what the Sun's power really is, or any direct evidence that there'll be a jump.
That remark about the 'micro-foundational explanation for why the sun looks qualitatively new but really isn't' refers to Richard Ngo's explanation of why humans are so much better than chimps: https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/gf9hhmSvpZfyfS34B#13_1__Alignment_difficulty_debate__Richard_Ngo_s_case
Richard Ngo: You don’t have a specific argument about utility functions and their relationship to AGIs in a precise, technical way. Instead, it’s more like utility functions are like a pointer towards the type of later theory that will give us a much more precise understanding of how to think about intelligence and agency and AGIs pursuing goals and so on. And to Eliezer, it seems like we’ve got a bunch of different handles on what the shape of this larger scale theory might look like, but he can’t really explain it in precise terms. It’s maybe in the same way that for any other scientific theory, before you latch onto it, you can only gesture towards a bunch of different intuitions that you have and be like, “Hey guys, there are these links between them that I can’t make precise or rigorous or formal at this point.”
In my opinion the relevant detail is that we were not able to prevent the Soviets from getting the bomb. It took them all of about 3 years. It'll take China, Russia, open-source hackers, et al. about 18 months max to replicate AGI once it arrives. So much for your decisive strategic advantage.
Nuclear technology is hours away from reducing the value of all human civilization by 10%, and for all we knew that figure could have been 100%. That’s the nuclear threat. I wouldn’t even classify that as a “geopolitical” threat. The fact that Soviet nuclear technology pretty quickly became comparable to US nuclear technology isn’t the most salient fact in the story. The story is that research got really close, and is still really close, to releasing hell, and the door to hell looks generally pretty easy to open.
18 months is more than enough to get a DSA if AGI turns out to be anything like what we fear (that is, something really powerful and difficult to control, probably arriving at such a state fast through an intelligence explosion).
In fact, I'd even argue 18 days might be enough. AI is already beginning to solve protein folding (AlphaFold). If it progresses from there and builds a nanosystem, that's more than enough to get a DSA, aka take over the world. We currently see AIs like MuZero learning in hours what would take a lifetime for a human to learn, so it wouldn't surprise me if an advanced AI solved advanced nanotech in a few days.
Whether the first AGI will be aligned or not is way more concerning. Not because who gets there first isn't also extremely important, but because getting there first is the "easy" part.
I don't really think advanced AI can be compared to atomic bombs. The former is a way more explosive technology, pun intended.
Also the sun has incredibly low power density. This would not let you infer you could release enormous bursts of energy all at once.
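That claim about the sun's low power density is easy to check with a quick calculation, using standard textbook figures (solar luminosity ≈ 3.8e26 W, solar radius ≈ 6.96e8 m):

```python
import math

# Standard solar figures
luminosity_w = 3.828e26  # total power output of the sun, watts
radius_m = 6.96e8        # solar radius, meters

# Average power density over the sun's whole volume
volume_m3 = (4 / 3) * math.pi * radius_m ** 3
power_density = luminosity_w / volume_m3  # watts per cubic meter

print(f"Average solar power density: {power_density:.2f} W/m^3")
# ~0.27 W/m^3 -- famously less than the power density of a resting
# human body (very roughly ~1000 W/m^3), so the averaged figure
# alone gives no hint that enormous bursts of energy are possible.
```

The point stands: an observer reasoning only from the sun's power density would have badly underestimated what the underlying physics allows.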
As this essay is actually being written in a world where nuclear weapons are a thing, it becomes easy to cherry-pick the example of nuclear weapons. I can think of a number of things for which the catastrophist could have made a similar argument in the 19th century and just been wrong, like expecting everyone to get personal jetpacks, or to be able to routinely travel to Mars.
The debate continues on whether anti-matter bombs are possible or pose additional worrying dynamics.
The only good reference I know on that is the Gsponer Fourth Generation book, covering "subcritical fission-burn, magnetic compression, superheavy elements, antimatter, nuclear isomers, metallic hydrogen and superlasers"; the antimatter section discusses uses for things like H-bomb triggers & subcritical micro-nukes. Is there something more recent or better on anti-matter bombs?
Would the appropriate analogy to agents be that humans are a qualitatively different type of agent compared to animals and basic RL agents, and thus we should expect that there will be a fundamental discontinuity between what we have so far, and conscious agents?
In the late 19th century, two researchers meet to discuss their differing views on the existential risk posed by future Uncontrollable Super-Powerful Explosives.