I've been reading through the Superintelligence Strategy paper by Dan Hendrycks, Eric Schmidt, and Alexandr Wang. To me, it sounds like the authors are calling the current regime (MAIM, aka a "Hot War") the "default" (which it probably is, tbh), while also calling a peaceful, diplomatic moratorium strategy "aspirational, but not a viable plan"? E.g.
MAIM Is the Default Regime: ... "Espionage, sabotage, blackmail, hackers, overt cyberattacks, targeting nearby power plants, kinetic attacks, threatening non-AI assets"... -- Superintelligence Strategy
This sounds like a violent "Hot War"?
Moratorium Strategy: "proposes halting AI development—either immediately or once certain hazardous capabilities, such as hacking or autonomous operations, are detected… aspirational, but not a viable plan" -- Superintelligence Strategy
This sounds like a non-violent, peaceful, diplomatic, treaty-based solution… aka a "Cold War"?
Is it just me or, as major thought leaders in the AI/AGI/ASI space, shouldn't the authors of this paper:
Realize that the current paradigm leads to a "Hot War", even if their recommended "solutions" are adopted.
Then actually, strongly, advocate for a diplomatic and peaceful "Cold War" paradigm? E.g. planning to completely pause once experts agree that the risk of extinction exceeds "three in a million (a “6σ” threshold)—anything higher was too risky". That's a threshold most AI researchers would likely agree we have flown way past by now (e.g. a 5–20% p(doom) is more common?)
Instead of (strongly) advocating for a diplomatic and peaceful solution, they are just calling the Moratorium/Pause strategy "aspirational, but not viable"?
The paper's MAIM (Mutual Assured AI Malfunction) framework suggests stability is maintained through the threat of mutually disabling AI systems. However, the proposed "solutions" for making MAIM "more stable" also seem pretty scary/destructive. E.g.
“How to Maintain a MAIM Regime:
… MAIM requires that destabilizing AI capabilities be restricted to rational actors… … states could improve their ability to maim destabilizing AI projects with cyberattacks…
(The “solutions” also sound very lucrative for certain companies/”industrial complexes” too. But, I’m sure that’s just a coincidence?)
This paper feels like a complete dig at any diplomatic and peaceful solution? I’m concerned many (many) others could also read this paper and agree that a diplomatic and peaceful solution is too "aspirational" to be worth pursuing?
I was concerned that I was "missing the point", so I also watched this follow-up interview with Dan Hendrycks on the Nonzero podcast, specifically about MAIM.
And, well, that interview didn't clear anything up for me… E.g.
"I don't view this paper as very hawkish there. We're basically making fun of, in some ways, attempts to try and take over the world by building a superintelligence..." -- Dan Hendrycks https://youtu.be/O9P-fjSzJzs?t=2466
(Wha?! Did he just say in this paper "we're making fun of trying to take over the world with superintelligence?")
"As for regime change. Yeah I'm not advocating for trying to topple China... I think that the main route through which you would do that would be through building a super intelligence. But, that itself is extraordinarily risky." -- Dan Hendrycks https://youtu.be/O9P-fjSzJzs?t=3049
"Back in 2023 Mustafa Suleyman and Ian Bremmer published a piece in, I think, Foreign Affairs... the natural way to do this is that the US and China get together and collectively coerce/encourage other countries to comply..." -- Robert Wright https://youtu.be/O9P-fjSzJzs?t=3115
(Diplomacy? What a good idea!)
"I'm not pushing to crush one of these super powers..." -- Dan Hendrycks
"I think the policies you favor are doing that, but maybe I'm wrong?" -- Robert Wright
"If one [super power] has super-intelligence and no one else does, that would be just extraordinarily destabilizing." -- Dan Hendrycks https://youtu.be/O9P-fjSzJzs?t=3408
So, what is Dan advocating for here exactly?
Lift the chip restrictions on China, so the US isn't the only super power with ASI?
The US and China should work together to control the chips (that China doesn't have access to)?
The chips should be available to more than one super power, but not China?
Don't get me wrong, this whole situation is a huge catch-22. But, please be clear and consistent with your message. E.g.
If these chips are too dangerous for China (and other actors to have/use) then maybe they are too dangerous for the US to use as well?
Is working toward a diplomatic, treaty-based solution after "certain hazardous capabilities" appear worth pursuing?
Should we work to have our risk tolerance stay near Compton's threshold (three in a million) rather than in double-digit territory? If we get into this double-digit territory, then should we strongly advocate for a diplomatic and peaceful solution? (Warning: we’re already there…)
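To put those numbers side by side, here's a quick back-of-envelope calculation. The 5–20% p(doom) range is my own assumed illustration of commonly cited estimates, not a figure from the paper:

```python
# Back-of-envelope: how far current extinction-risk estimates sit above
# the "three in a million" Compton threshold. The 5-20% p(doom) range is
# an assumption for illustration, not a figure taken from the paper.
compton_threshold = 3e-6            # three in a million
pdoom_low, pdoom_high = 0.05, 0.20  # assumed range of common p(doom) estimates

for label, p in [("low (5%)", pdoom_low), ("high (20%)", pdoom_high)]:
    print(f"{label}: {p / compton_threshold:,.0f}x over the Compton threshold")
# prints roughly 16,667x and 66,667x
```

Even the optimistic end of that assumed range is four orders of magnitude past the tolerance the paper itself cites.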
One might argue that a moratorium is impossible to enforce, given the strong incentives for individual actors to defect. However, this doesn't negate the need to attempt a diplomatic solution, especially given the existential stakes.
If major thought leaders in the AI/AGI/ASI space, like the authors of this paper, can't get it together to transmit a clear, coherent, peace- and safety-focused message soon, then yeah, we're (still) so doomed?
For those who disagree:
Please help me find where I'm wrong in the comments below...
For those who agree: