It's nice that the Less Wrong hoi polloi get to comment on a strategy document with such an elite origin. Coauthors include Eric Schmidt, who may have been the most influential elite thinker on AI in the Biden years, and xAI's safety advisor @Dan H, who can't be too far removed from David Sacks, Trump's AI czar. That covers both sides of American politics; the third author, Alexandr Wang, is also American, but he's Chinese-American, so it is as if we're trying to cover all the geopolitical factions that have a say in the AI race.
However, the premises of the document are simply wrong (in my opinion). Section 3.4 gives us the big picture, listing four strategies for dealing with the rise of superintelligence: the Hands Off Strategy, the Moratorium Strategy, the Monopoly Strategy, and the Multipolar Strategy, the last being the one argued for in this paper. The Multipolar Strategy combines mutual assured destruction (MAD) between Chinese and American AI systems with a consensus to prevent proliferation of AI technology to other actors such as terrorists.
I get that this is hardheaded geostrategic thinking. It is a genuine advance on that front. But the rise of superintelligence means the end of human rule on Earth, no matter who makes it. The world will be governed either by a system of entirely nonhuman AIs, or by AI-human hybrids in which the AI part must necessarily dominate, if they are to keep up with the "intelligence recursion" mentioned by the paper.
Section 4.1 goes into more detail. A US or Chinese bid for dominance is described as unstable, because eventually you will get a cyber war in which the AI infrastructure of both sides is destroyed. A mutual moratorium is also described as unstable, because either side could defect at any time. The paper claims that the most stable situation, which is also the default, is one in which the mutually destructive cyber war is possible, but neither side initiates it.
This is a new insight for me - the idea of cyber war targeting AI infrastructure. It's a step up in sophistication from "air strikes against data centers". And at least cyber-MAD is far less destructive than nuclear MAD. I am willing to suppose that cyber-MAD already exists, and that this paper is an attempt to embed the rise of AI into that framework.
But even cyber-MAD is unstable, because of AI takeover. The inevitable winner of an AI race between China and America is not China or America, it's just some AI. So I definitely appreciate the clarification of interstate relations in this penultimate stage of the AI race. But I still see no alternative to trying to solve the problem of "superalignment", and for me that means making superintelligent AI that is ethical and human-friendly even when completely autonomous - and doing that research in public, where all the AI labs can draw on it.
Comments on a few aspects:
Regardless of what one thinks of your philosophy, I find your psychology ontologically interesting. It is interesting that a mind can end up in such a state, with such beliefs. Nonetheless, I believe in time and persistent personhood, as do most people, so most of humanity lacks the specific psychological impetus towards your philosophy that comes from timelessness and depersonalization.
E.g. does it give the Rawlsian veil of ignorance an extra concreteness for you? It's not just that you could have been any other person; there's no persistent personhood which connects your actual person-moment more strongly to any others in an intrinsic way. On the level of personal identity (as opposed to causality), you are equally connected or disconnected from all other moments of consciousness.
Contrast that with a belief that time and change are real, and that one is a specific person persisting in time through a single stream of consciousness and unconsciousness... Under such circumstances, even if one regards oneself as a random sample from among all possible persons, you nonetheless know you're a particular person in a very specific circumstance, and all that you will ever experience personally is the future of that person. This easily leads to value systems in which one is only or primarily concerned with oneself.
On the other hand, I do have some logical sympathy for the thought of myself as a randomly sampled possible person. But it raises the question of how to deal with aspects of one's situation that are very rare. Lots of descendants for humanity would imply we are unusually early in time; the alternative is the doomsday argument, to which I have no intellectual objections, so that's OK. But what about just being conscious, and conscious of even this much of reality?
Whether or not you're a panpsychist, beings with even an average human's level of intelligence and consciousness ought to be ultra-rare in a universe where the vast majority of the entities are atoms and elementary particles drifting in space. So I can regard myself as a randomly sampled being from such a universe that happened to win the existential lottery; or I could reject that model of the universe, in favor of one where the average entity has my degree of mental complexity. That can lead in a variety of exotic directions, but none of them has ever felt like an obviously correct replacement for my usual belief; so by default I lapse back into thinking of myself as a complex conscious being in a universe full of much simpler beings, leaving this problem of typicality unresolved.
The problem I have with such speculative analyses is that standard psychiatric categories have such an intense bias towards a certain notion of normalcy. Any behavior or ideation that is out of the ordinary in any way can become evidence of a disorder. If someone were actually having an intense period of inspired achievement accompanied by passionate emotions, wouldn't it still register as "mania" or "hypomania"? In such a case, I might prefer a school of thought like Kazimierz Dabrowski's - at least it acknowledges that there is such a thing as high achievement, with its own associated positive psychology.
In Elon's case, if I were trying to understand his state of mind, I would start by looking for precedents in his business career for the situation he currently faces. Perhaps with DOGE and the government audit, it's a bit like when he first took over Twitter and didn't have a new system in place. Perhaps the bromance with Trump resembles the times he partnered with Peter Thiel... Anyway, if pop psychiatry starts becoming a drag, they can always have RFK Jr declare it to be a politicized pseudoscience!