Have you read the first two chapters of Thomas Schelling's 1966 Arms and Influence? It's around 50 pages.
The general gist is that if a lot of powerful Americans in the DoD take something seriously, such as preventing nuclear war, then foreign intelligence agencies will be able to hold that thing hostage in order to squeeze policy concessions out of the US.
It's a lot more complicated than that, since miscommunication, corruption, compartmentalization, and infighting all muddy the waters of what things are valued by any given military.
This does seem like an important issue to consider, but my guess is it probably shouldn't be a crux for answering OP's question (or at least, further explanation is needed for why it might be). Putting aside concerns about flawed pursuit of a given goal, it would be surprising if the benefits of caring about a goal were outweighed by second-order harms from competitors extracting concessions.
I bet that's true.
But it doesn't seem sufficient to settle the issue. A world where aligning/slowing AI is a major US priority, one that China sometimes supports in exchange for policy concessions, sounds like a massive improvement over today's world.
The theory of impact here is that there are a lot of policy actions that could slow down AI, but they're bottlenecked on legitimacy. The US military could provide that legitimacy.
They might also help alignment, if the right person is in charge and has a lot of resources. But even if 100% of their alignment research is noise that doesn...
Given that no one's posted a comment in the affirmative yet:
I'd guess that more US national security engagement with AI risk is good. Here's why, in rough order:
I agree that there are risks with communicating AI risk concepts in a way that poisons the well, lacks fidelity, gets distorted, or fails to cross inferential distances, but these seem like things to manage and mitigate rather than give up on. Illustratively, I'd be excited about bureaucrats, analysts, and program managers reading things like The Alignment Problem from a Deep Learning Perspective, Unsolved Problems in ML Safety, or CSET's Key Concepts in AI Safety series; and developing frameworks and triggers to consider whether and when cutting-edge AI systems merit regulatory attention as dual-use and/or high-risk systems a la the nuclear sector. (I include these examples as things that seem directionally good to me off the top of my head, but I'm not claiming they're the most promising things to push on in this space.)
To be honest, I'm just as afraid of aligned AGI as of unaligned AGI. An AGI aligned with the values of the PRC seems like a nightmare. If it's aligned with the US Army, it's merely really bad, and Yudkowsky's dath ilan is not exactly the world I want to live in either...
I don't agree, because a world of misaligned AI is known to be really bad, whereas a world of AI successfully aligned by some opposing faction probably has a lot in common with your own values.
Extreme case: ISIS successfully builds the first aligned AI and locks in its values. This is bad, but it's way better than misaligned AI. ISIS wants to turn the world into an idealized 7th-century Middle East, which is a pretty nice place compared to much of human history. There's still a lot in common with your own values.
Is it possible they already are? I could certainly see AI risks being part of the risk associated with both nuclear and bio threats.
I'm not sure that funding is a limiting factor at this point (others here with direct exposure can answer better). If not, then the budget aspect doesn't matter. What other constraints might DoD involvement help relax?
As I understand it, the recent US semiconductor policy updates—e.g., CHIPS Act, export controls—are unusually extreme, which does seem consistent with the hypothesis that they're starting to take some AI-related threats more seriously. But my guess is that they're mostly worried about more mundane/routine impacts on economic and military affairs, etc., rather than about this being the most significant event since the big bang; perhaps naively, I suspect we'd see more obvious signs if they were worried about the latter, a la physics departments clearing out during the Manhattan Project.
Meant as a neutral question. I'm not sure whether this would be good or bad on net:
Suppose key elements of the US military took x-risk from misaligned strong AI very seriously. Specifically, I mean:
Why this would be good:
Why this would be bad: