It doesn't.
How do the militarisation of AI and so-called slaughterbots not affect your p(doom) at all? Plus, I mean, we are clearly teaching AI how to kill, giving it more power and direct access to important systems, weapons and information.
(Large scale) robot armies moderately increase my P(doom). And the same for large numbers of robots more generally.
The main mechanism is via making (violent) AI takeover relatively easier. (Though I think there is also a weak positive case for robot armies, in that they might make relatively less smart AIs more useful for defense earlier, which might mean you don't need to build AIs that are as powerful in order to defuse various concerns.)
Usage of AIs in other ways (e.g. targeting) doesn't have much direct effect particularly if these systems are narrow, but might set problematic precedents. It's also some evidence of higher doom, but not in a way where intervening on the variable would reduce doom.
Ehn. Kind of irrelevant to p(doom). War and violent conflict are disturbing, but not all that much more so with tool-level AI.
Especially in conflicts where the "victims" aren't particularly peaceful themselves, it's hard to see AI as anything but targeting assistance, which may reduce indiscriminate/large-scale killing.
I'm being heavily downvoted here, but what exactly did I say wrong? In fact, I believe I said nothing wrong.
It does worsen the situation, with Israeli military forces mass murdering Palestinian civilians based on AI decisions and operators just rubber-stamping the actions.
Here is the +972 Magazine report: https://www.972mag.com/lavender-ai-israeli-army-gaza/
I highly advise reading it, as it goes into more detail about how the system actually works internally.
I basically agree with John Wentworth here that it affects p(doom) not at all, but one thing I will say is that it makes claims that humans will make the decisions/be accountable once AI gets very useful rather hard to credit.
More generally, one takeaway I see from the military's use of AI is that there are strong pressures to let these systems operate on their own, and this is going to be surprisingly important in the future.
Personally, I have gradually moved to seeing this as lowering my p(doom). I think humanity's best chance is to politically coordinate to globally enforce strict AI regulation. I think the most likely route to this becoming politically feasible is through empirical demonstrations of the danger of AI. I think AI is more likely to be legibly empirically dangerous to political decision-makers if it is used in the military. Thus, I think military AI is, counter-intuitively, lowering p(doom). A big accident in which military AI killed thousands of innocent people the military had not intended to kill could do a lot to lower p(doom).
This is a sad thing to think, obviously. I'm hopeful we can come up with harmless demonstrations of the dangers involved, so that political action will be taken without anyone needing to be killed.
In scenarios where AI becomes powerful enough to present an extinction risk to humanity, I don't expect the level of robotic weaponry it controls to matter much. It will have many, many opportunities to hurt humanity that look nothing like armed robots and greatly exceed the power of armed robots.
While military robots might be bad for other reasons, I don't really see the path from this to doom. If AI-powered weaponry doesn't work as expected, it might kill some people, but it can't repair or replicate itself or make long-term plans, so it's not really an extinction risk.
AI-powered weaponry can always be hacked or modified, perhaps even talked to; all of this means it can be used in more than one way. You can't hack a bullet, but you can hack an AI-powered ship. So individually these systems might not be dangerous, but they don't exist in isolation.
Also, militarisation of AI might create systems that are designed to be dangerous, amoral and without any proper oversight. This opens us up to a flood of potential dangers, some of which are hard to even predict now.
I haven't personally heard many recent discussions about it, which is strange considering that startups like Anduril and Palantir are developing systems for military use, OpenAI recently deleted a clause prohibiting the use of its products in the military sector, and the government sector is also working on AI-piloted drones, rockets, information systems (hello, Skynet and AM), etc.
And the most recent and perhaps most chilling use of it comes from Israel's invasion of Gaza, where the Israeli army has marked tens of thousands of Gazans as suspects for assassination, using the Lavender AI targeting system with little human oversight and a permissive policy for casualties.
So how does all of this affect your p(doom)? What are your general thoughts on it, and how do we counter it?
Relevant links:
https://www.972mag.com/lavender-ai-israeli-army-gaza/
https://www.wired.com/story/anduril-roadrunner-drone/
https://www.bloomberg.com/news/articles/2024-01-10/palantir-supplying-israel-with-new-tools-since-hamas-war-started