How does the ever-increasing use of AI in the military for the direct purpose of murdering people affect your p(doom)?
I haven't personally heard much recent discussion about this, which is strange considering that startups like Anduril and Palantir are developing systems for military use, OpenAI recently removed a clause prohibiting the use of its products in the military sector, and the government sector is also working on AI-piloted drones, rockets, information systems (hello, Skynet and AM), etc. The most recent and perhaps most chilling use comes from Israel's invasion of Gaza, where the Israeli army has marked tens of thousands of Gazans as suspects for assassination using the Lavender AI targeting system, with little human oversight and a permissive policy toward casualties. So how does all of this affect your p(doom), what are your general thoughts on it, and how do we counter it?
Relevant links:
https://www.972mag.com/lavender-ai-israeli-army-gaza/
https://www.wired.com/story/anduril-roadrunner-drone/
https://www.bloomberg.com/news/articles/2024-01-10/palantir-supplying-israel-with-new-tools-since-hamas-war-started
Yes, a civilian robot can acquire a gun, but it is still safer than a military robot that already comes with a whole arsenal of military gadgets and weapons. The civilian robot would have to do additional work to acquire one, and it is better to have it face more work and more roadblocks rather than fewer.
I think we are mainly speculating about what the military might want. It might want a button that instantly kills all of its enemies with one push, but it might not get that (or it might, who knows now). I personally do not think they will put a more efficient AI (efficient at murdering humans) below...