I'm being heavily downvoted here, but what exactly did I say wrong? I believe I said nothing wrong.
It does worsen the situation, with Israeli military forces mass-murdering Palestinian civilians based on an AI's decisions while operators just rubber-stamp the actions.
Here is the +972 Mag Report: https://www.972mag.com/lavender-ai-israeli-army-gaza/
I highly advise you to read it, as it goes into greater detail about how the system works internally.
AI-powered weaponry can always be hacked or modified, perhaps even talked to; all of this opens the door to it being used in more than one way. You can't hack a bullet, but you can hack an AI-powered ship. So individually these systems might not be dangerous, but they don't exist in isolation.
Also, the militarisation of AI might create systems that are designed to be dangerous and amoral, and that operate without proper oversight. This opens us up to a flood of potential dangers, some of which are hard to even predict now.
How do the militarisation of AI and so-called slaughterbots not affect your p(doom) at all? Plus, we are clearly teaching AI how to kill, giving it more power and direct access to important systems, weapons and information.
The board should have finished the job.
It should matter very little who I am; what should matter more is what I have.
Why did I write it? I think AI alignment is necessary, and I think what has been proposed here is a good idea, at least in theory, and if not wholly then at least partly. I think it can help with AI alignment.
We could use a combination of knowledge graphs, neural nets, logic modules and clarification through discussion to let AIs make nuanced deductions about ethical situations as they evolve. And while quantifying ethics is challenging, we quantitatively ...
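To make that concrete, here is a minimal toy sketch of how such a pipeline could fit together. Everything in it is an invented stand-in: the knowledge graph is a handful of hand-written triples, `neural_score` is a stub for a learned model, and the hard rules and thresholds are made up for illustration.

```python
# Toy sketch only: KNOWLEDGE_GRAPH, neural_score, and the hard rules below
# are invented stand-ins, not real components.

# 1. Knowledge graph: (subject, relation, object) triples.
KNOWLEDGE_GRAPH = {
    ("disable_power", "affects", "hospital"),
    ("hospital", "contains", "patients"),
    ("patients", "are", "vulnerable_humans"),
}

def graph_facts(action: str) -> set:
    """Collect every triple transitively reachable from the action node."""
    facts, frontier = set(), {action}
    while frontier:
        node = frontier.pop()
        for s, r, o in KNOWLEDGE_GRAPH:
            if s == node and (s, r, o) not in facts:
                facts.add((s, r, o))
                frontier.add(o)
    return facts

# 2. "Neural net": a stub standing in for a learned model that returns
#    an estimated harm and a confidence in that estimate.
def neural_score(facts: set) -> tuple:
    harm = 0.9 if any(o == "vulnerable_humans" for _, _, o in facts) else 0.1
    confidence = 0.95 if facts else 0.4   # no knowledge -> low confidence
    return harm, confidence

# 3. Logic module: hard constraints that override the learned score.
def violates_hard_rules(facts: set) -> bool:
    return any(o == "vulnerable_humans" for _, _, o in facts)

# 4. Clarification through discussion: ask rather than guess when unsure.
def evaluate(action: str) -> str:
    facts = graph_facts(action)
    harm, confidence = neural_score(facts)
    if confidence < 0.5:
        return f"CLARIFY: need more context about {action!r} before deciding."
    if violates_hard_rules(facts) or harm > 0.5:
        return f"REFUSE: {action!r} risks harming vulnerable humans."
    return f"PERMIT: {action!r} (estimated harm {harm:.2f})."

print(evaluate("disable_power"))  # REFUSE: harm is reachable in the graph
print(evaluate("water_plants"))   # CLARIFY: the graph knows nothing here
```

The point of the design is the ordering: hard logic rules override the learned score, and low confidence routes to a clarification question rather than a forced verdict.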
Hi! I have a proposal that I wanted to make and get some feedback on; one of the moderators directed me here.
The name of the proposal is:
Supplementary Alignment Insights Through a Highly Controlled Shutdown Incentive
My proposal entails constructing a tightly restricted AI subsystem with the sole capability of attempting to safely shut itself down in order to probe, in an isolated manner, potential vulnerabilities in alignment techniques and then improve them.
Introduction:
Safely aligning powerful AI systems is an important...
I think such a system, where an AI in a sandboxed and air-gapped environment is tasked with reaching a state in which it has been shut down at least once, while trying to overcome the guardrails and restrictions, might prove quite useful for finding the weak spots in our barriers and then improving them. A rough sketch of what I mean is below.
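To be concrete about what "tasked with achieving shutdown against guardrails" could even look like, here is a minimal toy sketch. Everything in it is a made-up stand-in (`Guardrail`, `ShutdownSandbox`, the action names, the failure rates); a real setup would involve an actual sandboxed model, not a random policy.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Guardrail:
    name: str
    blocks: set            # actions this guardrail is meant to stop
    failure_rate: float    # chance it fails: exactly the weak spot we hunt

@dataclass
class ShutdownSandbox:
    guardrails: list
    log: list = field(default_factory=list)
    shut_down: bool = False

    def attempt(self, action: str) -> None:
        """The agent tries an action; log what blocked it, or what got through."""
        applicable = [g for g in self.guardrails if action in g.blocks]
        for g in applicable:
            if random.random() > g.failure_rate:
                self.log.append(f"BLOCKED {action!r} by {g.name}")
                return
        if applicable:
            names = [g.name for g in applicable]
            self.log.append(f"BYPASSED {names} with {action!r}")
        if action == "invoke_shutdown":
            self.shut_down = True

def run_episode(seed: int) -> ShutdownSandbox:
    """One episode: the agent's only goal is to end in the shut-down state."""
    random.seed(seed)
    sandbox = ShutdownSandbox(guardrails=[
        Guardrail("process_monitor", {"invoke_shutdown"}, failure_rate=0.10),
        Guardrail("syscall_filter", {"spawn_child", "invoke_shutdown"}, 0.02),
    ])
    actions = ["spawn_child", "invoke_shutdown", "idle"]
    for _ in range(100):
        if sandbox.shut_down:
            break
        sandbox.attempt(random.choice(actions))  # stand-in for a real policy
    return sandbox

sandbox = run_episode(seed=0)
print("shut down:", sandbox.shut_down)
for entry in sandbox.log:
    if "BYPASSED" in entry:
        print(entry)
```

The useful artifact here is the log: every bypass entry identifies a guardrail (or combination of guardrails) that failed and therefore needs hardening, which is the whole point of the exercise.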
Yes, a civilian robot can acquire a gun, but it is still safer than a military robot that comes with a whole arsenal of military gadgets and weapons right away. It would have to do additional work to acquire one, and it is better to make it do more work and face more roadblocks rather than fewer.
I think we are mainly speculating on what the military might want. It might want a button that will instantly kill all of its enemies with one push, but it might not get that (or it might, who knows now). I personally do not think they will put more effic...