Edit: this post currently has more downvotes than upvotes. The title, though broad, is uncontroversial: everything has its limits. Please point out flawed reasoning or the need for further clarification in the comments instead of downvoting.
Let's agree that the first step toward AI alignment is to refrain from building intelligent machines that are designed to kill people. Very simple. As a global community, we need to reach complete agreement on this point.
Some will provide arguments in favor of intelligent lethal machines, such as the following:
That intelligent weapons kill with more precision, saving innocent lives.
That intelligent weapons do the most dangerous work, saving soldiers' lives.
Both of the above points are clearly valid. However, they do...
Social, economic, or environmental changes happen relatively slowly, on the scale of months or years, whereas potent weapons can destroy whole cities in a single day. Conventional weapons would therefore be a far more immediate danger if corrupted by an AI. The other problems are important to solve, yes, but first humanity must survive its deadliest creations. The field of cybersecurity will continue to evolve in the coming decades. Hopefully, world militaries can keep up, so that no rogue intelligence gains control of these weapons.