Near-Term Risk: Killer Robots a Threat to Freedom and Democracy

10 Epiphany 14 June 2013 06:28AM

A new TED talk video just came out by Daniel Suarez, author of Daemon, explaining how autonomous combat drones with a capability called "lethal autonomy" pose a threat to democracy.  Lethal autonomy is what it sounds like - the ability of a robot to kill a human without requiring a human to make the decision.

He explains that a human decision-maker is not a necessity for combat drones to function.  This has potentially catastrophic consequences, as it would allow a small number of people to concentrate a very large amount of power, undermining the checks and balances both between governments and their people and between the different branches of government.  According to Suarez, about 70 countries have begun developing remotely piloted drones (like Predator drones), the precursor to killer robots with lethal autonomy.

Daniel Suarez: The kill decision shouldn't belong to a robot

One thing he didn't mention in this video is that there's a difference in obedience between human soldiers and combat drones.  Drones are completely obedient, but humans can revolt.  Because they can rebel, human soldiers present an obstacle that limits the power a would-be tyrant could otherwise obtain.  Drones provide no such protection.  Obviously, relying on human decision-making is not a perfect safeguard: someone like Hitler can convince people to make poor ethical choices - but they still need to be convinced, and that requirement may play a major role in protecting us.  Consider this - it's unthinkable that today's American soldiers might suddenly decide this evening to follow a tyrannical leader whose goal is total power and the murder of all who oppose him.  It is not at all unthinkable, however, that the same tyrant, if empowered by an army of combat drones, could launch such an attack without risking a mutiny.  There is almost no limit to the number and variety of power grabs a tyrant with a sufficiently powerful robot army could get away with.

Something else he didn't mention is that because we can optimize technologies more easily than we can optimize humans, it may be possible to field killer robots faster and more cheaply than armies of human soldiers.  Soldiers require salaries and benefits, plus roughly eighteen years of human development before they can even enlist, so an overwhelmingly large army of killer robots could plausibly be built more quickly than a human army and with fewer resources.

Suarez's solution is to push for legislation making it illegal to produce robots with lethal autonomy.  This approach has obvious pros and cons.  Another approach (explored in Daemon) is that if the people have 3-D printers, they may be able to produce comparable weapons, which would then check and balance their government's power.  This approach has pros and cons as well.  I came up with a third method which is here.  I think it's better than the alternatives, but I would like more feedback.

As far as I know, no organization, not even MIRI (I checked), is dedicated to preventing the potential political disasters caused by near-term tool AI (MIRI is interested in the existential risks posed by AGI).  That means it's up to us - the people - to develop our understanding of this subject and spread the word to others.  Of all the forums on the internet, LessWrong is one of the most knowledgeable about artificial intelligence, so it's a logical place to start a discussion on this.  I searched LessWrong for terms like "checks and balances" and "Daemon" and found no evidence that we've had a group discussion on this issue.  I'm starting by proposing and exploring some possible solutions to this problem and the pros and cons of each.

To keep things organized, let's put each potential solution, pro, and con into a separate comment.

9/11 as mindkiller

12 NancyLebovitz 12 September 2011 05:44PM

Noah Millman wrote:

In retrospect, what suffered the most lasting damage from the terrorist attacks of ten years ago was my belief in my own rationality. I believed that I was thinking things through seriously, and coming to difficult but true conclusions about what had happened, what would happen, what must happen. Here is part of what I wrote, to friends and family, several days later:
Our President has made it clear: we are at war. I do not anticipate that this will be a short or an easy war. Our enemy has operations in dozens of countries, including this one. He is supported, out of enthusiasm or fear, by many governments among our purported friends as well as among our enemies. He has shown his cunning, his ruthlessness, and most of all his patience, in his successful plot to kill thousands of innocents and bring down the symbols of our civilization. And in striking at him, as we must, we will bring down others who will in turn seek their own vengeance upon us.
There is not a single factual assertion in that paragraph that I had any reason to believe I could substantiate. I did not know anything about the enemy. I had no idea whether or not there were “operations” in dozens of countries – I don’t even know what I meant by “operations.” I know what I was referring to with the business about being “supported” by friends and enemies, but “support” is a deliberately fuzzy word; I wouldn’t have used it if I was trying to make a concrete assertion with clear implications. The purpose of that assertion, like everything else, was to build up my first assertion. We were at war. And it wouldn’t be short or easy. Because that conclusion, though grim, was one that imparted meaning to the murder of 3,000 people. I thought I was being serious – examining the facts, calculating the likely negative consequences of necessary action, preparing myself for the unfortunate necessities of life. But I wasn’t doing anything of the kind. I was engaged in a search for meaning in which reason was purely instrumental.

Link (which includes additional good retrospectives) thanks to Ampersand.

This article may have more political content than is suitable for LW -- if you'd rather discuss it elsewhere, I've linked it at my blog. I've posted about it here because it's an excellent example of updating and of recognizing motivated cognition, even well after the fact.