It has recently been suggested (by you yourself) that:
...Perhaps a better question would be "If my mission is to save the world from UFAI, should I expend time and resources attempting to determine what stance to take on other causes?" No matter your level of potential to learn multiple subjects, investing that time and energy into FAI would, in theory, result in a better outcome with FAI - though I am becoming increasingly aware of the fact that there are limits to how good I can be with subjects I haven't specialized in and if you think about it, yo
Let's talk actual hardware.
Here's a practical autonomous kill system that is plausibly feasible with current technology: a network of drone helicopters armed with rifles and with sensors that can detect the muzzle flash, sound, and in some cases the projectiles of an AK-47 being fired. (A toy sketch of the sensing math follows the links below.)
Something like this aircraft: http://en.wikipedia.org/wiki/Autonomous_Rotorcraft_Sniper_System
Combined with sensors based on this patent: http://www.google.com/patents/US5686889
http://en.wikipedia.org/wiki/Gunfire_locator
and this one http://ieeexplore.ieee.org/xpl/login.jsp?tp=&...
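For a concrete sense of the sensing side: gunfire locators like those linked above infer a shooter's bearing from the time difference of arrival of the muzzle blast at spaced microphones. Here's a minimal sketch of that idea; the far-field assumption, the mic spacing, and the measured delay are all invented for illustration, not taken from the patent:

```python
# Minimal sketch of acoustic gunfire localization via time difference
# of arrival (TDOA). All numbers are illustrative placeholders.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def bearing_from_pair(mic_a, mic_b, delay_s):
    """Estimate the bearing of a distant sound source from one mic pair.

    Assumes a far-field (plane-wave) source, so the inter-mic delay
    satisfies delay = d * cos(theta) / c, where d is the mic spacing
    and theta is the angle between the mic baseline and the source.
    """
    d = np.linalg.norm(mic_b - mic_a)
    cos_theta = np.clip(SPEED_OF_SOUND * delay_s / d, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Two microphones 0.5 m apart; the muzzle blast arrives 1.0 ms earlier
# at mic A than at mic B (hypothetical measurement).
mic_a = np.array([0.0, 0.0])
mic_b = np.array([0.5, 0.0])
print(bearing_from_pair(mic_a, mic_b, delay_s=1.0e-3))  # ~46.7 degrees
```

A fielded system would fuse many mic pairs, and often the bullet's supersonic shockwave and muzzle-flash optics, to get a position fix rather than a single bearing.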
Calling this "AI risk" seems like a slight abuse of the term. The term "AI risk" as I understand it refers to risks coming from smarter-than-human AI. The risk here isn't that the drones are too smart, it's that they've been given too much power. Even a dumb AI can be dangerous if it's hooked up to nuclear warheads.
(Trigger warning for atrocities of war.)
Human soldiers can revolt against their orders, but they can also decide to commit atrocities beyond their orders. Many of the atrocities of war are specifically human behaviors. A drone may bomb you or shoot you, very effectively, but it is not going to decide to torture you out of boredom, rape you in front of your kids, or cut off your ears for trophies. Some of the worst atrocities of recent wars, from Vietnam to Bosnia to Iraq, have been things that a killer robot simply isn't going to do outside of anthrop...
Consider this: it's unthinkable that today's American soldiers might suddenly decide this evening to follow a tyrannical leader whose goal is to have total power and murder all who oppose him. It is not, however, unthinkable at all that the same tyrant, if empowered by an army of combat drones, could successfully launch such an attack without risking a mutiny.
Yes, this is a problem.
As far as I know, no organization, not even MIRI (I checked), is dedicated to preventing the potential political disasters caused by near-term tool AI
This is the sort of th...
When killer robots are outlawed, only rogue nations will have massive drone armies.
An ideal outcome here would be if counter-drones had an advantage over drones, but it's hard to see how that could obtain, since counter-counter-drones should stand in the same symmetrical position relative to counter-drones. A second-best outcome would be no asymmetric advantage for guerrilla drone warfare, in which case the wealthiest nation clearly wins via numerical drone superiority combined with excellent enemy-drone detection (a toy model of this branch follows below).
...you know, at some point the U.S. military is going to pay someone $10 million to conclude what I just wrote and they're going to get it half-wrong. Sigh.
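To put a toy model behind the "numerical drone superiority" branch: under Lanchester's square law for aimed fire, a force's effective strength scales with the square of its numbers, so a wealth advantage in unit count is very hard to beat with per-unit quality. A sketch, with all counts and kill rates invented:

```python
# Toy attrition model of the "numerical superiority wins" branch, using
# Lanchester's square law for aimed fire. All counts and kill rates are
# invented for illustration; nothing here is an estimate of real drones.

def lanchester(a0, b0, a_rate, b_rate, dt=0.001):
    """Euler-integrate dA/dt = -b_rate*B, dB/dt = -a_rate*A until one side is gone."""
    a, b = float(a0), float(b0)
    while a > 0 and b > 0:
        a, b = a - b_rate * b * dt, b - a_rate * a * dt
    return max(a, 0.0), max(b, 0.0)

# 1000 cheap drones vs. 700 drones that are each 1.5x as effective.
# Under the square law, side A wins iff a_rate * A^2 > b_rate * B^2:
# 1.0 * 1000^2 = 1,000,000 beats 1.5 * 700^2 = 735,000.
print(lanchester(1000, 700, a_rate=1.0, b_rate=1.5))  # ~(515, 0): numbers beat quality here
```

The square-law regime is itself an assumption: it models aimed fire with good target detection. Guerrilla-style engagements are usually closer to Lanchester's linear law, where quality trades off one-for-one with quantity, which is exactly the asymmetry the parent comment is worried about.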
it would allow a small number of people to concentrate a very large amount of power
Possibly a smaller number than with soldiers, but not that small - you still need to deal with logistics, maintenance, programming...
it's unthinkable today that American soldiers might suddenly decide tomorrow to follow a tyrannical leader whose goal is to have total power and murder all opponents. It is not, however, unthinkable at all that the same tyrant, if empowered by an army of combat drones, could successfully launch such an attack without risk of mutiny.
It m...
Possible Solution: Legislation to ban lethal autonomy. (Suggested by Daniel Suarez; please do not confuse his opinion of whether it is likely to work with mine. I am simply listing it here to encourage discussion and debate.)
The barriers to entry to becoming a supervillain are getting lower and lower; soon just about anybody will be able to 3-D print an army of flying killer robots with lethal autonomy.
I think the democracy worries are probably overblown. I'd be more worried about skyrocketing collateral damage.
It seems like a well-publicized, notorious event in which a lethally autonomous robot killed a lot of innocent people would significantly broaden the appeal of friendliness research, and could even lead to disapproval of AI technology, similar to how Chernobyl contributed significantly to the current widespread disapproval of nuclear power.
For people primarily interested in existential UFAI risk, the likelihood of such an event may be a significant factor. Other significant factors are:
National instability leading to a difficult environment in which to do research
National instability leading to reckless AGI research by a group in an attempt to gain an advantage over other groups.
Possible Solution: Using 3-D printers to create self-defense technologies that check and balance power.
Con: Everybody will probably die. This solution magnifies instability in the system: one person who is insane, evil, or careless could potentially create an extinction event, and at the very least could cause mass destruction within a country that would take huge effort to suppress.
Is this like a one-woman topic, complete with discussion? A finished product?
Or perhaps it is merely a different way of formatting a discussion post, with the evident intention of making it easier to organise replies. As an experimental posting style this solution has, shall we say, pros and cons.
Don't there already exist weapons that exhibit the property of "lethal autonomy", namely land mines?
Possible Solution:
This sounds hard to implement because it would require co-operation from a lot of people, but if the alternative is that our technological progress leaves us facing possible extinction (with the 3-D printer solution) or oppression (with the legislation "solution"), that might get most of the world interested in putting in the effort.
Here's how I imagine it could work:
First, everyone concerned forms an alliance. This would have to be a very big alliance all over the world.
The alliance makes distinctions between we
A new TED talk by Daniel Suarez, author of Daemon, just came out, explaining how autonomous combat drones with a capability called "lethal autonomy" pose a threat to democracy. Lethal autonomy is what it sounds like: the ability of a robot to kill a human without requiring a human to make the decision.
He explains that a human decision-maker is not a necessity for combat drones to function. This has potentially catastrophic consequences, as it would allow a small number of people to concentrate a very large amount of power, ruining the checks and balances between governments and their people and between different branches of government. According to Suarez, about 70 countries have begun developing remotely piloted drones (like the Predator), the precursor to killer robots with lethal autonomy.
Daniel Suarez: The kill decision shouldn't belong to a robot
One thing he didn't mention in this video is that there's a difference in obedience between human soldiers and combat drones. Drones are completely obedient, but humans can revolt. Because they can rebel, human soldiers provide an obstacle that limits the power would-be tyrants could otherwise obtain. Drones won't provide this type of protection whatsoever. Obviously, relying on human decision-making is not perfect. Someone like Hitler can manage to convince people to make poor ethical choices, but they still need to be convinced, and that requirement may play a major role in protecting us.

Consider this: it's unthinkable that today's American soldiers might suddenly decide this evening to follow a tyrannical leader whose goal is to have total power and murder all who oppose him. It is not, however, unthinkable at all that the same tyrant, if empowered by an army of combat drones, could successfully launch such an attack without risking a mutiny. The number and variety of power grabs that a tyrant with a sufficiently powerful robot army could get away with is effectively unlimited.
Something else he didn't mention is that because we can optimize technologies more easily than we can optimize humans, it may be possible to produce killer robots faster and more cheaply than we can recruit, train, and pay human soldiers. Considering the salaries and benefits paid to soldiers and the 18-year lead time on human development, an overwhelmingly large army of killer robots could plausibly be built more quickly than a human army and with fewer resources.
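A back-of-envelope version of that comparison, with every figure a made-up placeholder rather than real cost data, just to show the shape of the argument:

```python
# Back-of-envelope: how many soldiers vs. drones a fixed budget fields.
# Every figure below is a made-up placeholder, not real procurement data.

SOLDIER_ANNUAL_COST = 100_000   # assumed: salary, benefits, training, support ($/yr)
YEARS_FIELDED = 4               # assumed length of an enlistment
DRONE_UNIT_COST = 50_000        # assumed per-unit production cost ($)
DRONE_ANNUAL_UPKEEP = 10_000    # assumed maintenance and logistics ($/yr)

def soldiers_affordable(budget):
    return budget // (SOLDIER_ANNUAL_COST * YEARS_FIELDED)

def drones_affordable(budget):
    return budget // (DRONE_UNIT_COST + DRONE_ANNUAL_UPKEEP * YEARS_FIELDED)

budget = 1_000_000_000  # $1B, illustrative
print(soldiers_affordable(budget))  # 2500 soldiers fielded for 4 years
print(drones_affordable(budget))    # 11111 drones over the same period
```

The specific ratio is meaningless; the point is only that drones avoid the recurring personnel costs and the 18-year lead time, whatever the actual unit costs turn out to be.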
Suarez's solution is to push for legislation that makes producing robots with lethal autonomy illegal. There are, obviously, pros and cons to this method. Another method (explored in Daemon) is that if the people have 3-D printers, they may be able to produce comparable weapons, which would then check and balance their government's power. This method has pros and cons as well. I came up with a third method, which is here. I think it's better than the alternatives, but I would like more feedback.
As far as I know, no organization, not even MIRI (I checked), is dedicated to preventing the potential political disasters caused by near-term tool AI (MIRI is interested in the existential risks posed by AGI). That means it's up to us, the people, to develop our understanding of this subject and spread the word to others. Of all the forums on the internet, LessWrong is one of the most knowledgeable when it comes to artificial intelligence, so it's a logical place to fire up a discussion on this. I searched LessWrong for terms like "checks and balances" and "Daemon", and I just don't see evidence that we've had a group discussion on this issue. I'm starting by proposing and exploring some possible solutions to this problem, with some pros and cons of each.
To keep things organized, let's put each potential solution, pro and con into a separate comment.