A new TED talk by Daniel Suarez, author of Daemon, just came out, explaining how autonomous combat drones with a capability called "lethal autonomy" pose a threat to democracy.  Lethal autonomy is what it sounds like - the ability of a robot to kill a human without requiring a human to make the decision.

He explains that a human decision-maker is not a necessity for combat drones to function.  This has potentially catastrophic consequences: it would allow a small number of people to concentrate a very large amount of power, undermining the checks and balances between governments and their people and between different branches of government.  According to Suarez, about 70 countries have begun developing remotely piloted drones (like Predator drones), the precursor to killer robots with lethal autonomy.

Daniel Suarez: The kill decision shouldn't belong to a robot

One thing he didn't mention in this video is that there's a difference in obedience between human soldiers and combat drones.  Drones are completely obedient, but humans can revolt.  Because they can rebel, human soldiers provide some obstacles to limit the power that would-be tyrants could otherwise obtain.  Drones won't provide this type of protection whatsoever.  Obviously, relying on human decision making is not perfect.  Someone like Hitler can manage to convince people to make poor ethical choices - but still, they need to be convinced, and that requirement may play a major role in protecting us.  Consider this - it's unthinkable that today's American soldiers might suddenly decide this evening to follow a tyrannical leader whose goal is to have total power and murder all who oppose.  It is not, however, unthinkable at all that the same tyrant, if empowered by an army of combat drones, could successfully launch such an attack without risking a mutiny.  There is no limit to the number and variety of power grabs a tyrant backed by a sufficiently powerful robot army could get away with.

Something else he didn't mention is that because we can optimize technologies more easily than we can optimize humans, it may be possible to produce killer robots in less time, and at less expense, than it takes to recruit, train, and pay armies of human soldiers.  Considering the salaries and benefits paid to soldiers and the 18-year lead time on human development, an overwhelmingly large army of killer robots might be built more quickly than a human army and with fewer resources.
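
To make the comparison concrete, here is a back-of-envelope sketch in Python. Every figure in it is a made-up assumption for illustration (soldier pay, training cost, drone unit cost, lead times), so the output proves nothing by itself - it only shows which variables the argument turns on.

```python
# Back-of-envelope comparison: cost and lead time to field one human soldier
# vs. one autonomous combat drone. All figures are illustrative assumptions.

SOLDIER_ANNUAL_COST = 100_000    # assumed salary + benefits + support, USD/year
SOLDIER_TRAINING_COST = 75_000   # assumed one-time recruitment/training, USD
SERVICE_YEARS = 4                # assumed length of one enlistment
HUMAN_LEAD_TIME_YEARS = 18       # time to "grow" a new soldier from birth

DRONE_UNIT_COST = 500_000        # assumed cost to build one combat drone, USD
DRONE_ANNUAL_UPKEEP = 50_000     # assumed maintenance per drone per year
DRONE_LEAD_TIME_YEARS = 1        # assumed time to tool up and manufacture

def soldier_cost(years=SERVICE_YEARS):
    return SOLDIER_TRAINING_COST + SOLDIER_ANNUAL_COST * years

def drone_cost(years=SERVICE_YEARS):
    return DRONE_UNIT_COST + DRONE_ANNUAL_UPKEEP * years

print(f"one soldier, {SERVICE_YEARS} years: ${soldier_cost():,} (lead time {HUMAN_LEAD_TIME_YEARS} years)")
print(f"one drone,   {SERVICE_YEARS} years: ${drone_cost():,} (lead time {DRONE_LEAD_TIME_YEARS} year)")
```

With these particular made-up numbers the drone actually costs more per unit, and the conclusion flips if the assumed unit cost drops; the lead-time asymmetry (18 years versus however long a factory takes to tool up) is the part of the argument least sensitive to the assumptions.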

Suarez's solution is to push for legislation that makes producing robots with lethal autonomy illegal.  There are, obviously, pros and cons to this method.  Another method (explored in Daemon) is for the people to use 3-D printers to produce comparable weapons of their own, which would then check and balance their government's power.  This method has pros and cons as well.  I came up with a third method, which is here.  I think it's better than the alternatives, but I would like more feedback.

As far as I know, no organization, not even MIRI (I checked), is dedicated to preventing the potential political disasters caused by near-term tool AI (MIRI is interested in the existential risks posed by AGI).  That means it's up to us - the people - to develop our understanding of this subject and spread the word to others.  Of all the forums on the internet, LessWrong is one of the most knowledgeable when it comes to artificial intelligence, so it's a logical place to fire up a discussion on this.  I searched LessWrong for terms like "checks and balances" and "Daemon" and found no evidence that we've had a group discussion of this issue.  I'm starting by proposing and exploring some possible solutions to this problem and some pros and cons of each.

To keep things organized, let's put each potential solution, pro and con into a separate comment.

105 comments

It has recently been suggested (by yourself) that:

Perhaps a better question would be "If my mission is to save the world from UFAI, should I expend time and resources attempting to determine what stance to take on other causes?" No matter your level of potential to learn multiple subjects, investing that time and energy into FAI would, in theory, result in a better outcome with FAI - though I am becoming increasingly aware of the fact that there are limits to how good I can be with subjects I haven't specialized in and if you think about it, yo

... (read more)
0Epiphany11y
I do make that mistake sometimes, however, this is not one of those times:

* A. Whether I am knowledgeable here isn't very important (as opposed to the context in which I wrote that comment). I am not even advising people to agree on a particular strategy, I am spreading the word and getting them to think about it. Even if I tried to advise them, I don't expect LessWrong would take my ideas at face value and blindly follow them. In this case, evaluating and expressing opinions on this subject serves the purpose of getting people to think. Getting people to think is important in this case because this particular problem is likely to require that a large number of people get involved in their own fate. They're the ones that currently provide the checks and balances on government power. If they simply let the powerful decide amongst themselves, they may find that the powerful choose to maximize their power. Unfortunately, I don't currently know of anyone who is qualified and trustworthy enough to advise them on what's likely to happen and which method is likely to succeed, but at least stirring up debate and discussion will get them thinking about this. The more people think about it now, the more likely they are to have a decently well informed opinion and make functional choices later on. My knowledge level is adequate for this particular purpose.
* B. Why should I specifically do this? Several reasons, actually:
  * Nobody else is currently doing it for us: There are no parties of sufficient size that I know of who are taking responsibility for spreading the word on this to make sure that a critical mass is reached. I've scoured the internet and not found a group dedicated to this. The closest we have, to my knowledge, is Suarez. Suarez is an author, and he seems bright and dedicated to spreading the word. I'm sure he's done research and put thought into this, and he is getting attention, but he's not enough. This cause needs an effort much larger
0wedrifid11y
I perceive plenty of risks regarding future military technology that are likely to result in the loss of life and liberty. People with power no longer requiring the approval (or insufficient disapproval) of other human participants to maintain their power is among the dangers. Increased ease of creating extremely destructive weapons (including killer robots) without large scale enterprise (eg. with the 3D printers you mentioned) is another.

This issue is not one I expect to have any influence over. This is a high stakes game. A national security issue and an individual 'right to bear arms' issue rolled into one. It is also the kind of game where belief in doomsday predictions is enough to make people (or even a cause) lose credibility. To whatever extent my actions could have an influence at all I have no particular confidence that it would be in a desirable direction. Evangelism is not my thing. Even if it was, this wouldn't be the cause I chose to champion.
0Epiphany11y
I don't expect to have a large influence over it, but for a small investment, I make a small difference. You said once yourself that if your life could make even a minuscule difference to the probability that humanity survives, it would be worth it. And if a 1/4,204,800 sized fraction of my life makes a 0.000000001% difference in the chance that humanity doesn't lose democracy, that's worth it to me. Looking at it that way, does my behavior make sense?

Ok. I feel like you should be saying that to yourself - you're the one who said you thought the 3-D printer idea would result in everyone dying. I think the worst thing I said is that killer robots are a threat to democracy. Did you find something in my writing that you pattern matched to "doomsday prediction"? If so, I will need an example.

Spending 1/4,204,800 of my life to spread the word about something is best categorized as "doing my part" not "championing a cause". Like I said in my last comment: "I have no intentions of dedicating my life to this issue." After considering the amount of time I spent on this and the clear statement of my intentions (or lack of intentions), do you agree that I was never trying to champion this cause and was simply doing my part, wedrifid?
0wedrifid11y
I suggested that Eliezer's analysis of economic growth and FAI is more relevant to Eliezer (in terms of his expertise, influence and comparative advantage) than military robot politics is to all of us (on each of the same metrics). To resolve the ambiguity there, I do not take the position that talk of robot killers is completely worthless. Instead I take the position that Eliezer spending a day or so analysing economic growth impacts on his life's work is entirely sensible. So instead of criticising your behavior I am criticising your criticism of another behaviour that is somewhat similar.

I perceive a difference between the social consequences of replying with a criticism of a "right to bear automated-killer-robot arms" proposal in a comment and the social consequences of spreading the word to people I know (on facebook, etc.) about some issue of choice.

Yes. My use of 'doomsday' to describe that scenario is lax. Please imagine that I found a more precise term and expressed approximately the same point.

Please note that the quote that mentions 'championing a cause' was explicitly about myself. It was not made as a criticism of your behavior. It was made as a direct, quote-denoted reply to your call for readers (made in response to myself) to evangelise to people we know on 'facebook, twitter and other forums'. I was explaining why I do not choose to do as you request even though by my judgement I do, in fact, "get it". Taking a stance and expressing concern about something that isn't a mainstream issue comes with a cost. Someone who is mainstream in all ways but one tends to be more influential when it comes to that one issue than someone who has eccentric beliefs in all areas.
0Epiphany11y
Oh okay. I see. I thought you were making some different comparison. Okay. (: Okay, noted. I'm glad that you get it enough to see the potential benefit of spreading the word even though you choose not to because you anticipate unwanted social consequences instead. Hahaha! Yeah, I can see that. Though this really depends on who your friends are or which friend group one chose to spread the idea to. At this stage, it is probably best to spread the word only to those who Seth Godin calls "early adopters" (defined as: people who want to know everything about their subject of interest aka nerds). This would be why I told LessWrong as opposed to some other group.

Let's talk actual hardware.

Here's a practical, autonomous kill system that is possibly feasible with current technology: a network of drone helicopters armed with rifles and sensors that can detect the muzzle flashes, sound, and in some cases projectiles of an AK-47 being fired.

Something like this aircraft: http://en.wikipedia.org/wiki/Autonomous_Rotorcraft_Sniper_System

Combined with sensors based on this patent: http://www.google.com/patents/US5686889

http://en.wikipedia.org/wiki/Gunfire_locator

and this one http://ieeexplore.ieee.org/xpl/login.jsp?tp=&... (read more)
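
To make the sensing side of this concrete, here is a minimal Python sketch of gunfire localization by time-difference-of-arrival (TDOA), the basic idea behind the acoustic gunfire locators linked above. The sensor layout, the timings, and the brute-force grid search are all simplified assumptions for illustration, not a description of any real system.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 C

# Hypothetical microphone positions (x, y) in metres, e.g. on hovering drones.
SENSORS = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]

def arrival_times(source, t0=0.0):
    """Time at which each sensor hears a shot fired at `source` at time t0."""
    return [t0 + math.dist(source, s) / SPEED_OF_SOUND for s in SENSORS]

def locate(times, search_range=200):
    """Grid-search for the source whose predicted arrival-time differences
    best match the observed ones (differencing cancels the unknown t0)."""
    observed = [t - times[0] for t in times]
    best, best_err = None, float("inf")
    for x in range(-search_range, search_range + 1):
        for y in range(-search_range, search_range + 1):
            pred = arrival_times((float(x), float(y)))
            diffs = [t - pred[0] for t in pred]
            err = sum((o - d) ** 2 for o, d in zip(observed, diffs))
            if err < best_err:
                best, best_err = (x, y), err
    return best

print(locate(arrival_times((37.0, 81.0), t0=5.0)))  # -> (37, 81)
```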

2Epiphany11y
Thanks for the hardware info. In the short-term... What do you think about the threat they pose to democracy? Do you happen to know how many humans need to be employed for a given quantity of these weapons to be produced?
2GeraldMonroe11y
I wanted to make a concrete proposal. Why does it have to be autonomous? Because in urban combat, the combatants will usually choose a firing position that has cover. They "pop up" from the cover, take a few shots, then position themselves behind cover again. An autonomous system could presumably accurately return fire much faster than human reflexes. (It wouldn't be instant - there's a delay for the servos of the automated gun to aim at the target, and delays related to signals: you have to wait for the sound to reach all the acoustic sensors in the drone swarm, then there's processing delay, then time for the projectiles from the return fire to reach the target.) Also, the autonomous mode would hopefully be chosen only as a last resort, with a human normally in the loop somewhere to authorize each decision to fire.

As for a threat to democracy? Defined how? You mean a system of governance where a large number of people, who are easily manipulated via media, on average know fuck-all about a particular issue, are almost universally not using rational thought, and where votes give everyone a theoretically equal say regardless of knowledge or intelligence? I don't think that democracy is something that should be used as an ideal nor a terminal value on this website. It has too many obvious faults.

As for humans needing to be employed: autonomous return-fire drones are going to be very expensive to build and maintain. That "expense" means that the labor of thousands is needed somewhere in the process. However, in the long run, obviously it's possible to build factories to churn them out faster than replacing soldiers. Numerous examples of this happened during WW2, where even high technology items such as aircraft were easier to replace than the pilots to fly them.
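
To put rough numbers on the delays described above, here is a small Python sketch; the range, processing, servo, and muzzle-velocity figures are assumptions chosen purely for illustration.

```python
# Rough return-fire latency: shot fired -> sound reaches the sensors ->
# processing -> servos slew the gun -> return round flies back to the shooter.
# Every figure below is an illustrative assumption.

SPEED_OF_SOUND = 343.0    # m/s
MUZZLE_VELOCITY = 900.0   # m/s, assumed for the drone's rifle round

def return_fire_delay(range_m, processing_s=0.2, servo_slew_s=0.5):
    sound_travel = range_m / SPEED_OF_SOUND    # shot reaches the acoustic sensors
    bullet_travel = range_m / MUZZLE_VELOCITY  # return round flies back out
    return sound_travel + processing_s + servo_slew_s + bullet_travel

for r in (100, 300, 600):
    print(f"{r:>4} m: ~{return_fire_delay(r):.1f} s from shot to return-fire impact")
```

With these assumptions the total comes out on the order of one to a few seconds - not instant, but plausibly faster than a human could locate the shooter and return accurate fire.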
1Houshalter10y
Democracy is imperfect, but dictatorships are worse. I honestly don't think this is the case. Hobbyists working on their own with limited budgets have made autonomous paintball guns, as well as all sorts of other robots and UAVs. Conceivably robots could be incredibly cheap, much much cheaper than the average soldier.

Calling this "AI risk" seems like a slight abuse of the term. The term "AI risk" as I understand it refers to risks coming from smarter-than-human AI. The risk here isn't that the drones are too smart, it's that they've been given too much power. Even a dumb AI can be dangerous if it's hooked up to nuclear warheads.

2wedrifid11y
I was about to voice my agreement and suggest that if people want to refer to this kind of thing (killer robots, etc.) as "AI risk" in an environment where AI risk refers more typically to strong AGI, then it's worth at least including a qualifier such as "(weak) AI risk" to prevent confusion. However, looking at the original post, it seems the author already talks about "near-term tool AI" as well as explicitly explaining the difference between that and the kind of thing MIRI warns about.
0Epiphany11y
I originally had "AI risk" in there, but removed it. It's true that I think we should seriously consider that stupid AIs can pose a major threat, and that the term "AI risk" shouldn't leave that out, but if people might ignore my message for that reason, it makes more sense to change the wording, so I did.
2Luke_A_Somers11y
The issue seems to me to be AI that has too much power over people without being friendly. Whether it gets this power by being handed a gun or by outsmarting us doesn't seem as relevant.
0Epiphany11y
No. Actually. That is not the risk I'm discussing here. I would not argue that it isn't dangerous to give them the ability to kill. It is. But my point here is that lethal autonomy could give people too much power - that is to say, redistribute power unevenly, undoing all the checks and balances and threatening democracy.
-2Epiphany11y
According to this Wikipedia page, the Computer History Museum appears to think Deep Blue, the chess playing software, belongs in the "Artificial Intelligence and Robotics" gallery. It's not smarter than a human - all it can do is play a game, and beating humans at a game does not qualify as being smarter than a human. The dictionary doesn't define it that way either; apparently all it needs to do is something like perceive and recognize shapes. And what about the term "tool AI"? Why should I agree that AI always means "smarter than human"? I thought we had the term AGI to make that distinction.

Maybe your point here is not that AI always means "smarter than human" but that "AI risk" for some reason necessarily means the AI has to be smarter than humans for it to qualify as an AI risk. I would argue that perhaps we misunderstand risks posed by AI - that software can certainly be quite dangerous because of its intelligence even if it is not as intelligent as humans.

(Trigger warning for atrocities of war.)

Human soldiers can revolt against their orders, but human soldiers can also decide to commit atrocities beyond their orders. Many of the atrocities of war are specifically human behaviors. A drone may bomb you or shoot you — very effectively — but it is not going to decide to torture you out of boredom, rape you in front of your kids, or cut off your ears for trophies. Some of the worst atrocities of recent wars — Vietnam, Bosnia, Iraq — have been things that a killer robot simply isn't going to do outside of anthrop... (read more)

1WingedViper11y
That is indeed a fair point, but I think it is not so important when talking about a tyrant gaining control of his own country. Because the soldiers in Iraq, Bosnia etc. saw the people they tortured (or similar) not as people, but as "the Enemy". That kind of thing is much harder to achieve when they are supposed to be fighting their own countrymen.
0Epiphany11y
I agree that the killer robots on the horizon won't have a will to commit atrocities (though I'm not sure what an AGI killer robot might do), however, I must note that this is a tangent. The meaning of the term "atrocity" in my statement was more to indicate things like genocide and oppression. I was basically saying "humans are capable of revolting in the event that a tyrant wants to gain power whereas robots are not". I think I'll replace the word atrocities for clarity.

Consider this - it's unthinkable that today's American soldiers might suddenly decide this evening to follow a tyrannical leader whose goal is to have total power and murder all who oppose. It is not, however, unthinkable at all that the same tyrant, if empowered by an army of combat drones, could successfully launch such an attack without risking a mutiny.

Yes, this is a problem.

As far as I know, no organization, not even MIRI (I checked), is dedicated to preventing the potential political disasters caused by near-term tool AI

This is the sort of th... (read more)

1Epiphany11y
I absolutely scoured the internet about 6 months ago looking for any mention of checks and balances, democracy, power balances and killing robots, AI soldiers, etc. (I used all the search terms I could think of to do this) and didn't find them. Is this because they're minuscule in size, don't publish much, use special jargon, or for some other reason? Do you know whether any of them are launching significant public education campaigns? (I assume not, based on what I have seen, but I could be wrong.) I would very much like links to all relevant web pages you know of that talk specifically about the power imbalances caused by using machines for warfare. Please provide at least a few of the best ones if it is not too much to ask. Thanks for letting me know about this. I have asked others and gotten no leads!

When killer robots are outlawed, only rogue nations will have massive drone armies.

An ideal outcome here would be if counter-drones have an advantage over drones, but it's hard to see how this could obtain when counter-counter-drones should be in a symmetrical position over counter-drones. A second-best outcome would be no asymmetrical advantage of guerilla drone warfare, where the wealthiest nation clearly wins via numerical drone superiority combined with excellent enemy drone detection.

...you know, at some point the U.S. military is going to pay someone $10 million to conclude what I just wrote and they're going to get it half-wrong. Sigh.

0Yosarian211y
That's not necessarily a huge issue. If all the major powers agree not to have automated killing drones, and a few minor rogue states (say, Iran) ignore that and develop their own killer drones, then (at least in the near term) that probably won't give them a big enough advantage over semi-autonomous drones controlled by major nations to be a big deal; an Iranian automated drone army probably still isn't a match for the American military, which has too many other technological advantages.

On the other hand, if one or more major powers start building large numbers of fully autonomous drones, then everyone is going to. That definitely sounds like a scenario we should try to avoid, especially since that kind of arms race is something that I could see eventually leading to unfriendly AI.
-2Eugine_Nier11y
One issue is how easy it would be to secretly build an army of autonomous drones.
0Yosarian211y
Developing the technology in secret is probably quite possible. Large-scale deployment, though, building a large army of them, would probably be quite hard to hide, especially from modern satellite photography and information technology.
0Eugine_Nier11y
Why? Just build a large number of non-autonomous drones and then upgrade the software at the last minute.
0Yosarian211y
I suppose. Would that really give you enough of an advantage to be worth the diplomatic cost, though? The difference between a semi-autonomous Predator drone and a fully-autonomous Predator drone in military terms doesn't seem all that significant. Now, you could make a type of military unit that would really take advantage of being fully autonomous and have a real advantage, like a fully autonomous air-to-air fighter for example (not really practical to do with semi autonomous drones because of delayed reaction time), but it would seem like that would be much harder to hide.
0atucker11y
I think that if you used an EMP as a stationary counter-drone you would have an advantage over drones in that most drones need some sort of power/control in order to keep on flying, and so counter-drones would be less portable, but more durable than drones.
4Epiphany11y
Is there not a way to shield combat drones from EMP weapons? I wouldn't be surprised if they are already doing that.
5atucker11y
Almost certainly, but the point that stationary counter-drones wouldn't necessarily be in a symmetric situation to counter-counter-drones holds. Just swap in a different attack/defense method.
0Epiphany11y
I see. The existence of the specific example caused me to interpret your post as being about a specific method, not a general strategy. To the strategy, I say: I've heard that defense is more difficult than offense. If the strategy you have defined is basically that the original drones are offensive and the counter-drones are defensive (to prevent them from attacking, presumably), then, if what I heard was correct, this would fail. If not at first, then likely over time as technology advances and new offensive strategies are used with the drones. I'm not sure how to check whether what I heard was true, but if defense worked that well, we wouldn't have war.
0atucker11y
This distinction is just flying/not-flying. Offense has an advantage over defense in that defense needs to defend against more possible offensive strategies than offense needs to be capable of executing, and offense only needs one undefended plan in order to succeed. I suspect that not-flying is a pretty big advantage, even relative to offense/defense. At the very least, moving underground (and doing hydroponics or something for food) makes drones just as offensively helpful as missiles. Not flying additionally can have more energy and matter supplying whatever it is that it's doing than flying, which allows for more exotic sensing and destructive capabilities.
0ikrase11y
Also, what's offense and what's defense? Anti-aircraft artillery (effective against drones? I think current air drones are optimized for use against low-tech enemies w/ few defenses) is a "defense" against 'attack from the air', but 'heat-seeking AA missiles', 'flak guns', 'radar-guided AA missiles' and 'machine gun turrets' are all "offenses" against combat aircraft, where the defenses are evasive maneuvers, altitude, armor, and chaff/flare decoys. In WWI, defenses (machine guns and fortifications) were near-invincible, and killed attackers without time for them to retreat. I think that current drones are pretty soft and might even be subject to hacking (I seem to remember something about unencrypted video?) but that would change as soon as somebody starts making real countermeasures.
0shminux11y
Gain enough status to make that someone likely to be you.
8Eliezer Yudkowsky11y
That is not how government contracts work.
-2Epiphany11y
This took effort to parse. I think what you're saying is:

* If we're going to have killer drones, there needs to be something to check their power. Example: counter-drones.
* If we're going to have counter-drones, we need to check the power of the counter-drones. Example: counter-counter-drones.
* If counter-counter-drones can dominate the original drones, then counter-drones probably aren't strong enough to check and balance the original drones. (Either because the counter-counter-drones will become the new original drones or because the counter-drones would be intentionally less powerful than the original drones so that the counter-counter-drones could counter them, making the counter-drones useless.)

(I want everyone to understand, so I'm writing it all out - let me know if I'm right.)

And you propose "no asymmetrical advantage of guerilla drone warfare... etc" which isn't clear to me because I can interpret multiple meanings:

* Trash the drones vs. counter-drones vs. counter-counter-drones idea?
* Make sure drones don't have an advantage at guerilla drone warfare?
* Decide who wins wars based on who has more drones and drone defenses instead of actually physically battling?

What did your statement mean?

I think if we're going to check the power of killing drones, we need to start with defining the sides using a completely different distinction unlike "drone / counter-drone".

Reading this gave me a different idea for checking and balancing killer robots and advanced weapons. I can see some potential cons to it, but I think it might be better than the alternatives. I'm curious about what pros and cons you would think of.
3wedrifid11y
This isn't quite what Eliezer said. In particular Eliezer wasn't considering proposals or 'what we need' but instead making observations about scenarios and the implications they could have. The key point is the opening sentence: This amounts to dismissing Suarez's proposal to make autonomous killer robots illegal as absurd. Unilaterally disarming oneself without first preventing potential threats from having those same weapons is crazy for all the reasons it usually is. Of course there is the possibility of using the threat of nuclear strike against anyone who creates killer robots, but that is best considered a separate proposal and discussed on its own terms.

This isn't saying we need drones (or counter or counter-counter drones). It is rather saying:

* We don't (yet) know the details of how the relevant technology will develop or the relative strengths and weaknesses thereof.
* It would be great if we discovered that for some reason it is easier to create drones that kill drones than drones that hurt people. That would mean that defence has an advantage when it comes to drone wars. That will result in less attacking (with drones) and so the drone risk would be much, much lower. (And a few other desirable implications...)
* The above doesn't seem likely. Bugger.

This wouldn't be any form of formal agreement. Instead, people who are certain to lose tend to be less likely to get into fights. It amounts to the same thing.
2Epiphany11y
Yeah, I got that, and I think that his statement is easy to understand so I'm not sure why you're explaining that to me. If you hadn't noticed this, I wrote out various cons for the legislation idea which were either identical in meaning to his statement or along the same lines as "making them illegal is absurd". He got several points for that and his comment put at the top of the page. I wrote them first and was evidently ignored (by karma clickers if not by you).

I didn't say that he was saying that either. I agree that a formal agreement would be meaningless here, but that people will make a cost-benefit analysis when choosing whether to fight is so obvious I didn't think he was talking about that - it doesn't seem like a thing that needs saying. Maybe what he meant was not "people will decide whether to fight based on whether it's likely to succeed" or "people will make formal agreements" but something more like "using killer robots would increase the amount or quality of data we have in a significant way and this will encourage that kind of decision-making".

What if that's not the case, though? What if having a proliferation of deadly technologies makes it damned near impossible to figure out who is going to win? That could result in a lot more wars... Now "the great filter" comes to mind again. :|

Do you know of anyone who has written about:

A. Whether it is likely for technological advancement to make it significantly more difficult to figure out who will win wars.

B. Whether it's more likely for people to initiate wars when there's a lot of uncertainty.

We might be lucky - maybe people are far less likely to initiate wars if it isn't clear who will win... I'd like to read about this topic if there's information on it.
0wedrifid11y
* You wrote a comment explaining what Eliezer meant.
* You were wrong about what Eliezer meant.
* You explicitly asked to be told whether you were right.
* I told you you were not right.
* I made my own comment explaining what Eliezer's words mean.

Maybe you already understood the first sentence of Eliezer's comment and only misunderstood the later sentences. That's great! By all means ignore the parts of my explanation that are redundant. Note that when you make comments like this, including the request for feedback, then getting a reply like mine is close to the best case scenario. Alternatives would be finding you difficult to speak to and just ignoring you and dismissing what you have to say in the entire thread because this particular comment is a straw man.

The problem that you have with my reply seems to be caused by part of it being redundant for the purpose of facilitating your understanding. But in cases where there are obvious and verifiable failures of communication a little redundancy is a good thing. I cannot realistically be expected to perfectly model which parts of Eliezer's comment you interpreted correctly and which parts you did not. After all that task is (strictly) more difficult than the task of interpreting Eliezer's comment correctly. The best I can do is explain Eliezer's comment in my own words and you can take or leave each part of it.

It is frustrating not being rewarded for one's contributions when others are.

Let me rephrase. The following quote is not something Eliezer said:

Eliezer didn't say it. He assumed it (and/or various loosely related considerations) when he made his claim. I needed to say it because rather than assuming a meaning like this 'obvious' one, you assumed that it was a proposal:

Yes. That would be bad. Eliezer is making the observation that if technology evolves in such a way (and it seems likely) then it would be le
2Epiphany11y
Hmm. I wonder if this situation is comparable to any of the situations we know about.

Clarifying my questions:

* When humans feel confused about whether they're likely to win a deadly conflict that they would hypothetically initiate, are they more likely to react to that confusion by acknowledging it and avoiding conflict, or by being overconfident / denying the risk / going irrational and taking the gamble?
* If humans are normally more likely to acknowledge the confusion, what circumstances may make them take a gamble on initiating war?
* When humans feel confused about whether a competitor has enough power to destroy them, do they react by staying peaceful? The "obvious" answer to this is yes, but it's not good to feel certain about things immediately before even thinking about them. For an example: if animals are backed into a corner by a human, they fight, even despite the obvious size difference. There might be certain situations where a power imbalance triggers the "backed into a corner" instinct. For some ideas about what those situations might be, I'd wonder about situations in which people over-react to confusion by "erring on the side of caution" (deciding that the opponent is a threat) and then initiating war to take advantage of the element of surprise as part of an effort at self-preservation. I would guess that whether people initiate war in this scenario probably has a lot to do with how big the element of surprise advantage is and how quickly they can kill their opponent.
* Does the imbalance between defense and offense grow over time? If so, would people be more or less likely to initiate conflict if defense essentially didn't exist?

Now I'm thinking about whether we have data that answers these or similar questions.
-2Eugine_Nier11y
I think a more important question than "how likely am I to win this conflict?" is "will my odds increase or decrease by waiting?"
0Epiphany11y
Sorry for not seeing this intention. Thanks for your efforts.

Do you mean to say that I intentionally attacked someone with an (either intentional or unintentional) misinterpretation of their words? Since my intention with the comment referenced just prior to your statement here was an attempt to clarify and in no way an attack, I am not sure what comment you're referring to.

Are a few people with killer drones more dangerous than a few people with nukes?

4WingedViper11y
Yes they are, because nukes can only be aimed once and then destroy the targets (so they are just a direct threat) while autonomous robots can be used to control all kinds of stuff (checkpoints, roads, certain people). Also they allow much more accurate killing while nukes have a huge area of effect. Also I think (that is speculation, admittedly) that you would need fewer people to control a drone army than nukes of comparable destructive power.
0ikrase11y
I disagree strongly. (It depends on the size of the drone army, and what sort of people they are.)

* A drone army can probably be approximated as a slavishly loyal human army.
* Terrorists would probably go for the nuke if they thought it achievable.
* Rogue states are probably more dangerous with a (large, robust) drone army because it can reduce the ability of a human military to revolt, and possibly do other things.
1WingedViper11y
What do you disagree strongly with? My speculation that you would need fewer people to control them? I'm not sure about that, so if you can bring in a good argument you can change my view on that.

Terrorists are not our problem (in general and in this specific state). Terrorists with nukes cannot feasibly control a country with them. I am talking about people that have easy access to drones and want to control a country with them. Traditional totalitarian techniques plus drones is what I am really worried about, not terrorists. So I admit that with "a few people with drones vs. nukes" I thought about a (close to) worst case. Obviously some low tech terrorists in Afghanistan are not a real substantial problem when they control drones, but high military officials with power fantasies are. Of course rogue states with drones are even more dangerous...
0ikrase11y
I think a rogue state with drones is about as dangerous as a rogue state with a well-equipped army. (note: all of this pretty much assumes something like the next ten to fifty years of physical tech, and that drone AIs are very focused. If AI supertech or extremely tiny deadly drones come into it, it gets much worse.) I think that drone armies compared to human armies are better for short-term slavish loyalty (including cases where the chain of command is broken). However, unless they are controlled by a stable, central entity (such as the case where a tyrant uses a drone army to suppress rebellion) with all the infrastructure then maintenance and a wide variety of other issues start to become a big problem. You might get some Napoleons. I also think that drones are likely to be poor substitutes for infantry out of actual combat.

it would allow a small number of people to concentrate a very large amount of power

Possibly a smaller number than with soldiers, but not that small - you still need to deal with logistics, maintenance, programming...

it's unthinkable today that American soldiers might suddenly decide to follow a tyrannical leader tomorrow whose goal is to have total power and murder all opponents. It is not, however, unthinkable at all that the same tyrant, if empowered by an army of combat drones, could successfully launch such an attack without risk of mutiny.

It m... (read more)

-3Epiphany11y
Ok let's get some numbers. I highly doubt that either one of us would be able to accurately estimate how many employees it would require to make a robot army large enough to take over a population, but looking at some numbers will at least give us some perspective. I'll use the USA as an example.

The USA has 120,022,084 people fit for military service according to Wikipedia. (The current military is much smaller, but if there were a takeover in progress, that's the maximum number of hypothetical American soldiers we could have defending the country.) We'll say that making a robot army takes as many programmers as Microsoft and as many engineers and factory workers as Boeing:

Microsoft employees: 97,811
Boeing employees: 171,700

That's 0.22% of the number of soldiers. I'm not sure how many maintenance people and logistics people it would require, but even if we double that 0.22%, we still have only 0.44%. Is it possible that 1 in 200 people or so are crazy enough to build and maintain a robot army for a tyrant? Number of sociopaths: 1 in 20. And you wouldn't even have to be a sociopath to follow a new Hitler.

I like that you brought up the point that it would take a significant number of employees to make a robot army happen, but I'm not convinced that this makes us safe. This is especially because they could do something like build military robots that are very close to lethal autonomy but not quite, tell people they're making something else, make software to run the basic functions like walking and seeing, and then have a very small number of people make modifications to the hardware and/or software to turn them into autonomous killers. Of course, once the killer robots are made, then they can just use them to coerce the maintenance and logistics people. How many employees would have to be aware of their true ambitions? That might be the key question.
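
For what it's worth, here is the same back-of-envelope arithmetic as a small Python snippet, using the figures quoted in the comment above:

```python
fit_for_service = 120_022_084  # US population fit for military service (as cited above)
builders = 97_811 + 171_700    # Microsoft employees + Boeing employees

ratio = builders / fit_for_service
print(f"{builders:,} builders = {ratio:.2%} of potential defenders")  # ~0.22%
print(f"doubled for maintenance/logistics: {2 * ratio:.2%}")
# ~0.45%; the 0.44% above comes from doubling the already-rounded 0.22% figure
```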
2RolfAndreassen11y
Excuse me? You are taking the number of military-age males and using it as the number of soldiers! The actual US armed forces are a few million. 5% would be a much better estimate. This aside, you are ignoring that "lethal autonomy" is nowhere near the same thing as "operational autonomy". A Predator drone requires more people to run it - fuelling, arming, polishing the paint - than a fighter aircraft does. How? "Do as I say, or else I'll order you to fire up the drones on your base and have them shoot you!" And while you might credibly threaten to instead order the people on the next base over to fire up their drones, well, now you've started a civil war in your own armed forces. Why will that work better with drones than with rifles? Again, you are confusing lethal with operational autonomy. A lethally-autonomous robot is just a weapon whose operator is well out of range at the moment of killing. It still has to be pointed in the general direction of the enemy, loaded, fuelled, and launched; and you still have to convince the people doing the work that it needs to be done.
9gwern11y
It does? I would've guessed the exact opposite and that the difference would be by a large margin: drones are smaller, eliminate all the equipment necessary to support a human, don't have to be man-rated, and are expected to have drastically less performance in terms of going supersonic or executing high-g maneuvers.
0Randaly11y
Yes. An F-16 requires 100 support personnel; a Predator 168; a Reaper, 180. Source. It seems like some but not all of the difference is that manned planes have only a single pilot, whereas UAV's not only have multiple pilots, but also perform much more analysis on recorded data and split the job of piloting up into multiple subtasks for different people, since they are not limited by the need to have only 1 or 2 people controlling the plane. If I had to guess, some of the remaining difference is probably due to the need to maintain the equipment connecting the pilots to the UAV, in addition to the UAV itself; the most high-profile UAV failure thus far was due to a failure in the connection between the pilots and the UAV.
0gwern11y
I'm not sure that's comparing apples to apples. From the citation for the Predator figure: I'm not sure how long the average mission for an F-16 is, but if it's less than ~12 hours, then the Predator would seem to have a manpower advantage; and the CRS paper cited also specifically says:
0Randaly11y
The F-16 seems to have a maximum endurance of 3-4 hours, so I'm pretty sure its average mission is less than 12 hours. My understanding was that Rolf's argument depended on the ratio personnel:plane, not on the ratio personnel:flight hour; the latter is more relevant for reconnaissance, ground attack against hidden targets, or potentially for strikes at range, whereas the former is more relevant for air superiority or short range strikes.
0gwern11y
I don't think it saves Rolf's point: if you are getting >6x more flight-hours out of a drone for an increase in manpower of <2x - even if you keep the manpower constant and shrink the size of the fleet to compensate for that <2x manpower penalty, you've still got a new fleet which is somewhere around 6x more lethal. Or you could take the tradeoff even further and have an equally lethal fleet with a small fraction of the total manpower, because each drone goes so much further than its equivalent. So a drone fleet of similar lethality does have more operational autonomy! That's why per-flight-hour costs matter - because ultimately, the entire point of having these airplanes is to fly them.
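
A quick Python version of the per-flight-hour comparison, using the support-personnel figures quoted earlier in this thread and rough endurance assumptions; the exact ratio depends heavily on the mission lengths one assumes.

```python
# Support personnel per flight-hour, using figures quoted in this thread:
# F-16 ~100 personnel with ~3-4 h missions; Predator ~168 personnel with
# missions assumed here to average ~12 h. Both endurance numbers are rough.

f16_personnel, f16_hours = 100, 3.5
predator_personnel, predator_hours = 168, 12.0

f16_rate = f16_personnel / f16_hours                  # ~29 personnel per flight-hour
predator_rate = predator_personnel / predator_hours  # ~14 personnel per flight-hour

print(f"F-16:     {f16_rate:.0f} personnel per flight-hour")
print(f"Predator: {predator_rate:.0f} personnel per flight-hour")
print(f"flight-hours per person advantage for the drone: ~{f16_rate / predator_rate:.1f}x")
```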
-4Epiphany11y
Would you happen to be able to provide these figures:

* The ratio of human resources-to-firepower on the current generation of weapons.
* The ratio of human resources-to-firepower on the weapons used during eras where oppression was common.

I'd like to compare them. Hmm, "firepower" is vague. I think the relevant number here would be something along the lines of how many people can be killed or subdued in a conflict situation.
2gwern11y
I have no idea; as I said, my expectations are just guesses based on broad principles (slow planes are cheaper than ultra-fast planes; clunky planes are cheaper than ultra-maneuverable ones; machines whose failure does not immediately kill humans are cheaper to make than machines whose failure does entail human death; the cheapest, lightest, and easiest to maintain machine parts are the ones that aren't there). You should ask Rolf, since apparently he's knowledgeable in the topic.
0Epiphany11y
Thanks. I will ask Rolf.
-1Epiphany11y
Would you happen to be able to provide these figures:

* The ratio of human resources-to-firepower on the current generation of weapons.
* The ratio of human resources-to-firepower on the weapons used during eras where oppression was common.

I'd like to compare them. Hmm, "firepower" is vague. I think the relevant number here would be something along the lines of how many people can be killed or subdued in a conflict situation.
-2Epiphany11y
Yes! If the question here is "How many people are currently in the military?" my figure is wrong. However, that's not the question. The question is "In the event that a robot army tries to take over the American population, how many American soldiers might there be to defend America?" You're estimating in a different context than the one in my comment.

Actually, if you're defining "operational autonomy" as "how many people it takes to run weapons", I did address that when I said "I'm not sure how many maintenance people and logistics people it would require, but even if we double that 0.22%, we still have only 0.44%." If you have better estimates, would you share them?

Method A. They could wait until the country is in turmoil and prey on people's irrationality like Hitler did.

Method B. They could get those people to operate the drones under the guise of fighting for a good cause. Then they could threaten to use the army to kill anyone who opposes them. This doesn't have to be sudden - it could happen quite gradually, as a series of small and oppressive steps and rules wrapped in doublespeak that eventually lead up to complete tyranny. If people don't realize that most other people disagree with the tyrant, they will feel threatened and probably comply in order to survive.

Method C. Check out the Milgram experiment. Those people didn't even need to be coerced to apply lethal force. It's a lot easier than you think.

Method D. If they can get just a small group to operate a small number of drones, they can coerce a larger group of people to operate more drones. With the larger group of people operating drones, they can coerce even more people, and so on. This all depends on the ratio of people it takes to operate the weapons vs. the number of people the weapons can subdue.

Your perception appears to be that predator drones require more people to run them than a fighter aircraft. My perception is that it doesn't matter how many people it takes to operate a predator dro
0RolfAndreassen11y
What prevents these methods from being used with rifles? What is special about robots in this context?

No, we already have those. The decision to kill has nothing to do with it. The decisions of where to put the robot, and its ammunition, and the fuel, and everything else it needs, so that it's in a position to make the decision to kill, are what we cannot yet do programmatically. You're confusing tactics and strategy. You cannot run an army without strategic decisionmakers. Robots are not in a position to do that for, I would guess, at least twenty years.

Ok, so this being so, how come we don't already have oppressive societies being run with plain old rifles?
-4Randaly11y
This is implausible. There is no conceivable motive for people to support the hypothetical robot army; there is not a chance in hell that 1.5 million people would voluntarily build a robot army for a tyrant, who doesn't have the many trillions of dollars needed to pay them (since nobody has that much money) [1], who is unable to keep secret the millions of people building illegal weaponry for him, and who has almost no chance of succeeding even with the robot army, since the US military outspends everybody.

[1]: 1/200 of the US population times the average Microsoft salary = 150 billion USD per year. This would require many, many years of work - given how long the military has worked on Predators, probably decades. So it would require trillions of dollars.

Also, I don't think you understand sociopathy. The 1/20 figure you cited should be 1/25, which refers to the DSM's "antisocial personality disorder"; sociopathy is a deficit in moral reasoning, which is very different from being a person who's just waiting to become a minion to some dictator.
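
Unpacking footnote [1] as a quick Python calculation; the population and salary figures below are rough assumptions of the kind the footnote implies, not exact data.

```python
us_population = 315_000_000      # roughly the 2013 US population (assumption)
workforce = us_population / 200  # the "1 in 200" figure from the parent comment
avg_salary = 95_000              # assumed average Microsoft-style salary, USD/year

annual_payroll = workforce * avg_salary
print(f"{workforce:,.0f} people x ${avg_salary:,}/yr = ${annual_payroll / 1e9:.0f} billion/yr")
print(f"over 20 years of work: ${annual_payroll * 20 / 1e12:.1f} trillion in wages alone")
```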
0wedrifid11y
For a start, I don't believe you. People have done comparable things for tyrants in the past (complete albeit probably inefficient dedication of the resources of the given tribe to the objectives of the tyrant - horseback archers and small moustaches spring to mind). But that isn't the primary problem here.

The primary problem would be with a country creating the army in the usual way that a country creates an army but that once owned this army would be much easier for an individual (or a few) to control. It makes it easier for such people to become tyrants and once they are to retain their power. This kind of thing (a general seizing control by use of his control of the military) is not unusual for humans. Killer robots make it somewhat easier. Controlling many humans is complicated and unreliable.
0Epiphany11y
There are so many ways that a tyrant could end up with a robot army. Let's not pretend that that's the only way. Here are a few:

1. A country is in turmoil and a leader comes along who makes people feel hope. The people are open to "lesser evil" propositions and risk-taking because they are desperate. They make irrational decisions and empower the wrong person. Hitler is a real life actual example of this happening.

2. A leader who is thought of as "good" builds a killer robot army. Then, realizing that they have total power over their people corrupts them and they behave like a tyrant, effectively turning into an oppressive dictator.

3. Hypothetical scenario: The setting is a country with presidential elections (I choose America for this one). Hypothetically, in this scenario we'll say the technology to do this was completely ready to be exploited. So the government begins to build a killer robot army. Hypothetically, a good president happens to be in office, so people think it's okay. We'll say that president gets a second term. Eight years pass, and a significant killer robot army is created. It's powerful enough to kill every American. Now, it's time to change the president. Maybe the American people choose somebody with their best interests in mind. Maybe they choose a wolf in sheep's clothing, or a moron who doesn't understand the dangers. It's not like we haven't elected morons before and it isn't as if entire countries full of people have never empowered anyone dangerous. I think it's reasonable to say that there's at least a 5% chance that each election will yield either a fatally moronic person, an otherwise good person who is susceptible to being seriously corrupted if given too much power, someone with a tyrant's values/personality, or a sociopath. If you're thinking to yourself "how many times in American history have we seen a president go corrupt by power", consider that there have been checks and balances in place to prevent them from having e
-4Randaly11y
It seems to me like you're outlining four different scenarios:

1) The United States, or another major power, converts from manned to unmanned weapons of war. A military coup is impossible today because soldiers won't be willing to launch one; were soldiers to be replaced by robots, they could be ordered to.

2) Another state develops unmanned weapons systems which enable it to defeat the United States.

3) A private individual develops unmanned weapons systems which enable them to defeat the United States.

4) Another state which is already a dictatorship develops unmanned weapons systems which allow the dictator to remain in power.

My interpretation of your original comment was that you were arguing for #3; that is the only context in which hiring sociopaths would be relevant, as normal weapons development clearly doesn't require hiring a swarm of sociopathic engineers. The claim that dictatorships exclusively or primarily rely on sociopaths is factually wrong: e.g., according to data from Order Police Battalion 101, 97% of an arbitrary sample of Germans under Hitler were willing to take guns and mow down civilians. Certainly, close to 100% of an arbitrary sample of people would be willing to work on developing robots for either the US or any other state - we can easily see this today.

If you were arguing for #2, then my response would be that the presence of unmanned weapons systems wouldn't make a difference one way or another - if we're positing another state able to outdevelop, then defeat, the US, it would presumably be able to do so anyways. The only difference would be if it had an enormous GDP but low population; but such a state would be unlikely to be an aggressive military dictatorship, and, anyways, clearly doesn't exist.

For #4, current dictatorships are too far behind in terms of technological development for unmanned weapons systems to have a significant impact - what we see today is that the most complex weapons systems are produced in a few, mostly s
0Epiphany11y
Technologies become less expensive over time, and as we progress, our wealth grows. If they don't have the money to produce it at the current cost, that doesn't mean they'll never be able to afford to do it. You didn't specify a time period - should I assume that's yearly? Also, do they have to pay $6,000 in maintenance costs while the units are in storage? Okay, so an MQ-1 is really, really expensive. Thank you. What do you mean by "serious numbers"? What do you mean by "above that, never"? Sorry I didn't get to your other points today. I don't have enough time. P.S. How did you get these estimates for when unmanned weapons will come out?

Possible Solution: Legislation to ban lethal autonomy. (Suggested by Daniel Suarez; please do not confuse his opinion of whether it is likely to work with mine. I am simply listing it here to encourage discussion and debate.)

3Epiphany11y
Con: Since I am concerned about how enforceable this rule is, and don't see a reason to trust that governments will resist temptation to break this rule, I am concerned that it may create a false sense of security.
0Epiphany11y
Pro: Passing a law would probably generate news stories and may make the public more aware of the problem, increasing the chances that someone solves the problem.
0Epiphany11y
Pro: Passing a law is likely to spread the word to the people in the military, some of whom may then have key ideas for preventing issues.
0Epiphany11y
Pro: Passing a law would make it more likely that the legislative branch of the government is aware of the peril it's in.
0Epiphany11y
Pro: This might delay disaster long enough for better solutions to come along.
0Epiphany11y
Con: If the executive branch of the government has the ability to make these weapons, the legislative branch will no longer pose a threat to them. Legally, they'll be forbidden, but practically speaking, they will not be prevented. Laws don't prevent people from behaving badly, nor do they guarantee that bad behavior will be punished; they just specify consequences and define the bad behavior. The consequences are contingent upon whether the person is caught and whether the authorities have enough power to dole out a punishment. In the event that the lawbreaker gains so much power that the authorities can't stop them, the threat of punishment is N/A. A law can't solve the checks and balances issue.
0Epiphany11y
Con: If militaries come to believe that having killer robots is critical to national defense (either because their enemies are posing a major threat, or because they're more effective than other strategies or required as a part of an effective strategy) then they will likely oppose this law or refuse to follow it. Even if they manage to resist the temptation to build them as a contingency plan against risks, if they're ever put into a position where there's an immediate threat (for instance: choosing between death and lethal autonomy), they are likely to choose lethal autonomy. It may be impossible to keep them from using these as a weapon in that case, making the ban on lethal autonomy just another ineffectual rule. If the consequences of breaking a rule are not as grave as the consequences of following it, then the rule isn't likely to be followed.
-1Epiphany11y
Con: They say about banning guns that it doesn't keep the bad people from having weapons; it just keeps good people unarmed. I'm concerned that the same may be true of laws that intentionally reduce the effectiveness of one's warfare technology.

The barriers to entry in becoming a supervillain are getting lower and lower - soon just anybody will be able to 3D print an army of flying killer robots with lethal autonomy.

I think that the democracy worries are probably overblown. I'd be more worried about skyrocketing collateral damage.

It seems like a well-publicized, notorious event in which a lethally autonomous robot killed a lot of innocent people would significantly broaden the appeal of friendliness research, and could even lead to disapproval of AI technology, similar to how Chernobyl had a significant impact on the current widespread disapproval of nuclear power.

For people primarily interested in existential UFAI risk, the likelihood of such an event may be a significant factor. Other significant factors are:

  • National instability leading to a difficult environment in which to do research

  • National instability leading to reckless AGI research by a group in an attempt to gain an advantage over other groups.

Pentashagon:
Like this? Interestingly, it's alleged that the autonomous software may not have been the (direct) cause of the failure but that undetected mechanical failure led to the gun continuing to fire without active aiming.
hylleddin:
Yes, but on a much larger scale. Or possibly just a more dramatic scale. Three Mile Island had a significant effect on public opinion even without any obvious death toll.
Epiphany:
I sincerely hope that the people have time to think this out before such an event occurs. Otherwise, their reaction may trigger the "cons" posted in the legislation suggestion.

Possible Solution: Using 3-D printers to create self-defense technologies that check and balance power.

Con: Everybody will probably die. This solution magnifies instability in the system. One person who is insane, evil, or careless could potentially create an extinction event. At the very least, they could cause mass destruction within a country, of a kind that would take huge efforts to crush.

Epiphany:
I agree that it's possible that in this scenario everyone will die, but I am not sure why you seem to think it is the most likely outcome. Considering that governments will probably have large numbers of these or comparable weapons before the people do, or that they will create comparable weapons if they observe their populace building weapons using 3-D printers, I think it's more likely that the power the people (including criminal organizations) wield via killer robots will be kept in check than that any of these groups will be able to rove around and kill everyone. Perhaps you envision a more complex chain of events unfolding? Do you expect a clusterfuck? Or is there some other course that you think things would take? What and why?
wedrifid:
We are considering a scenario where technology has been developed and disseminated sufficiently to allow Joe Citizen to produce autonomous killer robots with his home-based, general-purpose automated manufacturing device. People more intelligent, educated, resourceful, and motivated than Joe Citizen are going to be producing things even more dangerous. And produce things that produce things that... I just assume that kind of environment is not stable.
Epiphany:
Ok, so it's not the killer robots you envision killing off humanity, it's the other technologies that would likely be around at that time, and/or the whole mixture of insanity put together?
wedrifid:
In particular, the technologies being used to create killer robots, which would necessarily be around at the time: sufficiently general, small-scale but highly complex manufacturing capability combined with advanced mobile automation. The combination is already notorious.
Epiphany:
You know, we've invented quite a few weapons over time and have survived quite a few "replicators" (the Black Death will be my #1 example)... we're not dead yet, and I'm wondering if there are some principles keeping us alive which you and I have overlooked. For a shot at what those could be:

  1. Regarding self-replicators:

  • Self-replicators make near-perfect copies of themselves and so they are optimized to work in most, but not all, situations. This means that there's a very good chance that at least some of a given species will survive whatever the self-replicators are doing.

  • Predators strike prey as terrifying, but their weakness is that they depend on the prey. Predators of all kinds die when they run out of prey. Some prey probably always hides, so unless the predator is really intelligent, it is likely that some prey will survive and will get a break from the predators, which they can use to develop strategies.

  2. Regarding weapons:

  • For this discussion, we've been talking almost exclusively about offensive weapons. However, governments create defenses as well - probably, they often do this with the intent of countering their own offensive weapons. I don't know much about what sorts of defensive weapons there could be in the future, do you? If not, this lack of info about defensive weapons might be causing us to exaggerate the risk of offensive weapons.

  • Governments must value defense, or else they would not invest in it and would instead take those resources and put them into offense. Looking at it this way, I realize that offense is slowed down by defense, and/or there may be a certain ratio of defensive power to offensive power that is constantly maintained, due to the fact that it's an intelligent agent that's creating these and they're motivated to have both offense and defense. If defense keeps pace with offense for this or any other reason (maybe reasons having to do with the insights that technological advancement provides) then there may
ikrase:
You people have got to get over your 3-D printer obsessions. The effect is minimal. A person capable of building actually dangerous drones would just use lathes and mills.
Epiphany:
Pro: Checking and balancing power is a solution we've used in the past. We know that it can work.
Epiphany:
Con: If power were checked and balanced perfectly, right from the beginning, then stasis would be maintained. However, this may not be what's likely. We may see a period full of power struggles where large numbers of people are unprotected and factions like organized crime groups, oppressive governments or citizens with tyrannical ambitions rise up and behave as feudal lords.
Kawoomba:
Is this like a one-woman topic, complete with discussion? A finished product? I think 3-D printers that counterbalance death from above are ... a ways off.

Is this like a one-woman topic, complete with discussion? A finished product?

Or perhaps it is merely a different way of formatting a discussion post, with the evident intention of making it easier to organise replies. As an experimental posting style this solution has, shall we say, pros and cons.

Epiphany:
No. It just looks that way because I just started it. Please contribute your thoughts.

Don't there exist weapons that already exhibit the property of "lethal autonomy" - namely, land mines?

Epiphany:
That's not even comparable. Consider this:

  • Land mines don't distinguish between your allies and your enemies.

  • Land mines don't move, and people can avoid them.

Unless your enemy is extremely small and/or really terrible at strategy, you can't win a war with land mines. On the other hand, these killer robots can identify targets, could hunt people down by tracking various bits of data (transactions, cell phone signals, etc.), could follow people around using surveillance systems, and can distinguish between enemies and allies. With killer robots, you could conceivably win a war.
wedrifid:
Basically, no. Being a trigger that blows up when stepped on isn't something that can realistically be called autonomy.
CronoDAS:
::points to exhibit of plucked chicken wearing "I'm a human!" sign:: Well, yeah, it's a far cry from killer robots, but once a mine is planted, who dies and when is pretty much entirely out of the hands of the person who planted it. And there are indeed political movements to ban the use of land mines, specifically because of this lack of control; land mines have a tendency to go on killing people long after the original conflict is over. So land mines and autonomous killer robots do share at least a few problematic aspects; could a clever lawyer make a case that a ban on "lethal autonomy" should encompass land mines as well? A less silly argument could also be directed at already-banned biological weapons; pathogens reproduce and kill people all the time without any human intervention at all. Should we say that anthrax bacteria lack the kind of autonomy that we imagine war-fighting robots would have?
Epiphany:
Now I'm not sure whether you were (originally) trying to start a discussion about how the term "lethal autonomy" should be used, or if you intended to imply something to the effect of "lethal autonomy isn't a new threat, therefore we shouldn't be concerned about it". Even if I was wrong in my interpretation of your message, I'm still glad I responded the way I did - this is one of those topics where it's best if nobody finds excuses to go into denial, default to optimism bias, or otherwise fail to see the risk. Do you view lethally autonomous robots as a potential threat to freedom and democracy?
CronoDAS:
I dunno. I'm just a compulsive nitpicker.
Epiphany:
Lol. Well thank you for admitting this.
wedrifid:
Yes. But I wouldn't expect it to come up too often as a sincere question.
JoshuaFox:
Or the pit-trap: Lethal autonomy that goes back to the Stone Age :-)
CronoDAS:
And deliberately set wildfires.

Possible Solution: A worldwide alliance.

This sounds hard to implement because it would require co-operation from a lot of people, but if the alternative is that our technological progress means we are facing possible extinction (with the 3-D printer solution) or oppression (with the legislation "solution"), that might get most of the world interested in putting the effort into it.

Here's how I imagine it could work:

  1. First, everyone concerned forms an alliance. This would have to be a very big alliance all over the world.

  2. The alliance makes distinctions between we
