At present, if one person chooses to, they can kill a few dozen to a few hundred people. As we discover new technologies, that number is, most likely, only going to go up - to the point where any given individual has the power to kill millions. (And this isn't a very distant future, either; it's entirely possible to put together a basement biology lab of sufficient quality to create smallpox for just a few thousand dollars.)

If we want to avoid human extinction, I can think of two general approaches. One starts by assuming that humans are generally untrustworthy, and involves trying to keep any such technology out of people's hands, no matter what other benefits such knowledge may offer. This method has a number of flaws, the most obvious being the difficulty of keeping such secrets contained; another is the classic "who watches the watchers?" problem.

The other doesn't start with that assumption; instead, it tries to figure out what it takes to keep people from /wanting/ to kill large numbers of other people... a sort of "Friendly Human Problem". For example, we might start with a set of societies in which every individual has the power to kill any other at any moment, observe which social norms allow people to at least generally get along with each other, and then encourage those norms as the foundation as those people gain increasingly lethal knowledge.

Most likely, some people will try the first approach and some the second (indeed, both groups probably already exist) - which seems bound to cause friction wherever the two meet.

In the medium-to-long term, if we do establish viable off-Earth colonies, an important factor to consider is that once you're in Earth orbit, you're halfway to anywhere in the solar system - including to asteroids which can be nudged into Earth orbit for mining... or nudged to crash into Earth itself. Any individual who has the power to move around the solar system, such as to create a new self-sufficient colony somewhere (which, I've previously established to my own satisfaction, is the only way for humanity to survive a variety of extinction-level events), will have the power to kill billions. If sapience is to survive, we will /have/ to deal with people having lethal power undreamt of by today's worst tyrannical regimes - which would seem to make the first approach /entirely/ unviable.


Once people have such lethal power, I've been able to think of two stable end-points. The obvious one is that everyone ends up dead - a rather suboptimal result. The other... is if everyone who has such power is very careful to never be the /first/ one to use force against anyone else, thus avoiding escalation. In game theory terms, this means all the remaining strategies have to be 'nice'; in political terms, this is summed up as the libertarian "Non-Aggression Principle".

I think I need to think a bit more about some of the other lessons of game theory's Tit-for-Tat, such as the value of being retaliatory, forgiving, and non-envious, and about whether variations on the basic Tit-for-Tat, such as "Tit for Two Tats" or "Tit for Tat with Forgiveness", would be better models. For example, the level of forgiveness that serves best might depend on how many people are still willing to initiate force compared to how many try not to but occasionally make mistakes.
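To make that comparison concrete, here is a minimal simulation sketch in Python. The payoff values are the standard illustrative prisoner's-dilemma numbers, and the strategy implementations, the "Generous Tit-for-Tat" forgiveness probability, and the per-move error rate are my own assumptions for illustration, not anything specified above.

```python
import random

# Standard prisoner's dilemma payoffs (illustrative values only):
# (my payoff, their payoff) for each pair of moves, C = cooperate, D = defect.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else 'C'

def tit_for_two_tats(my_history, their_history):
    """Defect only after two consecutive defections by the opponent."""
    if len(their_history) >= 2 and their_history[-2:] == ['D', 'D']:
        return 'D'
    return 'C'

def generous_tit_for_tat(my_history, their_history, forgiveness=0.1):
    """Like Tit-for-Tat, but forgive a defection with some probability."""
    if their_history and their_history[-1] == 'D':
        return 'C' if random.random() < forgiveness else 'D'
    return 'C'

def play(strategy_a, strategy_b, rounds=200, noise=0.05):
    """Iterated game with 'trembling hands': each intended move is flipped
    with probability `noise`, standing in for honest mistakes."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        if random.random() < noise:
            move_a = 'D' if move_a == 'C' else 'C'
        if random.random() < noise:
            move_b = 'D' if move_b == 'C' else 'C'
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == '__main__':
    strategies = {'Tit-for-Tat': tit_for_tat,
                  'Tit-for-Two-Tats': tit_for_two_tats,
                  'Generous Tit-for-Tat': generous_tit_for_tat}
    for name_a, strat_a in strategies.items():
        for name_b, strat_b in strategies.items():
            a, b = play(strat_a, strat_b)
            print(f'{name_a} vs {name_b}: {a} / {b}')
```

Varying the `noise` parameter is roughly the experiment described above: as honest mistakes become more common relative to deliberate defection, the more forgiving variants tend to lose less to spirals of mutual retaliation.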

I'm also rather suspicious that my thinking on this particular issue leads me to a political conclusion that's reasonably close to (though not precisely) my existing beliefs; I know that I don't have enough practice with true rationality to be able to figure out whether this means that I've come to a correct result from different directions, or that my thoughts are biased to come to that conclusion whatever the input. I'd appreciate any suggestions on techniques for differentiating between the two.

15 comments

Game theory will only help with people who aren't crazy (if their utility function is the number of dead people, regardless of their own survival, then it is hard to concoct good deterrents).

Yeah, the risk of superpowered serial killers is larger than the risk of superpowered terrorists.

Perhaps - but /some/ sort of analysis, whatever it is called, seems important to try here. For example, looking around at the world today, the number of people whose "utility function is the number of dead people" is very small compared to the number whose utility function isn't, implying that it's more the exception than the rule - which at least raises the hope that if the causes of people acquiring such a utility function can be understood, then, perhaps, measures could be taken to prevent anyone from acquiring it in the first place.

Or maybe such is impossible - but it seems a problem worth investigating, yesno?

I think we are in agreement (with your above comment, not the entire OP) unless I misunderstand what you are saying.

Trying to stop such people from existing is different from trying to use game theory to deter such people. I am fully in favor of your suggestion, but I think it is naive to assume that we can deter everyone through things like mutually assured destruction, etc.

I will note that I think that what you suggest is very difficult, but I don't have any better ideas currently; the advantage of your suggestion is that partial progress buys us time (in that decreasing the number of crazy people decreases the likelihood that one of them has enough technical expertise to create powerful weapons).

The fallacy here is the classic one of postulating an individual with advanced technology from the distant future, and imagining what he could do in today's world, forgetting that such an individual will in fact necessarily be living in the future where other people have similar technology. It's the equivalent of saying "Spartacus could have conquered Rome, he just needed to use a tank, Roman legionaries didn't have any weapon that could penetrate tank armor". For example:

Any individual who has the power to move around the solar system, such as to create a new self-sufficient colony somewhere (which, I've previously established to my own satisfaction, is the only way for humanity to survive a variety of extinction-level events), will have the power to kill billions.

If one individual has the means to deflect an asteroid onto collision course with Earth, another individual - let alone a team, a company or a government - will have the means to deflect it away again. (For that matter, at that tech level, if an asteroid impact did take place it might be no more inconvenience than a heavy shower of rain is today.)

If one individual has the means to deflect an asteroid onto collision course with Earth, another individual - let alone a team, a company or a government - will have the means to deflect it away again.

I'm not so sure.

Once upon a time, if an animal wanted to kill another animal, the only way to do it was to actually go up to the victim and break their neck, tear their guts out, or the like. Once proto-humans came along, they (we) devised rock-throwing, spears, and other techniques of killing at a moderate distance. Later, bows and ballistae; still later, guns; and so on.

The fact that we have invented nukes does not mean that we have invented the ability to protect against nukes. The fact that we have invented Anthrax Leprosy Mu¹ does not mean that we have invented the ability to protect against it. In general, technology has frequently favored the attacker, such that modern geopolitics is dominated not by fortifications to render a defender immune from any attack, but by the threat of retaliation from the defender or defender's allies: mutually assured destruction.


¹ or any other biological warfare agent

Human intuition is indeed so constituted to find that line of argument persuasive, but reality differs. Genghis Khan killed more people than Hitler (even in absolute numbers, let alone as a fraction of people alive at the time). The Mongol sack of Baghdad killed more people than the bombings of Hamburg, Dresden, Tokyo, Hiroshima and Nagasaki all put together. Synthetic diseases add nothing to the picture; nature throws incurable diseases at us all the time. The 1918 flu killed more people in one year than all man's ingenuity had done in four. If SARS hadn't been stopped by quarantine, it would have killed more people than any human agency in history.

And yet the meme that individual power is the danger we must fear, may yet prove deadlier than all of those combined. No weapon, no disease, has by itself the power to extinguish the future. A sufficiently appealing and plausible sounding meme just might.

Genghis Khan killed more people than Hitler (even in absolute numbers, let alone as a fraction of people alive at the time).

[citation needed]

Here is one of the more conservative estimates, putting 40 million on Genghis Khan's account, which suffices to establish the original claim. (The total for World War II is somewhat higher, but includes all theaters of the war - and is of course much smaller as a fraction of people alive at the time.) Higher end estimates are necessarily less precise, but I've seen it suggested that the Mongol invasion of China alone may have caused up to 60 million deaths (out of a total population of 120 million) once the famines resulting from disruption of agriculture are fully accounted for.

Thank you.

Modification of individual minds will inevitably be part of the "answer", for societies which have that power and which are genuinely threatened with annihilation. A simple unrealistic scenario: Imagine a city in space on the brink of a nanotechnological assembler revolution. Imagine a "Borg Party" who say the answer is for everyone to have regular brain scans to check for berserker tendencies, with forced neural pacification for people who show up as dangerous. Then imagine increasingly heated conflict between Borgists and antiBorgists, ending in armed struggle, victory of Borgists, and invasion of antiBorgist enclaves, followed by the forced enrolment of remaining antiBorgists in the brain-scan regime.

When people are genuinely threatened, they will do the previously unthinkable, if the unthinkable appears to be necessary. And it is surely inevitable that access to technologies which pose an extinction risk will only be permitted to entities which demonstrably won't use that power to destroy everyone. How could it be any other way?

And it is surely inevitable that access to technologies which pose an extinction risk will only be permitted to entities which demonstrably won't use that power to destroy everyone.

How could it be prevented? Are all science books to be hidden away from all people, all machine tools, all user-modifiable equipment, all transportation equipment, etc, etc? Or am I looking at your suggestion in the wrong way?

What I wrote was about a single polity dealing with the "threat from within" by compelling all its citizens with access to dangerous technology to undergo regular mental health checks, with compulsory mind modification if they appear to be a threat. This could be elaborated into a class system in which the lower class are exempt from the brain scans, but only have access to preindustrial technology. It could even become a principle of interstate relations and a motive for war - a way of dealing with the threat from outside a particular society: only trust those societies who have a society-wide ethical-brain-scan protocol in place, and introduce this system, by force, propaganda or subversion, into those societies which don't have it.

So the main idea is not to hide the dangerous technology, but rather to change human nature so you don't need to hide it. Of course, in the real world of the present we are rather far from having the ability to scan brains and detect sociopathy or dangerous private value systems, let alone the ability to alter personality or motivation in a clean, selective way (that doesn't impair or affect other aspects of the person).

At present, if one person chooses to, they can kill a few dozen to a few hundred people. As we discover new technologies, that number is, most likely, only going to go up.

This is an interesting claim, a plausible one, and something that worries me too. But I'm not actually all that confident it's true. I think the main change in destructive capacity over the last 50 or 100 years is in bioengineering. But it might be that our defensive capacities are growing even more rapidly than our destructive ones. Is there a way we could measure this? Skilled work hours to create a pathogen versus skilled work hours to design a vaccine for it?
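One crude way to make that proposed metric concrete - purely a sketch; the record fields, units, and names are hypothetical, and any numbers fed in would be exactly the estimates being asked for, not data anyone has - might look like:

```python
from dataclasses import dataclass

@dataclass
class OffenseDefenseEstimate:
    """Hypothetical record: skilled work hours needed to mount an attack
    versus to field a countermeasure, for one technology at one point in time."""
    year: int
    attack_hours: float   # e.g. hours to engineer a pathogen
    defense_hours: float  # e.g. hours to design and deploy a vaccine for it

def offense_defense_ratio(est: OffenseDefenseEstimate) -> float:
    """Ratio > 1 means defense costs more skilled labor than attack
    (attacker-favored); < 1 means defense is cheaper (defender-favored)."""
    return est.defense_hours / est.attack_hours

def trend(estimates: list[OffenseDefenseEstimate]) -> list[tuple[int, float]]:
    """Ratio over time: a rising series would support the worry in the post,
    a falling one would support the hope that defense is outpacing offense."""
    ordered = sorted(estimates, key=lambda e: e.year)
    return [(e.year, offense_defense_ratio(e)) for e in ordered]
```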


In the Culture, all sophisticated technology is intelligent and sentient. If you try something creative, the drones will stop you anyway.

[This comment is no longer endorsed by its author]