Do (incremental) advances in military technology actually change the number of people who die in wars? They might change which people die, or how rapidly, but it seems to me that groups of people who are determined to fight each other are going to do it regardless of what the "best" weapons currently available happen to be. The Mongols wreaked havoc on a scale surpassing World War I with only 13th century technology, and the Rwandan genocide was mostly carried out with machetes. World War I brought about a horror of poison gas, but bullets and explosions don't make people any less dead than poison gas does.
(Although the World War 1 era gases did have one thing that set them apart from other weapons: nonlethal levels of exposure often left survivors with permanent debilitating injuries. Dead is dead, but different types of weapons can be more or less cruel to those who survive the fighting.)
That is very much the right question to ask. How can we best find the answer?
Perhaps a timeline of major wars, together with the casualty figures (both as raw numbers, and as a percentage of estimated combatants) would provide that answer.
Hmmm... of the top ten wars by death toll, according to a Wikipedia list self-described as incomplete, the deaths per war ranged from 8-12 million (no. 10) to 60-78 million (no. 1, WWII) - roughly a sevenfold difference between the midpoints. The second war on the list is the 13th-century Mongol conquests, and the earliest on the list is the Warring States era, in China, around 400 B.C. (10 million, estimated, 9th on the list).
Glancing over the data, I notice that most of the wars on that list either spanned much of the world or took place in China. This, I imagine, is most likely because China has a large population; thus, there are more people to get involved in, and killed in, a war. A list rearranged by the percentage of involved soldiers killed might show a different trend.
I also notice that there is a very wide range of dates; but the century with the most entries in that top-ten list is the twentieth century. That may be influenced by the fact that there were mor...
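To make that comparison concrete, here is a minimal sketch of the kind of normalization being suggested. The death-toll ranges are the ones quoted above from the Wikipedia list; the population figures are placeholder assumptions added purely for illustration, so the percentages only show how much a per-population ranking could differ from the raw one.

```python
# A minimal sanity-check sketch, not a real analysis. The death-toll ranges are
# the ones quoted above from the Wikipedia list; the population figures are
# placeholder assumptions added purely for illustration.

def midpoint(low: float, high: float) -> float:
    return (low + high) / 2

# Spread between no. 1 and no. 10 on the quoted list (figures in millions):
top = midpoint(60, 78)     # WWII, ~69 million deaths
bottom = midpoint(8, 12)   # no. 10 on the list, ~10 million deaths
print(f"raw spread: about {top / bottom:.0f}x")  # roughly sevenfold

# Normalizing by the (assumed) population of the societies involved can reorder
# such a list entirely; both population figures below are rough placeholders.
wars = {
    "WWII (global, 20th century)": (midpoint(60, 78), 2300),
    "Warring States era (China, ~400 BC)": (10.0, 40),
}
for name, (deaths_m, population_m) in wars.items():
    print(f"{name}: ~{deaths_m / population_m:.0%} of assumed population")
```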
Good insight.
No, even a brief examination of history makes it clear that the lethality of warfare is almost completely determined by the culture and ideology of the people involved. In some wars the victors try to avoid civilian casualties, while in others they kill all the adult males or even wipe out entire populations. Those fatalities dwarf anything produced in the actual fighting, and they can be, and have been, inflicted with Bronze Age technology. So anyone interested in making war less lethal would be well advised to focus on spreading tolerant ideologies rather than worrying about weapons technology.
As for the casualty rate of soldiers, that tends to jump up whenever a new type of weapon is introduced and then fall again as tactics change to deal with it. In the long run the dominant factor is again a matter of ideology - an army that tries to minimize casualties can generally do so, while one that sees soldiers as expendable will get them killed in huge numbers regardless of technology.
(BTW, WWI gases are nothing unusual in the crippling-injury department - cannons, guns, explosives, and edged weapons all have a tendency to litter the battlefield with crippled victims as well. What changed in the 20th century was that better medical care meant a larger fraction of crippled soldiers survived their injuries and returned to civilian life.)
(Although the World War 1 era gases did have one thing that set them apart from other weapons: nonlethal levels of exposure often left survivors with permanent debilitating injuries. Dead is dead, but different types of weapons can be more or less cruel to those who survive the fighting.)
Bullets and explosions don't necessarily kill.
In 1948 Norbert Wiener, in the book Cybernetics: Or Control and Communication in the Animal and the Machine, said: "Prefrontal lobotomy... has recently been having a certain vogue, probably not unconnected with the fact that it makes the custodial care of many patients easier. Let me remark in passing that killing them makes their custodial care still easier."
A good article, but one thing that sticks out to me is the overall ineffectiveness of these scientists at preventing the actual use of their technology. Only the recombinant DNA experiment was stopped before actually being carried out.
This may be partly because technologies that were used are more conspicuous. We would know if Napier designed a better cannon, but we don't know how much he delayed the development of artillery by concealing his results.
Right, there's a survivorship bias. You're not going to hear about scientists who successfully prevented anyone from learning about their terrible discoveries (because to be really successful they'd also need to prevent anyone from learning that they'd prevented anyone from learning about their terrible discoveries).
My nitpick is the vague and spooky description of Paul Berg's research. The surrounding tone is great, but this little bit ends up sounding bad.
Current:
Paul Berg (1926-), who carried out part of an experiment (like what?) which would, if completed, have created a potentially carcinogenic (vague) strain of a common gut bacteria (just say E. coli) which could have spread to human beings (Say what relevant people at the time thought, "could have" implies false danger). Due to the concerns of other scientists, he put the final part of the experiment on hold, and called for more attention to the risks of such research as well as a temporary moratorium.
Preferred:
Paul Berg (1926-), who in 1972 had already carried out the preparations for creating a strain of E. coli that contained the genome for a human-infectious virus (SV40) with tentative links to cancer. Robert Pollack (1940-) heard news of this experiment and helped convince Berg to halt it - both were concerned about the danger that this new strain would spread to humans in the lab and become a pathogen. Berg then became a major voice calling for more attention to the risks of such research as well as a temporary moratorium.
Thanks, that's indeed better. I've replaced it with your version. (The original was vague probably because there were several conflicting accounts of what exactly happened, with e.g. different sources dating the experiments to 1971, 1973, or 1974, and then I got kinda frustrated with the thing and did the write-up pretty vaguely.)
Jared Diamond, in Guns, Germs, and Steel, argues that when the time is ripe, scientific discoveries are made quite regardless of who makes them, give or take a few decades. Most discoveries are incremental, and many are made by multiple people simultaneously. So wouldn't a discovery that isn't published just be made elsewhere in a few years' time, possibly by someone without many ethical concerns?
Even a few years of delay can make a big difference if you are in the middle of a major war. If Galston hadn't published his results and they weren't found until a decade or two later, the US probably wouldn't have used Agent Orange in Vietnam. Similarly with chlorine gas in WWI, atomic bombs in WWII, etc. Granted, delaying the invention doesn't necessarily make the overall outcome better. If the atomic bomb wasn't invented until the 1950s and we didn't have the examples of Hiroshima and Nagasaki, then the US or USSR would probably have been more likely to use them against each other.
Asimov once wrote a short story - "The Feeling of Power" - on the subject of misusing technology for evil ends. In the story, set in the far future, a man rediscovers basic mathematics (without a computer) - possibly the most innocent of possible advances - and, once he sees what horrors this invention will lead to, kills himself.
A lot of technology (I'd even say most technology) is double-edged - it can be used for good or evil. Nuclear power or nuclear bombs. Creating disease or eradicating disease. Even humbler inventions are not immune to this: the Archimedean screw can irrigate a field or flood it, and dynamite can be used in excavation or as a weapon.
So, while the ethical scientist should of course evaluate each situation on its merits, take care to ensure that safety protocols are followed (as in the recombinant DNA example in the article), and try to encourage the beneficial uses of technology, I don't think that destroying one's own research is a good general way to accomplish this. (There are specific cases where it might be necessary, of course.) This is mainly because our current society rests on the public research and discoveries of countless people throughout history; I would prefer that future societies be even better than our current one, and the best way I see to ensure that is by giving them a greater base of knowledge to draw from.
There are several ways to nullify, or even reverse progress:
There would have to be an extremely good reason to try one of the top two, since they involve not only removing results but actually poisoning the well for future researchers.
Stylistic note - you use the Leonardo da Vinci submarine example twice in three paragraphs without acknowledgement that you just used it.
Thanks for writing this! I like being able to share LW material with my friends that doesn't trigger philosophical landmines.
My friend told me he wanted to see http://en.wikipedia.org/wiki/Andrei_Sakharov on this list. I must say that I don't know the guy, but based on the Wikipedia article, he was a brilliant Soviet nuclear physicist behind a few of the largest man-made explosions ever to happen, and somewhere around the 1960s he turned to political activism regarding the dangers posed by the nuclear arms race. In the political climate of the 1960s Soviet Union, that was a brave move, too, and the powers that be made him lose much because of that choice.
You are missing a major problem. Not "secrecy will kill progress" - that is, in this context, a lesser problem. The major problem is that scientific secrecy would eventually kill the planet.
In a context of ongoing research and use of any discipline, dangerous techniques must be published, or they will be duplicated over and over again until they cause major damage. If the toxicity of dimethylmercury were a secret, chemical laboratories and entire college campuses dying slowly, horrifically, and painfully would be regular occurrences. No scientific...
Better than developing ethical scientists would be a policy of developing ethical political and military leaders.
Since "this post is arguing for secrecy in particular being a good thing" seems to be a common misunderstanding of the intent of the post, I deleted the mention of hiding one' work from the opening paragraph, as well as added a paragraph explicitly saying that we're not saying that any particular way of taking responsibility is necessarily the correct one.
A scientist who shares a potentially harmful invention with the rest of the world might not necessarily lack ethical concern. If I invented the knife, I could choose to think that sharing it with others would increase the probability of random stabbings and accidental self-inflicted injury (very bad), or I could choose to focus on the fact that it would be an extremely useful tool in everyday life (very good).
Ron Arkin might also belong on the list.
From Robots at War: Scholars Debate the Ethical Issues:
...“I was very enthralled with the thrill of discovery and the drive for research and not as much paying attention to the consequences of, ‘If we answer these questions, what’s going to happen?’” [roboticist Ronald Arkin] says. What was going to happen soon became apparent: Robotics started moving out of the labs and into the military-industrial complex, and Mr. Arkin began to worry that the systems could eventually be retooled as weaponized “killing machines full
Today, the general attitude towards scientific discovery is that all research should be shared and disseminated as widely as possible, and that scientists are not themselves responsible for how their work is used. And for someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.
The reasoning is that if you discover something which could have potenti...
was granted a patent for the atomic bomb in 1934
I think you mean "for the nuclear chain reaction."
I can tell I won't like Bill Joy's article. He can do what he wants to, but I don't see how "humanity" is a good argument against a robotic future. Isn't it a bit presumptuous to assume that all humans are content to remain human, assuming they even like being human all that much?
Thanks, insightful post. I find the research a bit patchy, though. On the atomic bomb alone there has been a vast literature since the 1950s, even in popular fiction - and a couple of crucial names, like Oppenheimer (vs. Teller), the Russell-Einstein Manifesto, and von Weizsäcker, are absent here.
Even if keeping research secret in our modern world was feasible, I don't believe it would be desirable. I would argue that humanity has benefited tremendously from chemistry, modern physics, genetics, and informatics. The problem is that knowledge is amoral. The same knowledge that allows you to build a bomb also allows you to build a nuclear reactor. The same compiler that you use to create the latest communication protocols also allows you to create destructive computer viruses. There's no way of keeping one and discarding the other; and, on the whole, we are IMO better off with computers and other such things than we are without them.
Standing against unintended pandemics, atomic warfare, and other extinction-threatening events has been quite a good idea in retrospect. Those of us working on scientific advances should indeed ponder the consequences.
But the Immerwahr-Haber episode is just an unrelated tearjerker. Really, inventing a process for the creation of nitrogen fertilizers is far more useful than shooting oneself in the heart. Also, chemical warfare turned out not to kill many people after WWI, so such a sacrifice seems rather irrelevant.
Here's what I meant by saying you were begging the question: you were assuming the outcome (few people would be killed by chemical warfare after WW1) did not depend on the protests against chemical weapons.
You said originally that protesting against chemical warfare (CW) during WW1 was not worth the sacrifice involved, because few people were killed by CW after WW1.
But the reason few people were killed is that CW was not used often. And one contributing factor to its not being used was that people had protested its use in WW1, and created the Geneva Protocol.
People who protested CW achieved their goal in reducing the use of CW. So the fact CW was not used much and killed few people, is not evidence that the protest was in vain - to the contrary, it's exactly what you would expect to see if the protest was effective.
Why are some of your links triggering scammish popups? Is it supposed to be some sort of humor?
Since the words you mention don't have any links, it's got to be your computer, or a proxy between your computer and LW.
-- Norbert Wiener (1956), Moral Reflections of a Mathematician.
Today, the general attitude towards scientific discovery is that scientists are not themselves responsible for how their work is used. For someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.
But this is not necessarily the attitude that we should encourage. As technology becomes more powerful, it also becomes more dangerous. Throughout history, many scientists and inventors have recognized this, and taken different kinds of action to help ensure that their work will have beneficial consequences. Here are some of them.
This post is not arguing that any specific approach for taking responsibility for one's actions is the correct one. Some researchers hid their work, others refocused on other fields, still others began active campaigns to change the way their work was being used. It is up to the reader to decide which of these approaches were successful and worth emulating, and which ones were not.
Pre-industrial inventors
-- Leonardo da Vinci
People did not always think that the benefits of freely disseminating knowledge outweighed the harms. O.T. Benfey, writing in a 1956 issue of the Bulletin of the Atomic Scientists, cites F.S. Taylor’s book on early alchemists:
With the Renaissance, science began to be viewed as public property, but many scientists remained cautious about the way in which their work might be used. Although he held the office of military engineer, Leonardo da Vinci (1452-1519) drew a distinction between offensive and defensive warfare, and emphasized the role of good defenses in protecting people’s liberty from tyrants. He described war as ‘bestialissima pazzia’ (most bestial madness), and wrote that ‘it is an infinitely atrocious thing to take away the life of a man’. One of the clearest examples of his reluctance to unleash dangerous inventions was his refusal to publish the details of his plans for submarines.
Later Renaissance thinkers continued to be concerned with the potential uses of their discoveries. John Napier (1550-1617), the inventor of logarithms, also experimented with a new form of artillery. Upon seeing its destructive power, he decided to keep its details a secret, and even spoke from his deathbed against the creation of new kinds of weapons.
But concealing a single discovery pales in comparison to the example of Robert Boyle (1627-1691). A pioneer of physics and chemistry, probably most famous for describing and publishing Boyle's law, he sought to make humanity better off, taking an interest in things such as improved agricultural methods as well as better medicine. In his studies, he also gained knowledge and made inventions related to a variety of potentially harmful subjects, including poisons, invisible ink, counterfeit money, explosives, and kinetic weaponry. These 'my love of Mankind has oblig'd me to conceal, even from my nearest Friends'.
Chemical warfare
By the early twentieth century, people had begun looking at science in an increasingly optimistic light: it was believed that science would not only continue to improve everyone's prosperity, but also make war outright impossible. Yet as science became more sophisticated, it also became possible to cause ever more harm with ever fewer resources. One of the early indications of science's ability to do harm came from advances in chemical warfare, and World War I saw the deployment of chlorine, phosgene, and mustard gas as weapons. It should not be surprising, then, that some scientists in related fields began growing concerned. But unlike earlier inventors, at least three of them did far more than just refuse to publish their work.
Clara Immerwahr (1870-1915) was a German chemist and the first woman to obtain a PhD from the University of Breslau. She was strongly opposed to the use of chemical weapons. Married to Fritz Haber, ‘the father of chemical warfare’, she unsuccessfully attempted many times to convince her husband to abandon his work. Immerwahr was generally depressed and miserable over the fact that society considered a married woman’s place to be at home, denying her the opportunity to do science. In the end, after her efforts to dissuade her husband from working on chemical warfare had failed and Fritz had personally overseen the first major use of chlorine, she committed suicide by shooting herself in the heart.
Poison gas also concerned scientists in other disciplines. Lewis Fry Richardson (1881-1953) was a mathematician and meteorologist. During World War II, the military became interested in his work on turbulence and gas mixing, and attempted to recruit him to help model the most effective ways of using poison gas. Realizing what his work would be used for, Richardson abandoned meteorology entirely and destroyed his unpublished research. Instead, he turned to investigating the causes of war, attempting to find ways to reduce the risk of armed conflict. He devoted the rest of his life to this topic, and is today considered one of the founders of the scientific analysis of conflict.
Arthur Galston (1920-2008), a botanist, was also concerned with the military use of his discoveries. Building upon his work, the US military developed Agent Orange, a herbicidal defoliant which was deployed in the Vietnam War. Upon discovering what his work had been used for, he began to campaign against its use, and together with a number of others finally convinced President Nixon to order an end to its spraying in 1970. Reflecting upon the matter, Galston wrote:
After retiring in 1990, he founded the Interdisciplinary Center for Bioethics at Yale, where he also taught bioethics to undergraduates.
Nuclear weapons
While chemical weapons are capable of inflicting serious injuries as well as birth defects on large numbers of people, they have never been viewed as being as dangerous as nuclear weapons. As physicists became capable of creating weapons of unparalleled destructive power, they also grew ever more concerned about the consequences of their work.
Leó Szilárd (1898-1964) was one of the first people to envision nuclear weapons, and was granted a patent for the nuclear chain reaction in 1934. Two years later, he grew worried that Nazi scientists would find his patents and use them to create weapons, so he asked the British Patent Office to withdraw them and secretly reassign them to the Royal Navy. His fear of Nazi Germany developing nuclear weapons also made him instrumental in getting the USA to initiate the Manhattan Project: together with two other scientists, he wrote the Einstein-Szilárd letter that advised President Roosevelt of the need to develop the same technology. But in 1945, he learned that the atomic bomb was about to be used on Japan, despite it being certain that neither Germany nor Japan had one. He then did his best to stop the bombs from being used and started a petition against their use, with little success.
After the war, he no longer wanted to contribute to the creation of weapons and changed fields to molecular biology. In 1962, he founded the Council for a Livable World, which aimed to warn people about the dangers of nuclear war and to promote a policy of arms control. The Council continues its work even today.
Another physicist who worked on the atomic bomb due to a fear of it being developed by Nazi Germany was Joseph Rotblat (1908-2005), who felt that the Allies also having an atomic bomb would deter the Axis from using one. But he gradually began to realize that Nazi Germany would likely never develop the atomic bomb, destroying his initial argument for working on it. He also came to realize that the bomb continued to be under active development due to reasons that he felt were unethical. In conversation, General Leslie Groves mentioned that the real purpose of the bomb was to subdue the USSR. Rotblat was shocked to hear this, especially given that the Soviet Union was at the time an ally in the war effort. In 1944, it became apparent that Germany would not develop the atomic bomb. As a result, Rotblat asked for permission to leave the project, and was granted it.
Afterwards, Rotblat regretted his role in developing nuclear weapons. He believed that the logic of nuclear deterrence was flawed, since he thought that if Hitler had possessed an atomic bomb, his last order would have been to use it against London regardless of the consequences. Rotblat decided to do whatever he could to prevent the future use and deployment of nuclear weapons, and proposed a worldwide moratorium on such research until humanity was wise enough to use it without risks. He decided to repurpose his career into something more useful for humanity, and began studying and teaching the application of nuclear physics to medicine, becoming a professor at the Medical College of St Bartholomew's Hospital in London.
Rotblat worked together with Bertrand Russell to limit the spread of nuclear weapons, and the two collaborated with a number of other scientists to issue the Russell-Einstein Manifesto in 1955, calling on the governments of the world to take action to prevent nuclear weapons from doing more damage. The manifesto led to the establishment of the Pugwash Conferences, in which nuclear scientists from both the West and the East met each other. By facilitating dialogue between the two sides of the Cold War, these conferences helped lead to several arms control agreements, such as the Partial Test Ban Treaty of 1963 and the Non-Proliferation Treaty of 1968. In 1995, Rotblat and the Pugwash Conferences were awarded the Nobel Peace Prize “for their efforts to diminish the part played by nuclear arms in international politics and, in the longer run, to eliminate such arms”.
The development of nuclear weapons also affected Norbert Wiener (1894-1964), professor of mathematics at the Massachusetts Institute of Technology and the originator of the field of cybernetics. After the Hiroshima bombing, a researcher working for a major aircraft corporation requested a copy of an earlier paper of Wiener’s. Wiener refused to provide it, and sent the Atlantic Monthly a copy of his response to the researcher, in which he declared his refusal to share his research with anyone who would use it for military purposes.
Recombinant DNA
For a large part of history, scientists’ largest ethical concerns came from the direct military applications of their inventions. While any invention could lead to unintended societal or environmental consequences, for the most part researchers who worked on peaceful technologies didn’t need to be too concerned with their work being dangerous by itself. But as biological and medical research gained the capability to modify genes and bacteria, it opened up the possibility of unintentionally creating dangerous infectious diseases. In theory, these could be even more dangerous than nuclear weapons - an A-bomb dropped on a city might destroy most of that city, but a single bacterium could give rise to an epidemic infecting people all around the world.
Recombinant DNA techniques involve taking DNA from one source and then introducing it into another kind of organism, causing the new genes to express themselves in the target organism. One of the pioneers of this technique was Paul Berg (1926-), who in 1972 had already carried out the preparations for creating a strain of E. coli that contained the genome for a human-infectious virus (SV40) with tentative links to cancer. Robert Pollack (1940-) heard news of this experiment and helped convince Berg to halt it - both were concerned about the danger that this new strain would spread to humans in the lab and become a pathogen. Berg then became a major voice calling for more attention to the risks of such research as well as a temporary moratorium. This eventually led to two conferences in Asilomar, with 140 experts participating in the later, 1975 one to decide upon guidelines for recombinant DNA research.
Berg and Pollack were far from the only scientists to call attention to the safety concerns of recombinant DNA. Several other scientists contributed, asking for more safety and voicing concern about a technology that could bring harm if misused.
Among them, the molecular biologist Maxine Singer (1931-) chaired the 1973 Gordon Conference on Nucleic Acids, at which some of the dangers of the technique were discussed. After the conference, she and several other similarly concerned scientists authored a letter to the President of the National Academy of Sciences and the President of the Institutes of Health. The letter suggested that a study committee be established to study the risks behind the new recombinant DNA technology and propose specific actions or guidelines if necessary. She also helped organize the Asilomar Conference in 1975.
Informatics
-- Bill Joy, Why the Future Doesn’t Need Us.
Finally, we come to the topic of information technology and artificial intelligence. As AI systems grow increasingly autonomous, they might become the ultimate example of a technology that seems initially innocuous but ends up capable of doing great damage. Especially if they were to become capable of rapid self-improvement, they could lead to humanity going extinct.
In addition to refusing to help military research, Norbert Wiener was also concerned about the effects of automation. In 1949, General Electric wanted him to advise its managers on automation matters and to teach automation methods to its engineers. Wiener refused these requests, believing that they would further a development which would lead to human workers becoming unemployed and replaced by machines. He thus expanded his boycott of the military into a boycott of corporations that he thought acted unethically.
Wiener was also concerned about the risks of autonomous AI. In 1960, Science published his paper "Some Moral and Technical Consequences of Automation", in which he spoke at length about the dangers of machine intelligence. He warned that machines might act far too fast for humans to correct their mistakes, and that like genies in stories, they could fulfill the letter of our requests without caring about their spirit. He also discussed such worries elsewhere.
Such worries would continue to bother other computer scientists as well, many decades after Wiener’s death. Bill Joy (1954-) is known for having played a major role in the development of BSD Unix, having authored the vi text editor, and being the co-founder of Sun Microsystems. He became concerned about the effects of AI in 1998, when he met Ray Kurzweil at a conference where they were both speakers. Kurzweil gave Joy a preprint of his then-upcoming book, The Age of Spiritual Machines, and Joy found himself concerned over its discussion about the risks of AI. Reading Hans Moravec’s book Robot: Mere Machine to Transcendent Mind exacerbated Joy’s worries, as did several other books which he found around the same time. He began to wonder whether all of his work in the field of information technology and computing had been preparing the way for a world where machines would replace humans.
In 2000, Joy wrote a widely read article for Wired titled Why the Future Doesn’t Need Us, discussing the dangers of AI as well as genetic engineering and nanotechnology. In the article, he called for limiting the development of technologies which he felt were too dangerous. Since then, he has continued to be active in promoting responsible technology research. In 2005, an op-ed co-authored by Joy and Ray Kurzweil was published in the New York Times, arguing that the decision to publish the genome of the 1918 influenza virus on the Internet had been a mistake.
Joy also attempted to write a book on the topic, but then became convinced that he could achieve more by working on science and technology investment. In 2005, he joined the venture capital firm Kleiner Perkins Caufield & Byers as a partner, and he has been focused on investments in green technology.
Conclusion
Technology's potential for destruction will only continue to grow, but many of the social norms of science were established under the assumption that scientists don’t need to worry much about how the results of their work are used. Hopefully, the examples provided in this post can encourage more researchers to consider the broader consequences of their work.
Sources used
This article was written based on research done by Vincent Fagot. The sources listed below are in addition to any that are already linked from the text.
Leonardo da Vinci:
John Napier:
Robert Boyle:
Clara Immerwahr:
Lewis Fry Richardson:
Arthur Galston:
Leó Szilárd:
Joseph Rotblat:
Norbert Wiener:
Paul Berg, Maxine Singer, Robert Pollack: