
A brief history of ethically concerned scientists

Post author: Kaj_Sotala | 09 February 2013 05:50AM | 68 points

For the first time in history, it has become possible for a limited group of a few thousand people to threaten the absolute destruction of millions.

-- Norbert Wiener (1956), Moral Reflections of a Mathematician.


Today, the general attitude towards scientific discovery is that scientists are not themselves responsible for how their work is used. For someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.

But this is not necessarily the attitude that we should encourage. As technology becomes more powerful, it also becomes more dangerous. Throughout history, many scientists and inventors have recognized this, and taken different kinds of action to help ensure that their work will have beneficial consequences. Here are some of them.

This post is not arguing that any specific approach for taking responsibility for one's actions is the correct one. Some researchers hid their work, others refocused on other fields, still others began active campaigns to change the way their work was being used. It is up to the reader to decide which of these approaches were successful and worth emulating, and which ones were not.

Pre-industrial inventors

… I do not publish nor divulge [methods of building submarines] by reason of the evil nature of men who would use them as means of destruction at the bottom of the sea, by sending ships to the bottom, and sinking them together with the men in them.

-- Leonardo da Vinci


People did not always think that the benefits of freely disseminating knowledge outweighed the harms. O.T. Benfey, writing in a 1956 issue of the Bulletin of the Atomic Scientists, cites F.S. Taylor’s book on early alchemists:

Alchemy was certainly intended to be useful .... But [the alchemist] never proposes the public use of such things, the disclosing of his knowledge for the benefit of man. …. Any disclosure of the alchemical secret was felt to be profoundly wrong, and likely to bring immediate punishment from on high. The reason generally given for such secrecy was the probable abuse by wicked men of the power that the alchemical [art] would give …. The alchemists, indeed, felt a strong moral responsibility that is not always acknowledged by the scientists of today.


With the Renaissance, science began to be viewed as public property, but many scientists remained cautious about the way in which their work might be used. Although he held the office of military engineer, Leonardo da Vinci (1452-1519) drew a distinction between offensive and defensive warfare, and emphasized the role of good defenses in protecting people’s liberty from tyrants. He described war as ‘bestialissima pazzia’ (most bestial madness), and wrote that ‘it is an infinitely atrocious thing to take away the life of a man’. One of the clearest examples of his reluctance to unleash dangerous inventions was his refusal to publish the details of his plans for submarines.

Later Renaissance thinkers continued to be concerned with the potential uses of their discoveries. John Napier (1550-1617), the inventor of logarithms, also experimented with a new form of artillery. Upon seeing its destructive power, he decided to keep its details a secret, and even spoke from his deathbed against the creation of new kinds of weapons.

But concealing a single discovery pales in comparison to the example set by Robert Boyle (1627-1691). A pioneer of physics and chemistry, perhaps best known for describing and publishing Boyle’s law, he sought to make humanity better off, taking an interest in matters such as improved agricultural methods and better medicine. In the course of his studies, he also acquired knowledge and made inventions related to a variety of potentially harmful subjects, including poisons, invisible ink, counterfeit money, explosives, and kinetic weaponry. These ‘my love of Mankind has oblig’d me to conceal, even from my nearest Friends’.

Chemical warfare

By the early twentieth century, people had begun looking at science in an increasingly optimistic light: it was believed that science would not only continue to improve everyone’s prosperity, but also make war outright impossible. Yet as science grew more sophisticated, it also became possible to cause ever more harm with ever fewer resources. One of the early indications of science’s ability to do harm came from advances in chemical warfare, and World War I saw the deployment of chlorine, phosgene, and mustard gas as weapons. It should not be surprising, then, that some scientists in related fields began growing concerned. But unlike earlier inventors, at least three of them did far more than just refuse to publish their work.

Clara Immerwahr (1870-1915) was a German chemist and the first woman to obtain a PhD from the University of Breslau. Married to Fritz Haber, ‘the father of chemical warfare’, she was strongly opposed to the use of chemical weapons and tried many times, without success, to convince her husband to abandon his work on them. Immerwahr was generally depressed and miserable over the fact that society considered a married woman’s place to be at home, denying her the opportunity to do science. In the end, after her efforts to dissuade her husband had failed and Fritz had personally overseen the first major use of chlorine, she committed suicide by shooting herself in the heart.

Poison gas also concerned scientists in other disciplines. Lewis Fry Richardson (1881-1953) was a mathematician and meteorologist. During World War II, the military became interested in his work on turbulence and gas mixing, and attempted to recruit him to help model the best ways of using poison gas. Realizing what his work would be used for, Richardson abandoned meteorology entirely and destroyed his unpublished research. Instead, he turned to investigating the causes of war, attempting to find ways to reduce the risk of armed conflict. He devoted the rest of his life to this topic, and is today considered one of the founders of the scientific analysis of conflict.

Arthur Galston (1920-2008), a botanist, was also concerned with the military use of his research. Building upon his work, the US military developed Agent Orange, a defoliant that was deployed in the Vietnam War. Upon discovering what his work had been used for, he began to campaign against its use, and together with a number of others finally convinced President Nixon to order an end to its spraying in 1970. Reflecting upon the matter, Galston wrote:

I used to think that one could avoid involvement in the antisocial consequences of science simply by not working on any project that might be turned to evil or destructive  ends. I have learned that things are not all that simple, and that almost any scientific finding can be perverted or twisted under appropriate societal pressures. In my view, the only recourse for a scientist concerned about the social consequences of his work is to remain involved with it to the end. His responsibility to society does not cease with publication of a definitive scientific paper. Rather, if his discovery is translated into some impact on the world outside the laboratory, he will, in most instances, want to follow through to see that it is used for constructive rather than anti-human purposes.


After retiring in 1990, he founded the Interdisciplinary Center for Bioethics at Yale, where he also taught bioethics to undergraduates.

Nuclear weapons

While chemical weapons are capable of inflicting serious injuries as well as birth defects on large numbers of people, they have never been considered as dangerous as nuclear weapons. As physicists became capable of creating weapons of unparalleled destructive power, they also grew ever more concerned about the consequences of their work.

Leó Szilárd (1898-1964) was one of the first people to envision nuclear weapons, and he patented the idea of the nuclear chain reaction in 1934. Two years later, worried that Nazi scientists would find his patent and use it to create weapons, he asked the British Patent Office to withdraw it and secretly reassign it to the Royal Navy. His fear of Nazi Germany developing nuclear weapons also made him instrumental in getting the USA to initiate the Manhattan Project: he and two other scientists wrote the Einstein-Szilárd letter, which advised President Roosevelt of the need to develop the same technology. But in 1945, he learned that the atomic bomb was about to be used on Japan, even though it was by then certain that neither Germany nor Japan had one. He did his best to prevent the bombs from being used, organizing a petition against their use, but with little success.

After the war, he no longer wanted to contribute to the creation of weapons and changed fields to molecular biology. In 1962, he founded the Council for a Livable World, which aimed to warn people about the dangers of nuclear war and to promote a policy of arms control. The Council continues its work even today.

Another physicist who worked on the atomic bomb out of fear that Nazi Germany would develop one was Joseph Rotblat (1908-2005), who felt that the Allies also having an atomic bomb would deter the Axis from using theirs. But he gradually began to realize that Nazi Germany was unlikely ever to develop the atomic bomb, which undermined his original reason for working on it. He also came to believe that the bomb's continued development was driven by motives he considered unethical: in conversation, General Leslie Groves mentioned that the real purpose of the bomb was to subdue the USSR. Rotblat was shocked to hear this, especially given that the Soviet Union was at the time an ally in the war effort. When it became apparent in 1944 that Germany would not develop the atomic bomb, Rotblat asked for permission to leave the project, and was granted it.

Afterwards, Rotblat regretted his role in developing nuclear weapons. He believed that the logic of nuclear deterrence was flawed: had Hitler possessed an atomic bomb, he thought, Hitler’s last order would have been to use it against London regardless of the consequences. Rotblat decided to do whatever he could to prevent the future use and deployment of nuclear weapons, and proposed a worldwide moratorium on such research until humanity was wise enough to handle it without risk. He redirected his career toward something he considered more useful for humanity, and began studying and teaching the application of nuclear physics to medicine, becoming a professor at the Medical College of St Bartholomew’s Hospital in London.

Rotblat worked together with Bertrand Russell to limit the spread of nuclear weapons, and the two collaborated with a number of other scientists to issue the Russell-Einstein Manifesto in 1955, calling on the governments of the world to take action to prevent nuclear weapons from doing further damage. The manifesto led to the establishment of the Pugwash Conferences, in which nuclear scientists from both the West and the East met one another. By facilitating dialogue between the two sides of the Cold War, these conferences helped lead to several arms control agreements, such as the Partial Test Ban Treaty of 1963 and the Non-Proliferation Treaty of 1968. In 1995, Rotblat and the Pugwash Conferences were awarded the Nobel Peace Prize “for their efforts to diminish the part played by nuclear arms in international politics and, in the longer run, to eliminate such arms”.

The development of nuclear weapons also affected Norbert Wiener (1894-1964), professor of mathematics at the Massachusetts Institute of Technology and the originator of the field of cybernetics. After the Hiroshima bombing, a researcher working for a major aircraft corporation requested a copy of an earlier paper of Wiener’s. Wiener refused to provide it, and sent the Atlantic Monthly a copy of his response to the researcher, in which he declared his refusal to share his research with anyone who would use it for military purposes.

In the past, the community of scholars has made it a custom to furnish scientific information to any person seriously seeking it. However, we must face these facts: The policy of the government itself during and after the war, say in the bombing of Hiroshima and Nagasaki, has made it clear that to provide scientific information is not a necessarily innocent act, and may entail the gravest consequences. One therefore cannot escape reconsidering the established custom of the scientist to give information to every person who may inquire of him. The interchange of ideas, one of the great traditions of science, must of course receive certain limitations when the scientist becomes an arbiter of life and death. [...]


The experience of the scientists who have worked on the atomic bomb has indicated that in any investigation of this kind the scientist ends by putting unlimited powers in the hands of the people whom he is least inclined to trust with their use. It is perfectly clear also that to disseminate information about a weapon in the present state of our civilization is to make it practically certain that that weapon will be used. [...]


If therefore I do not desire to participate in the bombing or poisoning of defenseless peoples-and I most certainly do not-I must take a serious responsibility as to those to whom I disclose my scientific ideas. Since it is obvious that with sufficient effort you can obtain my material, even though it is out of print, I can only protest pro forma in refusing to give you any information concerning my past work. However, I rejoice at the fact that my material is not readily available, inasmuch as it gives me the opportunity to raise this serious moral issue. I do not expect to publish any future work of mine which may do damage in the hands of irresponsible militarists.


I am taking the liberty of calling this letter to the attention of other people in scientific work. I believe it is only proper that they should know of it in order to make their own independent decisions, if similar situations should confront them.


Recombinant DNA

For most of history, scientists’ greatest ethical concerns arose from the direct military applications of their inventions. While any invention could lead to unintended societal or environmental consequences, researchers working on peaceful technologies mostly did not need to worry about their work being dangerous in itself. But as biological and medical research gained the ability to modify genes and bacteria, it opened up the possibility of unintentionally creating dangerous infectious diseases. In theory, these could be even more dangerous than nuclear weapons - an atomic bomb dropped on a city might destroy most of that city, but a single bacterium could give rise to an epidemic infecting people all around the world.

Recombinant DNA techniques involve taking DNA from one source and introducing it into another kind of organism, causing the new genes to express themselves in the target organism. One of the pioneers of this technique was Paul Berg (1926-), who in 1972 had already carried out the preparations for creating a strain of E. coli that contained the genome of a human-infectious virus (SV40) with tentative links to cancer. Robert Pollack (1940-) heard news of this experiment and helped convince Berg to halt it - both were concerned about the danger that this new strain would spread to humans in the lab and become a pathogen. Berg then became a major voice calling for more attention to the risks of such research, as well as for a temporary moratorium. This eventually led to two conferences at Asilomar; at the second, held in 1975, some 140 experts met to decide upon guidelines for recombinant DNA research.

Berg and Pollack were far from the only scientists to call attention to the safety concerns of recombinant DNA. Several others also pushed for greater caution and voiced concern about a technology that could bring harm if misused.

Among them, the molecular biologist Maxine Singer (1931-) chaired the 1973 Gordon Conference on Nucleic Acids, in which some of the dangers of the technique were discussed. After the conference, she and several other similarly concerned scientists authored a letter to the President of the National Academy of Sciences and the President of the National Institutes of Health. The letter suggested that a committee be established to study the risks of the new recombinant DNA technology and to propose specific actions or guidelines if necessary. She also helped organize the Asilomar Conference in 1975.

Informatics

But if we are downloaded into our technology, what are the chances that we will thereafter be ourselves or even human? It seems to me far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost.

-- Bill Joy, Why the Future Doesn’t Need Us.


Finally, we come to the topic of information technology and artificial intelligence. As AI systems grow increasingly autonomous, they might become the ultimate example of a technology that seems initially innocuous but ends up capable of doing great damage. Especially if they were to become capable of rapid self-improvement, they could lead to humanity going extinct.

In addition to refusing to help military research, Norbert Wiener was also concerned about the effects of automation. In 1949, General Electric wanted him to advise its managers on automation matters and to teach automation methods to its engineers. Wiener refused these requests, believing that doing so would further a development in which human workers would be replaced by machines and left unemployed. He thus expanded his boycott of the military into a boycott of corporations that he thought acted unethically.

Wiener was also concerned about the risks of autonomous AI. In 1960, Science published his paper "Some Moral and Technical Consequences of Automation", in which he spoke at length about the dangers of machine intelligence. He warned that machines might act far too fast for humans to correct their mistakes, and that like genies in stories, they could fulfill the letter of our requests without caring about their spirit. He also discussed such worries elsewhere.

If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.


Such worries would continue to trouble other computer scientists for decades after Wiener’s death. Bill Joy (1954-) is known for having played a major role in the development of BSD Unix, for authoring the vi text editor, and for co-founding Sun Microsystems. He became concerned about the effects of AI in 1998, when he met Ray Kurzweil at a conference where they were both speakers. Kurzweil gave Joy a preprint of his then-upcoming book, The Age of Spiritual Machines, and Joy found himself troubled by its discussion of the risks of AI. Reading Hans Moravec’s book Robot: Mere Machine to Transcendent Mind exacerbated Joy’s worries, as did several other books he encountered around the same time. He began to wonder whether all of his work in information technology and computing had been preparing the way for a world in which machines would replace humans.

In 2000, Joy wrote a widely read article for Wired titled Why the Future Doesn’t Need Us, discussing the dangers of AI as well as genetic engineering and nanotechnology. In the article, he called for limiting the development of technologies which he felt were too dangerous. Since then, he has continued to be active in promoting responsible technology research. In 2005, an op-ed co-authored by Joy and Ray Kurzweil was published in the New York Times, arguing that the decision to publish the genome of the 1918 influenza virus on the Internet had been a mistake.

Joy also attempted to write a book on the topic, but then became convinced that he could achieve more by working on science and technology investment. In 2005, he joined the venture capital firm Kleiner Perkins Caufield & Byers as a partner, and he has been focused on investments in green technology.

Conclusion

Technology's potential for destruction will only continue to grow, but many of the social norms of science were established under the assumption that scientists don’t need to worry much about how the results of their work are used. Hopefully, the examples provided in this post can encourage more researchers to consider the broader consequences of their work.

Sources used

This article was written based on research done by Vincent Fagot, drawing on a number of sources in addition to those linked from the text.


Comments (148)

Comment author: CronoDAS 09 February 2013 07:22:13AM *  28 points [-]

Do (incremental) advances in military technology actually change the number of people who die in wars? They might change which people die, or how rapidly, but it seems to me that groups of people who are determined to fight each other are going to do it regardless of what the "best" weapons currently available happen to be. The Mongols wreaked havoc on a scale surpassing World War I with only 13th century technology, and the Rwandan genocide was mostly carried out with machetes. World War I brought about a horror of poison gas, but bullets and explosions don't make people any less dead than poison gas does.

(Although the World War 1 era gases did have one thing that set them apart from other weapons: nonlethal levels of exposure often left survivors with permanent debilitating injuries. Dead is dead, but different types of weapons can be more or less cruel to those who survive the fighting.)

Comment author: CCC 09 February 2013 07:28:51PM 10 points [-]

That is very much the right question to ask. How can we best find the answer?

Perhaps a timeline of major wars, together with the casualty figures (both as raw numbers, and as a percentage of estimated combatants) would provide that answer.

Hmmm... of the top ten wars by death toll, according to a Wikipedia list self-described as incomplete, deaths per war ranged from 8-12 million (no. 10) to 60-78 million (no. 1, WWII) - roughly an eightfold difference. The second war on the list is the 13th-century Mongol conquests, and the earliest on the list is the Warring States era in China, around 400 B.C. (10 million estimated deaths, 9th on the list).

Glancing over the data, I notice that most of the wars in that list are either world-spanning, or took place in China. This, I imagine, is most likely because China has a large population; thus, there are more people to get involved in, and killed in, a war. A list rearranged by percentage of involved soldiers killed might show a different trend.

I also notice that there is a very wide range of dates; but the century with the most entries in that top-ten list is the twentieth century. That may be influenced by the fact that there were more people around in the 20th century, and also by the scale of some of the conflicts (WWI and WWII, for example).

I'm not sure whether the data supports the hypothesis or not, though. Given the wide range of dates, I'm inclined to think that you may be right; that advances in war change the manner of death more than the number of deaths.

Comment author: V_V 09 February 2013 03:29:03PM 9 points [-]

(Although the World War 1 era gases did have one thing that set them apart from other weapons: nonlethal levels of exposure often left survivors with permanent debilitating injuries. Dead is dead, but different types of weapons can be more or less cruel to those who survive the fighting.)

Bullets and explosions don't necessarily kill.

Comment author: ewbrownv 12 February 2013 10:59:33PM *  7 points [-]

Good insight.

No, even a brief examination of history makes it clear that the lethality of warfare is almost completely determined by the culture and ideology of the people involved. In some wars the victors try to avoid civilian casualties, while in others they kill all the adult males or even wipe out entire populations. Those fatalities dwarf anything produced in the actual fighting, and they can be and have been inflicted with bronze age technology. So anyone interested in making war less lethal would be well advised to focus on spreading tolerant ideologies rather than worrying about weapon technology.

As for the casualty rate of soldiers, that tends to jump up whenever a new type of weapon is introduced and then fall again as tactics change to deal with it. In the long run the dominant factor is again a matter of ideology - an army that tries to minimize casualties can generally do so, while one that sees soldiers as expendable will get them killed in huge numbers regardless of technology.

(BTW, WWI gases are nothing unusual in the crippling injury department - cannons, guns, explosives and edged weapons all have a tendency to litter the battlefield with crippled victims as well. What changed in the 20th century was that better medical care meant a larger fraction of crippled soldiers survived their injuries and returned to civilian life.)

Comment author: Troshen 25 February 2013 10:39:27PM *  0 points [-]

"So anyone interested making war less lethal would be well advised to focus on spreading tolerant ideologies rather than worrying about weapon technology."

This is actually one of the major purposes that Christians have had in doing missionary work - to spread tolerance and reduce violence. I assume it's happened in other religions too. For example, the rules of chivalry in the middle ages were an attempt to moderate the violence and abuses of the warriors.

Comment author: DanArmak 09 February 2013 12:26:12PM 3 points [-]

Some actual or hypothetical advances in military technology allow very widespread, imprecise destruction. Such destruction could kill large segments of the enemy state's civilian population, or of a population in which a guerrilla army is embedded, as a side effect of killing soldiers.

For instance sufficiently powerful or numerous bombs can destroy large cities. Pathogens can kill or sicken an entire population (with the attacker distributing a vaccine or cure among their own population only). Damage to infrastructure can kill those who depend on it.

Comment author: ikrase 12 February 2013 03:24:32PM 0 points [-]

Notably, the two World Wars introduced the mass use of mechanized units and heavy weapons, leading to a huge amount of infrastructure damage.

Comment author: DanArmak 12 February 2013 07:39:22PM *  2 points [-]

On the other hand, a century or two previously little infrastructure existed outside cities. Railways, electricity lines and power plants, car-quality roads, oil and gas pipelines, even most roads or trans-city water and sewage systems are modern inventions.

Comment author: Nornagest 09 February 2013 08:14:07AM 3 points [-]

My armchair impression is that advances in military technology can lead to higher casualty rates when tactics haven't caught up, but that once they do the death toll regresses to the mean pretty quick. Two examples: Minié balls greatly increased the accuracy and effective range of quick-loading small arms (rifling had been around for a while, but earlier muzzle-loading rifles took much longer to load), essentially rendering Napoleonic line tactics obsolete, but it took decades and two major wars (the Crimean and the American Civil War) before the lesson fully sank in. A century later, large-scale strategic bombing of civilian targets contributed to much of WWII's death toll, without bringing about the rapid capitulations it had been intended to produce.

Comment author: CronoDAS 09 February 2013 08:42:25AM 7 points [-]
Comment author: ygert 12 February 2013 05:17:37PM 2 points [-]

Well, one could argue that the biggest advance in military technology (nuclear weapons) vastly decreased the number of deaths in wars where it was involved. That is, far fewer people died from the Cold War than from World War II. So to that extent, military technology actually drove the number of deaths down.

Comment author: ikrase 12 February 2013 03:17:35PM 0 points [-]

The most notable advances that I can think of off hand are machine guns, nuclear weapons, and the whole mass mechanized, aerial, and heavy weapons thing.

Comment author: paper-machine 08 February 2013 12:43:52PM 19 points [-]

A good article, but one thing that sticks out to me is the overall ineffectiveness of these scientists at preventing the actual use of their technology. Only the recombinant DNA experiment was stopped before actually being carried out.

Comment author: PaulS 09 February 2013 08:00:36AM 14 points [-]

This may be partly because technologies that were used are more conspicuous. We would know if Napier designed a better cannon, but we don't know how much he delayed the development of artillery by concealing his results.

Comment author: Qiaochu_Yuan 09 February 2013 08:09:56AM *  23 points [-]

Right, there's a survivorship bias. You're not going to hear about scientists who successfully prevented anyone from learning about their terrible discoveries (because to be really successful they'd also need to prevent anyone from learning that they'd prevented anyone from learning about their terrible discoveries).

Comment author: AntonioAdan 14 February 2013 09:48:01PM 2 points [-]

Once they let the cat out of the bag this is true. Da Vinci understood how to keep a secret.

Comment author: Manfred 08 February 2013 07:14:19PM *  17 points [-]

My nitpick is the vague and spooky description of Paul Berg's research. The surrounding tone is great, but this little bit ends up sounding bad.

Current:

Paul Berg (1926-), who carried out part of an experiment (like what?) which would, if completed, have created a potentially carcinogenic (vague) strain of a common gut bacteria (just say E. coli) which could have spread to human beings (Say what relevant people at the time thought, "could have" implies false danger). Due to the concerns of other scientists, he put the final part of the experiment on hold, and called for more attention to the risks of such research as well as a temporary moratorium.

Preferred:

Paul Berg (1926-), who in 1972 had already carried out the preparations for creating a strain of E. coli that contained the genome for a human-infectious virus (SV40) with tentative links to cancer. Robert Pollack (1920-) heard news of this experiment and helped convince Berg to halt it - both were concerned about the danger that this new strain would spread to humans in the lab and become a pathogen. Berg then became a major voice calling for more attention to the risks of such research as well as a temporary moratorium.

Comment author: Kaj_Sotala 09 February 2013 10:48:38AM 7 points [-]

Thanks, that’s indeed better. I’ve replaced it with your version. (The original was vague probably because there were several conflicting accounts of what exactly happened, with e.g. different sources putting the time of the experiments at 1971, 1973 and 1974, and then I got kinda frustrated with the thing and did the write-up pretty vaguely.)

Comment author: Manfred 09 February 2013 04:02:47PM 4 points [-]

Yeah, I saw the same thing! Srsly, what is wrong with history? :D

Comment author: boni_bo 08 February 2013 09:28:55AM 20 points [-]

In 1948 Norbert Wiener, in the book Cybernetics: Or the Control and Communication in the Animal and the Machine, said: "Prefrontal lobotomy... has recently been having a certain vogue, probably not unconnected with the fact that it makes the custodial care of many patients easier. Let me remark in passing that killing them makes their custodial care still easier."

Comment author: paper-machine 08 February 2013 12:47:24PM 1 point [-]

Wiener had a well-calibrated moral compass, but still felt the need to address the religious aspects of machine learning.

Comment author: Jonii 13 February 2013 11:53:52PM 6 points [-]

My friend told me he wanted to see http://en.wikipedia.org/wiki/Andrei_Sakharov on this list. I must say that I don't know the guy, but based on the Wikipedia article, he was a brilliant Soviet nuclear physicist behind a few of the largest man-made explosions ever to happen, and somewhere around the 1960s he turned to political activism regarding the dangers posed by the nuclear arms race. In the political climate of the 1960s Soviet Union, that was a brave move, too, and the powers that be made him lose much because of that choice.

Comment author: Qiaochu_Yuan 09 February 2013 12:38:08AM 6 points [-]

Thanks for writing this! I like being able to share LW material with my friends that doesn't trigger philosophical landmines.

Comment author: fela 09 February 2013 06:47:15PM 15 points [-]

Jared Diamond, in Guns, Germs, and Steel, argues that when the time is ripe, scientific discoveries are made quite regardless of who makes them, give or take a few decades. Most discoveries are incremental, and many are made by multiple people simultaneously. So wouldn’t a discovery that isn’t published just be made elsewhere in a few years’ time, possibly by someone without many ethical concerns?

Comment author: adam_strandberg 10 February 2013 06:15:10PM 14 points [-]

Even a few years of delay can make a big difference if you are in the middle of a major war. If Galston hadn't published his results and they weren't found until a decade or two later, the US probably wouldn't have used Agent Orange in Vietnam. Similarly with chlorine gas in WWI, atomic bombs in WWII, etc. Granted, delaying the invention doesn't necessarily make the overall outcome better. If the atomic bomb wasn't invented until the 1950s and we didn't have the examples of Hiroshima and Nagasaki, then the US or USSR would probably have been more likely to use them against each other.

Comment author: Desrtopa 10 February 2013 08:05:50PM 4 points [-]

For that matter, if we didn't use the atom bombs in Hiroshima and Nagasaki, then we would have gone ahead with the land invasion, resulting in far more fatalities.

When wars are fought until a decisive victory, a huge technological edge may serve to decrease the death toll, as the side at a disadvantage will be more easily persuaded to give up.

Comment author: [deleted] 11 February 2013 02:29:40AM 5 points [-]

For that matter, if we didn't use the atom bombs in Hiroshima and Nagasaki, then we would have gone ahead with the land invasion, resulting in far more fatalities.

This is commonly taught in US schools, but you should be aware that the claim has some serious flaws: http://en.wikipedia.org/wiki/Debate_over_the_atomic_bombings_of_Hiroshima_and_Nagasaki#Militarily_unnecessary

Comment author: Desrtopa 11 February 2013 05:18:18AM 4 points [-]

Gwern already linked to the same page previously. I've updated on the information; however, in my time at school I also did a research project on the atomic bombings, and the sources I read for the project (which are not online, at least as far as I know) cited Japanese military officials who were of the opinion that their country would have continued to resist, even to the point of a land invasion, and that the bombings were instrumental in changing that.

There are certainly good reasons to suspect that Japan might have surrendered soon under the same terms even without the dropping of the bombs, but it's also not as if there is a dearth of evidence suggesting that the bombings were a significant factor.

Comment author: gwern 10 February 2013 11:15:31PM *  3 points [-]

For that matter, if we didn't use the atom bombs in Hiroshima and Nagasaki, then we would have gone ahead with the land invasion, resulting in far more fatalities.

You know this interpretation is massively debated and criticized due to the Russian declaration of war and internal Japanese deliberations: http://en.wikipedia.org/wiki/Debate_over_the_atomic_bombings_of_Hiroshima_and_Nagasaki#Militarily_unnecessary http://en.wikipedia.org/wiki/Atomic_bombings_of_Hiroshima_and_Nagasaki#Surrender_of_Japan_and_subsequent_occupation

Comment author: Desrtopa 10 February 2013 11:28:29PM 2 points [-]

It's true that Japan was already willing to surrender, and perhaps this should have been a sufficient goal for the U.S. forces, but there was still a great degree of resistance to the prospect of unconditional surrender. For better or for worse, the U.S. was unsatisfied with the terms of surrender the Japanese were willing to accept prior to the Hiroshima and Nagasaki attacks, and were planning to pursue further measures until they achieved unconditional surrender.

Even if America did not resort to land invasion, months more of firebombing would most likely have resulted in a greater number of fatalities than the use of the atom bombs.

Comment author: gwern 11 February 2013 12:11:15AM *  3 points [-]

The terms are irrelevant, because the US did not get an unconditional surrender in your all-embracing sense. It got a capitulation with the understanding that the Emperor was not threatened (which was indeed subsequently the case), which makes sense once you understand that the 'unconditional surrender' in the Potsdam Declaration was only about the military forces:

"We call upon the government of Japan to proclaim now the unconditional surrender of all Japanese armed forces, and to provide proper and adequate assurances of their good faith in such action. The alternative for Japan is prompt and utter destruction."

The question is why the Japanese government abandoned its previous insistence on a general admission of defeat with 4 conditions and settled for just 1 condition, which was acceptable to the US since it was not a military condition. And the reason for dropping those conditions seems to have been, in large part, the sudden shock of negotiations with Russia failing, with Russia abandoning its neutrality and starting its invasion. Even despite its almost immediate surrender to the US, Japan still lost Sakhalin.

(I'd note that we might expect claims about the necessity of the bombings to be overblown for at least 2 reasons: first and most obviously, it is important so as to justify the murder of hundreds of thousands of civilians in those bombings and other ongoing campaigns despite US government awareness of Japan's ongoing surrender overtures and that Russia would switch its attention to the Japanese front soon with what were probably at the time predictable consequences, and secondly, it is a useful claim in minimizing credit for the Russian contribution to WWII, a phenomenon already acknowledged about most US treatments of the European theater's eastern front.)

Comment author: CCC 11 February 2013 07:24:45AM *  1 point [-]

When wars are fought until a decisive victory, a huge technological edge may serve to decrease the death toll, as the side at a disadvantage will be more easily persuaded to give up.

That may indeed be the case.

Comment author: army1987 11 February 2013 05:16:19PM 1 point [-]

If the atomic bomb wasn't invented until the 1950s and we didn't have the examples of Hiroshima and Nagasaki, then the US or USSR would probably have been more likely to use them against each other.

Huh. I had never thought about that from that angle.

Comment author: lukeprog 10 February 2013 04:10:19AM 3 points [-]

Maybe.

I'm not an expert on the history of science, but it seems to me like:

  • Lots of psychology could have been done decades or maybe a century earlier, but nobody bothered until the mid-20th century.
  • If Einstein hadn't figured out General Relativity, it might have been another 15-25 years before somebody else figured it out.
  • On the other hand, things like computers and Bayes nets and the structure of DNA wouldn't have taken much longer to discover if their actual discoverers hadn't been on the case for whatever reason.
Comment author: Eliezer_Yudkowsky 10 February 2013 04:44:13PM 5 points [-]

What a good thing for all of us that Leo Szilard did not make this mistake.

Comment author: ricketson 09 February 2013 08:26:58PM 2 points [-]

Especially in the modern environment with many thousands of scientists, there won’t be much delay caused by a few scientists withholding their results. The greatest risk is that the discovery is made by someone who will keep it secret in order to increase their own power.

There is also a risk that keeping secrets will breed mistrust, even if the secret is kept without evil intent.

Comment author: CCC 08 February 2013 01:21:54PM 13 points [-]

Asimov once wrote a short story - "The Feeling of Power" - on the subject of misusing technology for evil ends. In the story, set in the far future, a man rediscovers basic mathematics without a computer - possibly the most innocent of advances - and, once he sees what horrors this invention will lead to, kills himself.

A lot of technology (I'd even say most technology) is two-pronged - it can be used for good or evil. Nuclear power or nuclear bombs. Creating disease or destroying disease. The products of technology are not immune to this; the Archimedean screw can irrigate a field or flood it. Dynamite can be used in excavation or as a weapon.

So, while the ethical scientist should of course evaluate each situation on its merits and take care to ensure that safety protocols are followed (as in the recombinant DNA example in the article), and should try to encourage the beneficial uses of technology, I don't think that destroying one's own research is a good general way to accomplish this. (There are specific cases where it might be necessary, of course). This is mainly because our current society rests on the public research and discoveries of countless people throughout history; I would prefer that future societies should be even better than our current society, and the best way that I see to ensure that is by giving future societies a greater base of knowledge to draw from.

Comment author: Eugine_Nier 09 February 2013 12:34:26AM 7 points [-]

So, while the ethical scientist should of course evaluate each situation on its merits and take care to ensure that safety protocols are followed (as in the recombinant DNA example in the article), and should try to encourage the beneficial uses of technology, I don't think that destroying one's own research is a good general way to accomplish this. (There are specific cases where it might be necessary, of course).

The idea of destroying your own research to stop progress seems to assume that no one else can do the same experiment.

Comment author: CCC 09 February 2013 07:00:58PM 6 points [-]

It could merely be that the scientist knows that his research in particular is being watched by men who will immediately misuse it when they can; allowing some random person to re-run the experiment is not a problem, assuming that the random person is not being watched in particular.

It could be that the experimental work is complex enough, and the expected returns unexpected enough, that the scientist has good reason to think that it will be a decade or more until the experiment is re-done - by which point one may hope that the political/social landscape may have changed enough to put less emphasis on evil uses (e.g. a major war may have ended in the interim). (Note that, in the case of one particular theory - continental drift - it was suggested under that name as far back as 1912 - and the idea of the continents moving was proposed as early as 1596 - but was still not accepted in the 1940s).

These assumptions are both a good deal weaker than the one you suggest, but I don't think they're unreasonable.

Comment author: Kawoomba 08 February 2013 03:22:25PM 0 points [-]

So, while the ethical scientist should of course evaluate each situation on its merits and take care to ensure that safety protocols are followed (as in the recombinant DNA example in the article), and should try to encourage the beneficial uses of technology, I don't think that destroying one's own research is a good general way to accomplish this. (There are specific cases where it might be necessary, of course).

Good luck with destroying your research and getting away with it. Unless you bring your own particle accelerator (BYOPA), your own lab, are not beholden to corporate interests for your livelihood, not subject to frequent progress updates on how you spend your grant money, (etc.) Oh, and hopefully you persuade your research group to go along with you, so that when you face legal charges for breaking your contract, at least it wasn't for nothing.

Comment author: Manfred 08 February 2013 06:25:55PM *  1 point [-]

Charitably, "destroying your research" should refer to nullifying the effort that you put into advancing a field, not actually (and merely) throwing away your samples in an obvious manner.

Comment author: Kawoomba 08 February 2013 06:28:44PM 1 point [-]

How would you go about doing that?

(Also, my previous comment agreed with its parent, and was just pointing out the practical infeasibility of following through with such a course of action.)

Comment author: CCC 08 February 2013 08:45:53PM 9 points [-]

There are several ways to nullify, or even reverse progress:

  • Falsify some hard-to-duplicate results in a way that calls previous results into doubt
  • Subtly sabotage one or more experiments that will be witnessed by others
  • Enthusiastically pursue some different avenue of research, persuading others to follow you
  • Leave research entirely, taking up a post as an undergraduate physics lecturer at some handy university

There would have to be an extremely good reason to try one of the top two, since they involve not only removing results but actually poisoning the well for future researchers.

Comment author: roystgnr 09 February 2013 07:53:40AM 6 points [-]

Casting doubt on a research track is probably easier said than done, no? To use a ridiculous hypothetical example: "Cold fusion" has been the punchline of jokes to 99.9% of scientists ever since the 1989 experiment garnered a ton of publicity without an ounce of replicability, yet Wikipedia suggests that the remaining 0.1% decades later still includes a few serious research teams and a few million dollars of funding. If Pons & Fleischmann were secretly trying to steer the world away from some real results by discrediting the field with embarrassing false results, it seems like a very risky gamble that still hasn't fully paid off.

The fact that I had to resort to a ridiculous hypothetical example there shows an unavoidable problem with this article, by the way: no history of successful ethical concern about scientific publication can exist, since almost by definition any success won't make it into history. All we get to hear about is unconcern and failed concern.

Comment author: CCC 09 February 2013 07:03:24PM 2 points [-]

If Pons & Fleischmann were secretly trying to steer the world away from some real results by discrediting the field with embarrassing false results, it seems like a very risky gamble that still hasn't fully paid off.

Of course, no-one has found any dangerous results; so if that's what they were trying to hide, perhaps by leaving a false trail, then they've succeeded admirably, sending future researchers up the wrong path.

Comment author: roystgnr 10 February 2013 06:59:59AM 0 points [-]

In real life, I'm pretty sure that nobody has found any dangerous results because there aren't any dangerous results to find. This doesn't mean that creating scandals successfully reduces the amount of scientific interest in a topic, it just means that in this case there wasn't anything to be interested in.

Comment author: oooo 09 July 2013 12:31:49AM *  0 points [-]

Enthusiastically pursue some different avenue of research, persuading others to follow you

I am reading Kaj Sotala's latest paper "Responses to Catastrophic AGI Risk: A Survey" and I was struck by this thread regarding ethically concerned scientists. MIRI is following this option by enthusiastically pursuing FAI (slightly different avenue of research) and trying to persuade and convince others to do the same.

EDIT: My apologies -- I removed the second part of my comment proactively because it dealt with hypothetical violence of radical ethically motivated scientists.

Comment author: sanyasi 09 February 2013 10:57:51AM 0 points [-]

It's debatable whether Heisenberg did the former, causing the mistaken experiment results that led the Nazi atomic program to conclude that a bomb wasn't viable. See http://en.wikipedia.org/wiki/Copenhagen_(play) for scientific entertainment (there's a good BBC movie about this starring Daniel Craig as Werner Heisenberg)

Comment author: Manfred 08 February 2013 08:46:11PM *  1 point [-]

Suppose we’re in a bad-case modern scenario, where there’s been close industry involvement, including us documenting early parts of the experiment, as well as some attention in the professional literature, and some researchers poised to follow up on our results. And then we directly discover something that would be catastrophic if used, so we have to keep it in as few people’s hands as possible; we can’t just be like Paul Berg and write an article asking for a moratorium on research. Let’s say it’s self-replicating nanotechnology or something.

One process you could follow is sort of like getting off facebook. Step one is to obfuscate what you've already done. Step two is to discredit your product. Step three is to think up a non-dangerous alternative. Step four is to start warning about the dangers.

In the case of nanotech, this would mean releasing disinformation in our technical reports for a while, then claiming contamination or instant failure of the samples, with e.g. real data cherry picked from real failures to back it up, then pushing industrial nanotech for protein processing using our own manufactured failure as a supporting argument, then talking to other researchers about the danger of self-replicating nanotech research.

Comment author: Kawoomba 08 February 2013 09:26:10PM *  0 points [-]

Your bad-case modern scenario seems more like the average to me (extent depending on the field). Most research that promises breakthroughs requires a lot of funding these days, which implies either close industry involvement or being part of some government sponsored project. Which both imply close supervision and teams of researchers, no Dr. Perelman type one-man-show. Even if there's no corporate/academic supervisor pestering you, if you want to do (default:expensive) research, you and your team better publish, or perish, as the aphorism goes.

Note I did not suggest just throwing away samples, both falsifying your reports / releasing disinformation opens you up to legal liabilities, damages, pariah status, and depends on convincing your research group as well. Unless you envision yourself as the team leader, in which case it's unlikely you'll be the first to notice the danger, and in which case you'll probably be self selected for being enthusiastic about what you do.

Take nanotech, say you start thinking that your current project may open the door to self-replicators. Well, most any nanotech related research paves part of the way there, whether a large or a small chunk. So stop altogether? But you went into the field willingly (presumably), so it's not like you're strictly against any progress that could be dual-used for self-replicators.

What I’m getting at is that a researcher a) noticing the dangerous implications of his current research, then b) devoting himself to stopping it effectively, and c) those efforts having a significant effect on the outcome is a contrived chain of events in almost any scenario that isn’t as artificially concocted as a Chinese Room.

Maybe it's selection bias from the scientific news cycle, but unless there is a large "dark figure" of secret one-man researcher hermits like Perelman for whom your techniques may potentially work, there's little stopping the (hopefully hypothetical) doomsday clock.

Comment author: CCC 09 February 2013 07:07:30PM 1 point [-]

I agree; it seems a very contrived scenario. Though, should such a contrived scenario occur, then it seems to me that legal liabilities, pariah status, and damages will seem negligible problems next to the alternative.

Comment author: Luke_A_Somers 08 February 2013 02:51:19PM 7 points [-]

Stylistic note - you use the Leonardo da Vinci submarine example twice in three paragraphs without acknowledgement that you just used it.

Comment author: Izeinwinter 12 February 2013 02:08:26PM 6 points [-]

You are missing a major problem. Not "secrecy will kill progress" - that is, in this context, a lesser problem. The major problem is that scientific secrecy would eventually kill the planet.

In a context of ongoing research and use of any discipline, dangerous techniques must be published, or they will be duplicated over and over again, until they cause major damage. If the toxicity of dimethylmercury were a secret, chemical laboratories and entire college campuses dying slowly, horrifically and painfully would be regular occurrences. No scientific work is done without a context, and so all discoveries will happen again. If you do not flag any landmines you spot, someone not-quite-as-sharp will eventually reach the same territory and step on them. If you find a technique you consider a threat to the world, it is now your problem to deal with, and secrecy is never going to be a sufficient response, but is instead merely an abdication of moral responsibility onto the next person to get there.

Comment author: ikrase 12 February 2013 03:03:01PM 1 point [-]

My impression of this post was not that it made a focused argument in favor of secrecy specifically.

Comment author: ewbrownv 12 February 2013 10:49:22PM 1 point [-]

It's a recitation of arguments and anecdotes in favor of secrecy, so of course it's an argument in that direction. If that wasn't the intention there would also have been anti-secrecy arguments and anecdotes.

Comment author: Kaj_Sotala 13 February 2013 06:08:48AM 0 points [-]
Comment author: ikrase 14 February 2013 05:58:47AM 0 points [-]

Also, I said focused argument.

Comment author: Troshen 25 February 2013 10:16:12PM *  0 points [-]

This is an extremely important point. Historically it might take a long time, if ever, for someone else to arrive at a discovery similar to the one you just made. For example, Leonardo's submarines. But that was when only a tiny fraction of humanity devoted time to experiments. His decision to hide his invention kicked the can of secret attacks by submarines many years down the road and may have saved many lives. (I'm not so sure - leaders who wanted wars surely found other secret plots and stratagems, but at least he exercised his agency to not be the father of them.)

But things are different now. You can be practically guaranteed that if you are working on something, someone else in the world is working on it too, or will be soon. Being at a certain place and time in your industry puts you in a position to see the possible next steps, and you aren't alone.

If you see something dangerous that others don't, the best bet is to talk about it. More minds thinking and talking about it from multiple different perspectives have the best chance to solve it.

Communication is a great, helpful key to survival. I think we had it when the U.S. and the Soviets didn't annihilate the world under the U.S. policy of Mutual Assured Destruction. And I think we didn't have it in the U.S. Civil War and in WWI, when combat technology had raced ahead of the knowledge and training of the generals of those wars, and that led to shocking massacres unintended by either side.

An example other than unfriendly AI is asteroid mining and serious space travel in general. Right now we have the dangers from asteroids. But the ability to controllably move mass in orbit would inevitably become one of the most powerful weapons ever seen. Unless people make a conscious choice not to use it for that. Although I've wanted to write fiction stories about it and work on it, I've actually hesitated for the simple fact that I think it's inevitable that it will become a weapon.

This post makes me confident that the action most likely to lead to humanity's growth and survival is to talk about it openly: first, because we're already vulnerable to asteroids and can't do anything about it, and second, because talking about it raises awareness of the problem so that more people can focus on solving it.

I really think that avoiding nuclear war is an example. When I was a teenager everyone just assumed we'd all die in a nuclear war someday. Eventually through a deliberate war or an accident or a skynet-style-terminator incident civilization as a whole would be gone. And eventually that fear just evaporated. I think it's because we as a culture kept talking about it so much and not leaving it up to only a few monarchic leaders.

So I'm changing my outlook and plans based on this post and this comment. I plan to talk about and promote asteroid mining and write short stories about terrorists dropping asteroids on cities. Talking about it is better in the long run.

Comment author: Izeinwinter 25 February 2013 10:55:33PM *  0 points [-]

I have given some thought to this specific problem - not just asteroids, but the fact that any spaceship is potentially a weapon, and as working conditions go, extended isolation does not have the best of records on the mental stability front.

Likely solutions: Full automation and one-time-pad locked command and control - this renders it a weapon as well controlled as nuclear arsenals, except with longer lead times on any strike, so even safer from a MAD perspective (... and no fully private actor ever gets to run them). Or, if full automation is not workable, a good deal of effort expended on maintaining crew sanity - psych/political officers, called something nice, fluffy, and utterly anodyne to make people forget just how much authority they have, backed up with a remote-controlled self-destruct. Again, a one-time-pad com lock. It's not going to be a libertarian free-for-all as industries go; more a case of "extremely well paid, to make up for the conditions and the sword that will take your head if you crack under the pressure." Good story potential in that, though.
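For concreteness, here is a minimal sketch (not from the comment above) of what a "one-time-pad locked" command channel could look like; the page size, command format, and function names are illustrative assumptions, and a real system would also need proper message authentication on top of encryption.

    import secrets

    PAGE_SIZE = 32  # bytes per pad page; arbitrary for this sketch

    def xor_bytes(data: bytes, pad_page: bytes) -> bytes:
        """XOR data against an equal-length pad page."""
        return bytes(d ^ p for d, p in zip(data, pad_page))

    # Pre-shared pad: generated once, copied to both the control station and
    # the ship, and never transmitted. Each page is used for exactly one
    # command and then discarded.
    pad_pages = [secrets.token_bytes(PAGE_SIZE) for _ in range(1000)]

    def encrypt_command(command: bytes, page_index: int) -> bytes:
        assert len(command) <= PAGE_SIZE
        return xor_bytes(command.ljust(PAGE_SIZE, b"\0"), pad_pages[page_index])

    def decrypt_command(ciphertext: bytes, page_index: int) -> bytes:
        return xor_bytes(ciphertext, pad_pages[page_index]).rstrip(b"\0")

    # The station sends (page_index, ciphertext); the ship decrypts with its
    # own copy of that page and ignores anything that does not decode to a
    # command on its whitelist.
    ciphertext = encrypt_command(b"BURN 12S PROGRADE", 0)
    print(decrypt_command(ciphertext, 0))  # b'BURN 12S PROGRADE'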

Comment author: Troshen 25 February 2013 11:43:36PM 0 points [-]

I think we're heading off-topic with this one, and I'd like to continue the discussion and focus it on space, not just whether to reveal or keep secrets.

So I started this thread: http://lesswrong.com/r/discussion/lw/gsv/asteroids_and_spaceships_are_kinetic_bombs_and/

Comment author: Vladimir_Nesov 25 February 2013 10:23:46PM *  0 points [-]

but at least he exercised his agency to not be the father of them

This distinction doesn't seem important.

Comment author: Kaj_Sotala 13 February 2013 06:08:17AM *  2 points [-]

Since "this post is arguing for secrecy in particular being a good thing" seems to be a common misunderstanding of the intent of the post, I deleted the mention of hiding one' work from the opening paragraph, as well as added a paragraph explicitly saying that we're not saying that any particular way of taking responsibility is necessarily the correct one.

Comment author: tadamsmar 16 January 2014 05:18:36PM *  1 point [-]

As you point out, Szilard took steps to keep his nuclear chain-reaction patent secret from the Germans. He later took steps that led the US government to start preventing the open publication of scientific papers on nuclear reactor design and other related topics. (The Germans noticed when the journals went quiet.)

Right after Hiroshima and Nagasaki, he thought the US government was putting out too much public information on the A-bomb. He even thought the Einstein-Szilard letters should remain secret. His idea at the time was the US government should reveal almost nothing and use the promise to reveal as a bargaining chip in an effort to get an international agreement for the control of nuclear weapons.

Szilard's secrecy about the neutron chain-reaction made it hard for him to get anyone to help him work on making nuclear energy practical between 1934 and 1940. So, it arguably slowed down everyone, not just the Germans.

Source is the Szilard biography "Genius in the Shadows"

Comment author: Macaulay 13 February 2013 12:17:57AM *  2 points [-]

On a related topic, Pinker has a very useful discussion of the case for and against open discussion of dangerous (non-technological) ideas. (Mindkiller warning)

Comment author: Decius 09 February 2013 11:49:06PM 5 points [-]

Better than developing ethical scientists would be a policy of developing ethical political and military leaders.

Comment author: Nebu 15 February 2013 04:13:36PM 0 points [-]

Better for whom? I'd really like my rival countries to have ethical military leaders, but maybe I prefer my own country's military leaders to be ruthless.

Comment author: CCC 16 February 2013 05:50:18AM 2 points [-]

I would prefer my own country's military leaders to be ethical as well, personally. A ruthless military leader may:

  • Attempt to overthrow the government
  • Declare war on a nearby country that he thinks he can defeat
  • Subvert military supply lines in order to unethically increase his personal wealth

...all of which are behaviours I do not prefer.

Comment author: Decius 15 February 2013 11:46:19PM 1 point [-]

Do you defect in iterated prisoners' dilemma?

Comment author: Nebu 16 February 2013 03:42:32AM *  1 point [-]

No, but I'm not sure military conflicts are necessarily iterated, especially from the perspective of me, an individual civilian within a nation.

Comment author: Decius 16 February 2013 07:42:05AM 0 points [-]

But the selection of military leaders is iterated.

Comment author: Jayson_Virissimo 23 February 2013 01:35:47PM 1 point [-]

But the selection of military leaders is iterated.

Most of us are not in a position to ever select a military leader, let alone do it an indefinite number of times.

Comment author: Decius 23 February 2013 10:58:08PM 1 point [-]

Most adult US citizens are in a position to have nonzero input into the selection of the person who determines military policy.

Comment author: Nebu 19 February 2013 06:33:14PM 1 point [-]

I'm afraid I don't see the relevance.

Comment author: Decius 20 February 2013 02:47:12AM 0 points [-]

I think the payoff matrix of warfare is very analogous to the PD payoff matrix, and that the previous (and even current) military leaders are available to all serious players of the game. Also, I anticipate that others might make irrational decisions, like responding to a WMD attack with a WMD reprisal even if it doesn't benefit them; they might also make rational decisions, like publicly and credibly precommitting to a WMD reprisal in the event of a WMD attack.

Comment author: Nebu 06 March 2013 02:57:33PM 0 points [-]

I'm still not following you.

So first of all, you'll need to convince me that the payoff matrix for an individual civilian within a nation deciding who their military leader should be is similar to one of the prisoners in PD. In particular, we'll need to look at what "cooperate" and "defect" even mean for the individual citizen. E.g. does "cooperate" mean "elect an ethical military leader"?

Second, assuming you do convince me that the payoff matrices are similar, you'll have to clarify whether you think warfare is iterated for an individual civilian, especially when the "other" nation defects. I suspect if my leader is ethical, and their leader is not, then I will be dead, and hence no iteration for me.

Thirdly, you may wish to clarify whether all the sentences after your first are intended to be new assertions, or if they are supposed to be supporting arguments for the first sentence.

Comment author: Decius 07 March 2013 05:56:36AM 0 points [-]

Vastly simplified:

Survival is worth three points, destroying the opposing ideology is worth two points, and having at least one survivor is worth twenty points.

If nobody uses WMDs, everyone gets 23 points. If one side uses WMDs, they survive and destroy their ideological opponent for 25 points to the opposing 20. If both sides use WMDs, both score 2 for destroying the opponent.

Given that conflicts will happen, a leader who refuses to initiate use of WMDs while convincing the opponent that he will retaliate with them is most likely to result in the dual-cooperate outcome. Therefore the optimum choice for the organism which selects the military leaders is to select leaders who are crazy enough to nuke them back, but not crazy enough to launch first.

If you share the relative ranking above (not-extinction>>surviving>wiping out others), then your personal maximum comes from causing such a leader to be elected (not counting unrelated effects on e.g. domestic policy). The cheapest way of influencing that is by voting for such a leader.
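For concreteness, a minimal sketch (not part of the comment) that tabulates this payoff matrix from the stated point values; the variable names and output format are purely illustrative.

    # Toy payoff table for the WMD scenario above. Point values from the
    # comment: survival = 3, destroying the opposing ideology = 2, at least
    # one side surviving at all = 20.
    SURVIVAL = 3
    DESTROY_OPPONENT = 2
    SOMEONE_SURVIVES = 20

    def payoff(we_launch: bool, they_launch: bool) -> int:
        """Payoff to 'us', given who launches WMDs."""
        points = 0
        if not they_launch:                  # we survive only if they hold fire
            points += SURVIVAL
        if we_launch:                        # launching destroys the opposing ideology
            points += DESTROY_OPPONENT
        if not (we_launch and they_launch):  # somebody, somewhere, survives
            points += SOMEONE_SURVIVES
        return points

    for us in (False, True):
        for them in (False, True):
            print(f"we launch={us!s:5}  they launch={them!s:5}  -> {payoff(us, them)}")
    # Prints 23 for each side if neither launches, 25 vs. 20 if exactly one
    # launches, and 2 for each side if both launch, matching the numbers above.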

Comment author: Nebu 14 April 2013 03:10:53PM *  0 points [-]

What's the difference between "Survival" and "having at least one survivor"?

The way I see it:

  • If I'm dead, 0 points.
  • If I'm alive, but my city got nuked, so it's like a nuclear wasteland, 1 point.
  • If I'm alive, and living via normal north american standards, 2 points.

We're assuming a conflict is about to happen, I guess, or else the hypothetical scenario is boring and there are no important choices for me to make.

The question is not "Do I elect a crazy leader or a non-crazy leader?", but rather, "Do I elect a leader that believes 'all's fair in love and war?' or a leader that believes in 'always keep your word and die with honor'?"

I.e. if you think "ethical vs unethical" means "will retaliate-but-not-initiate vs will not retaliate-but-not-initiate", then it's no wonder we're having communication problems.

Comment author: BillyOblivion 20 February 2013 07:04:08AM 0 points [-]

Is ruthlessness necessarily unethical in a military leader?

Sometimes compassion is a sharp sword.

Comment author: Bugmaster 20 February 2013 07:10:14AM 0 points [-]

Agreed; as Sun Tzu points out on several occasions, fighting should usually be considered a measure of last resort.

Comment author: V_V 09 February 2013 04:14:29PM *  5 points [-]

Today, the general attitude towards scientific discovery is that all research should be shared and disseminated as widely as possible, and that scientists are not themselves responsible for how their work is used. And for someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.

The reasoning is that if you discover something which could have potentially harmful applications, it's better that there is public discussion about it rather than it becoming a toy in the hands of corporations or government agencies.

If you conceal or halt your research, somebody else is going to repeat the same discovery soon. If all ethically concerned scientists stop pursuing some line of research, then non-ethically concerned scientists will be the only ones doing it.

As for conducting dangerous research in secret, you will not be able to prevent leaks, and the chances that you screw up something are much higher if you act without public oversight. Moreover, it is unethical for you to do experiments that potentially put other people at risk without their informed consent.

I guess you are writing this because your employer the Singularity Institute (or whatever they are called now) uses the "secret dangerous knowledge" excuse to handwave its conspicuous lack of published research. But seriously, that's not the right way of doing it:

If you are a legitimate research organization ethically concerned by AI safety, the best way to achieve your goals is to publish and disseminate your research as much as possible, in particular to people who may be building AIs.
Because, let's face it, if AGI is technically feasible, you will not be the first ones to build one, and even if by some absurdly improbable coincidence you were, the chances that you get it right while working in secrecy are negligible.

Of course, in order to publish research, you must first be able to do research worth publishing. As I said before, for the SI this would be the "flour on the invisible dragon" test.

Comment author: Vladimir_Nesov 10 February 2013 02:21:21AM 4 points [-]

the best way to achieve your goals is to publish and disseminate your research as much as possible

This is an important question, and simply asserting that the answer to it is one way or the other is not helpful for understanding the question better.

Comment author: V_V 10 February 2013 09:16:41AM 1 point [-]

Fair enough. I think I provided arguments against scientific secrecy. I'd glad to hear counter-arguments.

Comment author: ygert 09 February 2013 04:54:50PM *  4 points [-]

I upvoted this, as it makes some very good points about why the current general attitude towards scientific secrecy is what it is. I almost didn't, though, as I do feel that the attitude in the last few paragraphs is unnecessarily confrontational. I feel you are mostly correct in saying what you said there, especially what you said in the second-to-last paragraph. But then the last paragraph kind of spoils it by being very confrontational and rather rude. I would not have had reservations about my upvote if you had simply left that paragraph off. As it is now, I almost didn't upvote it, as I have no wish to condone any sort of impoliteness.

Comment author: V_V 09 February 2013 05:11:53PM *  0 points [-]

Is your complaint about the tone of the last paragraphs, or about the content?

In case you are wondering, yes, I have a low opinion of the SI. I think it's unlikely that they are competent to achieve what they claim they want to achieve.

But my belief may be wrong, or may have been correct in the past but then made obsolete by the SI changing their nature.
While I don't think that AI safety is presently as significant an issue as they claim it is, I see that there is some value in doing some research on it, as long as the results are publicly disseminated.

So my last paragraphs may have been somewhat confrontational, but they were an honest attempt to give them the benefit of the doubt and to suggest a way for them to achieve their goals and prove my reservations wrong.

Comment author: asparisi 10 February 2013 09:48:35PM 2 points [-]

I find it unlikely that scientific secrecy is never the right answer, just as I find it unlikely that scientific secrecy is always the right answer.

Qualitatively, I'd say it has something to do with the ratio of the expected harm of immediate discovery vs. the current investment and research in the field. If the expected risks are low, by all means publish, so that any risks that are there will be found. If the risks are high, consider the amount of investment/research in the field. If the investment is high, it is probably better to reveal your research (or parts of it) in the hope of creating a substantive dialogue about risks. If the investment is low, it is less likely that anyone will come up with the same discovery, and so you may want to keep it a secret. This probably also varies by field with respect to how many competing paradigms are available and how incremental the research is: psychologists work with a lot of different theories of the mind, many of which do not explicitly endorse incremental theorizing, so it is less likely that a particular piece of research will be duplicated, while biologists tend to have larger agreement and their work tends to be more incremental, making it more likely that a particular piece of research will be duplicated. (A rough sketch of this heuristic appears after this comment.)

Honestly, I find cases of alternative pleading such as V_V's post here suspect. It is a great rhetorical tool, but reality isn't such that alternative pleading actually can map onto the state of the world. "X won't work, you shouldn't do X in cases where it does work, and even if you think you should do X, it won't turn out as well" is a good way to persuade a lot of different people, but it can't actually map onto anything.
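A rough sketch of the publish-or-withhold heuristic described above, purely for illustration; the numeric scales, thresholds, and names are made up and not part of the original comment.

    def disclosure_heuristic(expected_harm: float, field_investment: float,
                             harm_threshold: float = 0.5,
                             investment_threshold: float = 0.5) -> str:
        """Toy decision rule; all scales and thresholds are arbitrary."""
        if expected_harm < harm_threshold:
            return "publish"                       # low risk: publish so flaws get found
        if field_investment >= investment_threshold:
            return "publish and start a dialogue"  # rediscovery is likely anyway
        return "consider secrecy"                  # high risk, little parallel work

    print(disclosure_heuristic(0.2, 0.1))  # publish
    print(disclosure_heuristic(0.9, 0.8))  # publish and start a dialogue
    print(disclosure_heuristic(0.9, 0.1))  # consider secrecy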

Comment author: V_V 11 February 2013 12:20:05AM *  2 points [-]

I find it unlikely that scientific secrecy is never the right answer, just as I find it unlikely that scientific secrecy is always the right answer.

Sure, you can find exceptional scenarios where secrecy is appropriate. For instance, if you were a scientist working on the Manhattan Project, you certainly wouldn't have wanted to let the Nazis know what you were doing, and with good reason.
But barring such kind of exceptional circumstances, scientific secrecy is generally inappropriate. You need some pretty strong arguments to justify it.

If the investment is low, it is less likely that anyone will come up with the same discovery and so you may want to keep it a secret.

How likely is it that some potentially harmful breakthrough happens in a research field where there is little interest?

psychologists work with a lot of different theories of the mind, many of which do not explicitly endorse incremental theorizing

Is that actually true? And anyway, what is the probability that a new theory of mind is potentially harmful?

Honestly, I find cases of alternative pleading such as V_V's post here suspect. It is a great rhetorical tool, but reality isn't such that alternative pleading actually can map onto the state of the world. "X won't work, you shouldn't do X in cases where it does work, and even if you think you should do X, it won't turn out as well" is a good way to persuade a lot of different people, but it can't actually map onto anything.

That statement seems contrived. I suppose that by "can map onto the state of the world" you mean "is logically consistent".
Of course, I didn't make that logically inconsistent claim. My claim is that "X probably won't work, and if you think that X does work in your particular case, then unless you have some pretty strong arguments, you are most likely mistaken".

Comment author: Troshen 25 February 2013 10:49:57PM 0 points [-]

This is a good discussion of the trade-offs that should be considered when deciding whether to reveal or keep secret new, dangerous technologies.

Comment author: ricketson 09 February 2013 08:38:45PM *  2 points [-]

Good points, but it was inappropriate to question the author's motives and the attacks on the SI were off-topic.

Comment author: V_V 10 February 2013 01:30:10AM 0 points [-]

I didn't claim that his praise of scientific secrecy was questionable because of his motives (that would have been an ad hominem circumstantial) or that his claims were dishonest because of his motives.

I claimed that his praise of scientific secrecy was questionable for the points I mentioned, AND, that I could likely see where it was coming from.

the attacks on the SI were off-topic.

Well, he specifically mentioned the SI mission, complete with a link to the SI homepage. Anyway, that wasn't an attack, it was a (critical) suggestion.

Comment author: Kaj_Sotala 09 February 2013 05:41:43PM 1 point [-]

That's a rather uncharitable reading.

Comment author: V_V 10 February 2013 01:39:22AM 1 point [-]

Possibly, but I try to care about being accurate, even if that means not being nice.

Do you think there are errors in my reading?

Comment author: Kaj_Sotala 10 February 2013 07:11:34AM *  5 points [-]

I guess you are writing this because your employer the Singularity Institute (or whatever they are called now) uses the "secret dangerous knowledge" excuse to handwave its conspicuous lack of published research. But seriously, that's not the right way of doing it:

Your criticism would be more reasonable if this post had only given examples of scientists who hid their research, and said only that everyone should consider hiding their research. But while the possibility of keeping your research secret was certainly brought up and mentioned as a possibility, the overall message of the post was one of general responsibility and engagement with the results of your work, as opposed to a single-minded focus on just doing interesting research and damn the consequences.

Some of the profiled scientists did hide or destroy their research, but others actively turned their efforts towards various ways by which the negative effects of that technology could be reduced, be it by studying the causes of war, campaigning against the use of a specific technology, refocusing to seek ways by which their previous research could be applied to medicine, setting up organizations for reducing the risk of war, talking about the dangers of the technology, calling for temporary moratoriums and helping develop voluntary guidelines for the research, or financing technologies that could help reduce general instability.

Applied to the topic of AI, the general message does not become "keep all of your research secret!" but rather "consider the consequences of your work and do what you feel is best for helping ensure that things do not turn out to be bad, which could include keeping things secret but could also mean things like focusing on the kinds of AI architectures that seem the most safe, seeking out reasonable regulatory guidelines, communicating with other scientists on any particular risks that your research has uncovered, etc." That's what the conclusion of the article said, too: "Hopefully, the examples provided in this post can encourage more researchers to consider the broader consequences of their work."

The issue of whether some research should be published or kept secret is still an open question, and this post does not attempt to suggest an answer either way, other than to suggest that keeping research secret might be something worth considering, sometimes, maybe.

Comment author: V_V 10 February 2013 12:13:20PM 1 point [-]

Thanks for the clarification.

However, if you are not specifically endorsing scientific secrecy, but just ethics in conducting science, then your opening paragraph seems a bit of a strawman:

Today, the general attitude towards scientific discovery is that all research should be shared and disseminated as widely as possible, and that scientists are not themselves responsible for how their work is used. And for someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.

Seriously, who is claiming that scientists should not take ethics into consideration while they do research?

Comment author: timtyler 11 February 2013 02:06:34AM *  0 points [-]

Seriously, who is claiming that scientists should not take ethics into consideration while they do research?

It's more that humans specialise. Scientist and moral philosopher aren't always the same person.

Comment author: whowhowho 11 February 2013 12:03:24PM 2 points [-]

OTOH, you don't get let off moral responsibility just because it isn't your job.

Comment author: timtyler 11 February 2013 11:28:06PM 1 point [-]

It's more that many of the ethical decisions - about what to study and what to do with the resulting knowledge - are taken out of your hands.

Comment author: whowhowho 12 February 2013 01:27:03AM 2 points [-]

Only they are not, because you are not forced to do a job just because you have invested in the training --however strange that may seem to Homo Economicus.

Comment author: timtyler 12 February 2013 10:52:13AM *  1 point [-]

Resigning would probably not affect the subjects proposed for funding, the number of other candidates available to do the work, or the eventual outcome. If you are a scientist who is concerned with ethics there are probably lower-hanging fruit that don't involve putting yourself out of work.

Comment author: V_V 11 February 2013 11:34:05AM 0 points [-]

Moral philosophers hopefully aren't the only people who take ethics into account when deciding what to do.

Comment author: BerryPick6 11 February 2013 12:53:23PM 1 point [-]

Some data suggests they make roughly the same ethical choices everyone else does.

Comment author: army1987 10 February 2013 12:00:59PM *  0 points [-]
Comment author: [deleted] 12 February 2013 09:24:18PM *  3 points [-]

A scientist who shares a potentially harmful invention with the rest of the world might not necessarily lack ethical concern. If I invented the knife, I could choose to think that sharing it with others would increase the probability of random stabbings and accidental self-inflicted injury (very bad), or I could choose to focus on the fact that it would be an extremely useful tool in everyday life (very good).

Comment author: lukeprog 11 February 2013 09:47:20AM *  1 point [-]

Ron Arkin might also belong on the list.

From Robots at War: Scholars Debate the Ethical Issues:

“I was very enthralled with the thrill of discovery and the drive for research and not as much paying attention to the consequences of, ‘If we answer these questions, what’s going to happen?’” [roboticist Ronald Arkin] says. What was going to happen soon became apparent: Robotics started moving out of the labs and into the military-industrial complex, and Mr. Arkin began to worry that the systems could eventually be retooled as weaponized “killing machines fully capable of taking human life, perhaps indiscriminately.” 

Arkin went on to write one of the better works of "mainstream machine ethics".

Comment author: Maybe_a 09 February 2013 07:17:37AM 1 point [-]

Standing against unintended pandemics, atomic warfare and other extinction-threatening events has been quite a good idea in retrospect. Those of us working on scientific advances shall indeed ponder the consequences.

But the Immerwahr-Haber episode is just an unrelated tearjerker. Really, inventing a process for the creation of nitrogen fertilizers is so much more useful than shooting oneself in the heart. Also, chemical warfare turned out not to kill many people since WWI, so such a sacrifice is rather irrelevant.

Comment author: DanArmak 09 February 2013 12:28:33PM 2 points [-]

Also, chemical warfare turned out not to kill many people since WWI, so such a sacrifice is rather irrelevant.

That is rather begging the question. As a result of WW1 there have been agreements in place - the Geneva Protocol - not to develop or use chemical weapons, and so fewer people have been killed by them than might have otherwise.

Comment author: Maybe_a 09 February 2013 04:59:52PM 1 point [-]

Well, it seems somewhat unfair to judge the decision on information not available to the decision-maker; however, I fail to see how that is an 'implicit premise'.

I didn't think the Geneva Convention was that old, and actually, updating on it makes Immerwahr's decision score worse, due to a lower expected number of saved lives (through a lower chance of chemical weapons being used).

Hopefully, roleplaying this update made me understand that in some value systems it's worth it. Most likely, E(Δ victims due to Haber's war efforts) > 1.

Comment author: DanArmak 09 February 2013 05:59:00PM 8 points [-]

Here's what I meant by saying you were begging the question: you were assuming the outcome (few people would be killed by chemical warfare after WW1) did not depend on the protests against chemical weapons.

You said originally that protesting against chemical warfare (CW) during WW1 was not worth the sacrifice involved, because few people were killed by CW after WW1.

But the reason few people were killed is that CW was not used often. And one contributing factor to its not being used was that people had protested its use in WW1, and created the Geneva Convention.

People who protested CW achieved their goal in reducing the use of CW. So the fact CW was not used much and killed few people, is not evidence that the protest was in vain - to the contrary, it's exactly what you would expect to see if the protest was effective.

Comment deleted 23 February 2013 01:10:21PM [-]
Comment deleted 01 April 2013 07:30:46AM *  [-]
Comment deleted 01 April 2013 10:27:27AM [-]
Comment deleted 01 April 2013 02:07:01PM [-]
Comment author: lukeprog 21 April 2013 01:19:00AM 0 points [-]

was granted a patent for the atomic bomb in 1934

I think you mean "for the nuclear chain reaction."

Comment author: Kaj_Sotala 21 April 2013 07:13:24AM 0 points [-]

Thanks, fixed.

Comment author: halcyon 11 April 2013 11:09:45AM 0 points [-]

I can tell I won't like Bill Joy's article. He can do what he wants to, but I don't see how "humanity" is a good argument against a robotic future. Isn't it a bit presumptuous to assume that all humans are content to remain human, assuming they even like being human all that much?

Comment author: VCM 13 March 2013 06:51:05AM 0 points [-]

Thanks, insightful post. I find the research a bit patchy, though. On the atomic bomb alone there is a vast literature going back to the 1950s, even in popular fiction - and a couple of crucial names like Oppenheimer (vs. Teller), the Russell–Einstein Manifesto or von Weizsäcker are absent here.

Comment author: Kaj_Sotala 15 March 2013 01:41:24PM 0 points [-]

Thanks. The Russell-Einstein manifesto is mentioned in the post?

Comment author: Bugmaster 11 February 2013 08:06:37PM 0 points [-]

Even if keeping research secret in our modern world was feasible, I don't believe it would be desirable. I would argue that humanity has benefited tremendously from chemistry, modern physics, genetics, and informatics. The problem is that knowledge is amoral. The same knowledge that allows you to build a bomb also allows you to build a nuclear reactor. The same compiler that you use to create the latest communication protocols also allows you to create destructive computer viruses. There's no way of keeping one and discarding the other; and, on the whole, we are IMO better off with computers and other such things than we are without them.

Comment author: MaoShan 09 February 2013 06:51:10AM *  0 points [-]

Why are some of your links triggering scammish popups? Is it supposed to be some sort of humor?

Comment author: MaoShan 11 February 2013 12:56:11AM 1 point [-]

It was determined to be human error on my side. Fixed.

Comment author: Kaj_Sotala 09 February 2013 07:17:27AM 1 point [-]

They are? Which ones?

Comment author: MaoShan 10 February 2013 03:06:26AM 1 point [-]

The word "pay" in paragraph 1, the word "details" in paragraph 5, and the word "money" in paragraph 7. It's possible that either my computer or the LW site has some very creative adware.

Comment author: pjeby 10 February 2013 03:21:08AM 7 points [-]

Since the words you mention don't have any links, it's got to be your computer, or a proxy between your computer and LW.

Comment author: Kawoomba 10 February 2013 06:20:37AM 13 points [-]

It's your computer.

Congratulations, it is pregnant.

Comment author: Nornagest 10 February 2013 07:46:06AM *  2 points [-]

Like fubarobfusco says below, this is probably a malware issue. I saw something similar when a disk recovery program I didn't vet thoroughly enough infected me with a searchbar package that I'll leave nameless; MalwareBytes took care of most of it for me, though I had to do a little cleanup work myself.

It should probably be mentioned that most widespread antivirus packages won't catch this sort of thing; you need something that casts a broader net.

Comment author: fubarobfusco 10 February 2013 07:16:16AM 1 point [-]

Your computer probably has a badware problem. If you are running Windows, try anti-spyware programs such as Spybot. Otherwise, check your browser proxy settings and browser extensions ....

Comment author: MaoShan 11 February 2013 12:54:24AM 2 points [-]

I think it actually may have been an add-on that was intentionally (or just carelessly) installed into Firefox by another family member. I can shut it off myself. Seriously, who would download a program that explicitly promises more popups? (facepalm)

Comment author: CCC 11 February 2013 07:30:32AM *  3 points [-]

Seriously, who would download a program that explicitly promises more popups?

Depends how it's marketed. Or whether the person downloading it knew what they were downloading. Or even that they were downloading/installing something.

Comment author: army1987 11 February 2013 07:41:55PM 0 points [-]

Seriously, who would download a program that explicitly promises more popups?

(emphasis added)

Comment author: army1987 11 February 2013 07:43:24PM 2 points [-]

(Again, I should stop replying to comments without reading their ancestors first.)

Comment author: [deleted] 10 February 2013 05:33:38AM *  0 points [-]

I didn't find them either

Comment author: poiuyt 09 February 2013 07:07:32AM 1 point [-]

I'm not seeing any popups?

Comment author: MaoShan 10 February 2013 03:09:27AM 0 points [-]

Refer to the nested comment above for the details. So nobody else here has links on those words?

Comment author: poiuyt 10 February 2013 07:17:18PM 1 point [-]

Pretty sure you've got some adware. Especially if the links are green and in a funny font.

Comment author: Qiaochu_Yuan 10 February 2013 05:46:41AM 0 points [-]

Nope. Just you, I'm afraid.