A brief history of ethically concerned scientists

68 Kaj_Sotala 09 February 2013 05:50AM

For the first time in history, it has become possible for a limited group of a few thousand people to threaten the absolute destruction of millions.

-- Norbert Wiener (1956), Moral Reflections of a Mathematician.


Today, the general attitude towards scientific discovery is that scientists are not themselves responsible for how their work is used. For someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.

But this is not necessarily the attitude that we should encourage. As technology becomes more powerful, it also becomes more dangerous. Throughout history, many scientists and inventors have recognized this, and taken different kinds of action to help ensure that their work will have beneficial consequences. Here are some of them.

This post is not arguing that any specific approach for taking responsibility for one's actions is the correct one. Some researchers hid their work, others refocused on other fields, still others began active campaigns to change the way their work was being used. It is up to the reader to decide which of these approaches were successful and worth emulating, and which ones were not.

Pre-industrial inventors

… I do not publish nor divulge [methods of building submarines] by reason of the evil nature of men who would use them as means of destruction at the bottom of the sea, by sending ships to the bottom, and sinking them together with the men in them.

-- Leonardo da Vinci


People did not always think that the benefits of freely disseminating knowledge outweighed the harms. O.T. Benfey, writing in a 1956 issue of the Bulletin of the Atomic Scientists, cites F.S. Taylor’s book on early alchemists:

Alchemy was certainly intended to be useful .... But [the alchemist] never proposes the public use of such things, the disclosing of his knowledge for the benefit of man. …. Any disclosure of the alchemical secret was felt to be profoundly wrong, and likely to bring immediate punishment from on high. The reason generally given for such secrecy was the probable abuse by wicked men of the power that the alchemical would give …. The alchemists, indeed, felt a strong moral responsibility that is not always acknowledged by the scientists of today.


With the Renaissance, science began to be viewed as public property, but many scientists remained cautious about the way in which their work might be used. Although he held the office of military engineer, Leonardo da Vinci (1452-1519) drew a distinction between offensive and defensive warfare, and emphasized the role of good defenses in protecting people’s liberty from tyrants. He described war as ‘bestialissima pazzia’ (most bestial madness), and wrote that ‘it is an infinitely atrocious thing to take away the life of a man’. One of the clearest examples of his reluctance to unleash dangerous inventions was his refusal to publish the details of his plans for submarines.

Later Renaissance thinkers continued to be concerned with the potential uses of their discoveries. John Napier (1550-1617), the inventor of logarithms, also experimented with a new form of artillery. Upon seeing its destructive power, he decided to keep its details a secret, and even spoke from his deathbed against the creation of new kinds of weapons.

But concealing a single discovery pales in comparison to the secrecy practiced by Robert Boyle (1627-1691). A pioneer of physics and chemistry, perhaps most famous for describing and publishing Boyle’s law, he sought to make humanity better off, taking an interest in matters such as improved agricultural methods and better medicine. In his studies, he also acquired knowledge and made inventions related to a variety of potentially harmful subjects, including poisons, invisible ink, counterfeit money, explosives, and kinetic weaponry. These ‘my love of Mankind has oblig’d me to conceal, even from my nearest Friends’.


Complexity: inherent, created, and hidden

8 Swimmer963 14 September 2011 02:33PM

Related to: inferential distance, fun theory sequence.

“The arrow of human history…points towards larger quantities of non-zero-sumness. As history progresses, human beings find themselves playing non-zero-sum games with more and more other human beings. Interdependence expands, and social complexity grows in scope and depth.” (Robert Wright, Nonzero: The Logic of Human Destiny.)

What does it mean for a human society to be more complex? Where does new information come from, and where in the system is it stored? What does it mean for everyday people to live in a simple versus a complex society?

There are certain kinds of complexity that are inherent in the environment: that existed before there were human societies at all, and would go on existing without those societies. Even the simplest human society needs to be able to adapt to these factors in order to survive. For example: climate and weather are necessary features of the planet, and humans still spend huge amounts of resources dealing with changing seasons, droughts, and the extremes of heat and cold. Certain plants grow in certain types of soil, and different animals have different migratory patterns. Even the most basic hunter-gatherer groups needed to store and pass on knowledge of these patterns. 

But even early human societies had a lot more than the minimum amount of knowledge required to live in a particular environment. Cultural complexity, in the form of traditions, conventions, rituals, and social roles, added to technological complexity, in the form of tools designed for particular purposes. Living in an agricultural society with division of labour and various different social roles required children to learn more than if they had been born to a small hunter-gatherer band. And although everyone in a village might have the same knowledge about the world, it was (probably) no longer possible for all the procedural skills taught and passed on in a given group to be mastered by a single person. (Imagine learning all the skills to be a farmer, carpenter, metalworker, weaver, baker, potter, and probably a half-dozen other things.)

This would have been the real beginning of Robert Wright’s interdependence and non-zero-sum interactions. No individual could possess all of the knowledge/complexity of their society, but every individual would benefit from its existence, at the price of a slightly longer education or apprenticeship than their counterparts in hunter-gatherer groups. The complexity was hidden; a person could wear a robe without knowing how to weave it, and use a clay bowl without knowing how to shape it or bake it in a kiln. There was room for that knowledge in other people’s brains. The only downside, other than the slightly longer investment in education, was a small increase in inferential distance between individuals.

Writing was the next step. For the first time, a significant amount of knowledge could be stored outside of anyone’s brain. Information could be passed on from one individual, the writer, to a nearly unbounded number of others, the readers. Considering the limits of human working memory, significant mathematical discoveries would have been impossible before there was a form of notation. (Imagine solving polynomial equations without pencil and paper.) And for the first time, knowledge was cumulative. An individual no longer had to spend a number of years mastering a particular, specific skill in an apprenticeship, having to laboriously pass on any new discoveries one at a time to their own apprentices. The new generation could start where the previous generation had left off. Knowledge could stay alive almost indefinitely in writing, without having to pass through a continuous line of minds. (Without writing, the scientific and mathematical knowledge of the ancient Greeks could never have been rediscovered by later societies once the chain of living teachers was broken.) Conditions were ripe for the total sum of human knowledge to explode, and for complexity to increase rapidly.

The downside was a huge increase in inferential distance. For the first time, not only could individuals lack a particular procedural skill, they might not even know that the skill existed. They might not even benefit from the fact of its existence. The stock market contains a huge amount of knowledge and complexity, and provides non-zero-sum gains to many individuals (as well as zero-sum gains to some individuals). But to understand it requires enough education and training that most individuals can’t participate. The difference between the medical knowledge of professionals and that of uneducated individuals is huge, and I expect that many people suffer because, although someone somewhere knows how their medical problems could be avoided or treated, they themselves don’t. Computers, aside from being really nifty, are also incredibly useful, but learning to use them well is challenging enough that a lot of people, especially older people, don’t or can’t.

(That being said, nearly everyone in Western nations benefits from living here and now, instead of in an agricultural village 4000 years ago. Think of the complexity embodied in the justice system and the health care system, both of which make life easier and safer for nearly everyone regardless of whether they actually train as professionals in those domains. But people don’t benefit as much as they could.)

Is there any way to avoid this? It’s probably impossible for an individual to have even a superficial understanding of every domain of knowledge, much less the level of understanding required to benefit from that knowledge. Just keeping up with day-to-day life (managing finances, holding a job, and trying to socialize in an environment vastly different from the ancestral one) can be trying, especially for individuals on the lower end of the IQ bell curve. (I hate the idea that intelligence, something not under the individual’s control and thus unfair-seeming, is that important to success, but I’m pretty sure it’s true.) This might be why so many people are unhappy. Without regressing to a less complex kind of society, is there anything we can do?

I think the answer is quite clear, because even as societies become more complex, the arrow of daily-life-difficulty-level doesn’t always go in the same direction. There are various examples of this: computers, for instance, have become more user-friendly over time. But I’ll use an example that comes readily to mind for me: automated external defibrillators, or AEDs.

A defibrillator uses electricity to interrupt an abnormal heart rhythm (ventricular fibrillation is the typical example, thus de-fibrillation). External means that the device acts from outside the patient’s body (pads with electrodes on the skin) rather than being implanted. Most defibrillators require training to use and can cause a lot of harm if they’re used wrong. The automated part is what changes this. AEDs will analyze a patient’s heart rhythm, and they will only shock if it is necessary. They have colorful diagrams and recorded verbal instructions. There’s probably a way to use an AED wrong, but you would have to be very creative to find it. Needless to say, the technology involved is ridiculously complex and took years to develop, but you don’t need to understand the science involved in order to use an AED. You probably don’t even need to read. The complexity is neatly hidden away; all that matters is that someone knows it. There weren’t necessarily any ground-breaking innovations involved, just old inventions repackaged in a user-friendly format.

The difference is intelligence. An AED has some limited artificial intelligence in it, programmed in by people who knew what they were talking about, which is why it can replace the decision process that would otherwise be made by medical professionals. A book contains knowledge, but has to be read and interpreted in its entirety by a human brain. A device that has its own small brain doesn’t. This is probably where our society is headed if the arrow of (technological) complexity keeps going up. Societies need to be livable for human beings.

That being said, there is probably such a thing as too much hidden complexity. If most of the information in a given society is hidden, embodied by non-human intelligences, then life as a garden-variety human would be awfully boring. Which could be the main reason for exploring human cognitive enhancement, but that’s a whole different story.

Rationalist sites worth archiving?

22 gwern 11 September 2011 03:24PM

One of my long-standing interests is in writing content that will age gracefully, but as a child of the Internet, I am addicted to linking, and linkrot is profoundly threatening to me. So another interest of mine is archiving URLs: my current methodology combines archiving my browsing both locally and in public archives like the Internet Archive, and proactively archiving entire sites. Anyway, sites I have previously archived in part or in total include:

  1. LessWrong (I may've caused some downtime here, sorry about that)
  2. OvercomingBias
  3. SL4
  4. Chronopause.com
  5. Yudkowsky.net (in progress)
  6. Singinst.org
  7. PredictionBook.com (for obvious reasons)
  8. LongBets.org & LongNow.org
  9. Intrade.com
  10. Commonsenseatheism.com
  11. finney.org
  12. nickbostrom.com
  13. unenumerated.blogspot.com & http://szabo.best.vwh.net/
  14. weidai.com
  15. mattmahoney.net
  16. aibeliefs.blogspot.com

Having recently added WikiWix to my archival bot, I was thinking of re-running various sites, and I'd like to know: what other LW-related websites are there that people would like to be able to access 30 or 40 years from now?
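For anyone who wants to do something similar on a small scale, the "push a URL into a public archive" half of this workflow can be sketched in a few lines of Python. This is only an illustrative sketch, not my actual bot: the site list and the politeness delay are made up, though the Wayback Machine's `https://web.archive.org/save/` endpoint is real.

```python
import time
import urllib.request

WAYBACK_SAVE = "https://web.archive.org/save/"

def save_request_url(url):
    """Build the Wayback Machine 'Save Page Now' request URL for a page."""
    return WAYBACK_SAVE + url

def archive_sites(urls, delay_seconds=30, fetch=urllib.request.urlopen):
    """Ask the Internet Archive to snapshot each URL, politely throttled.

    `fetch` is injectable so the function can be exercised without
    actually hitting the network.
    """
    archived = []
    for url in urls:
        fetch(save_request_url(url))  # triggers an archive snapshot
        archived.append(url)
        time.sleep(delay_seconds)
    return archived

if __name__ == "__main__":
    # Hypothetical site list; a real bot would also mirror the sites
    # locally (e.g. with wget --mirror) and retry failed requests.
    for site in ["http://lesswrong.com/", "http://www.overcomingbias.com/"]:
        print(save_request_url(site))
```

A real bot needs more care than this — rate limits, error handling, and checking that the archive actually captured the page — but the core loop really is this simple.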

(This is an important long-term issue, and I don't want to miss any important sites, so I am posting this as an Article rather than the usual Discussion. I already regret not archiving Robert Bradbury's full personal website - having only his Matrioshka Brains article - and do not wish to repeat the mistake.)

A Transhumanist Poem

12 Swimmer963 05 March 2011 09:16AM

**Note: I'm not a poet. I hardly ever write poetry, and when I do, it's usually because I've stayed up all night. However, this seemed like a very appropriate poem for Less Wrong. Not sure if it's appropriate as a top-level post. Someone please tell me if not.**

 

Imagine

The first man

Who held a stick in rough hands

And drew lines on a cold stone wall

Imagine when the others looked

When they said, I see the antelope

I see it. 

 

Later on their children's children

Would build temples, and sing songs

To their many-faced gods.

Stone idols, empty staring eyes

Offerings laid on a cold stone altar

And left to rot. 

 

Yet later still there would be steamships

And trains, and numbers to measure the stars

Small suns ignited in the desert

One man's first step on an airless plain

 

Now we look backwards

At the ones who came before us

Who lived, and swiftly died. 

The first man's flesh is in all of us now

And for his and his children's sake

We imagine a world with no more death

And we see ourselves reflected

In the silicon eyes

Of our final creation

Research methods

13 Swimmer963 22 February 2011 06:10AM

I think I’ve always had certain stereotypes in my mind about research. I imagine a cutting-edge workplace, maybe not using the newest gadgets because these things cost money, but at least using the newest ideas. I imagine the staff of research institutions applying the scientific method to boost their own productivity, instead of taking for granted the way that things have always been done. Maybe those were the naive ideas of someone who had never actually worked in a research field.

At the medical research institute where I work one day a week, I recently spent an entire seven-hour day going down a list of patient names, searching them on the hospital database, deciding whether they met the criteria for a study, and typing them into a colour-coded spreadsheet. The process had maybe six discrete steps, and all of them were purely mechanical. In seven hours, I screened about two hundred and fifty patients. I was paid $12.50 an hour to do this. It cost my employer 35 cents for each patient that I screened, and these patients haven't been visited, consented or included in any study. They're still only names on a spreadsheet. I’ve been told that I learn and work quickly, but I know I do this task inefficiently, because I’m not a simple computer program. I get bored. I make mistakes. Heaven forbid, I get distracted and start reading the nurses’ notes for fun because I find them interesting.

In those same seven hours, I imagine that someone slightly above my skill level could write a simple program to do the same task. They wouldn’t screen any patients during those hours, but once the program was finished, it could be used indefinitely, or at least until the task changed and the program had to be modified. I don’t know how much it would cost the organization to employ a programmer; maybe it would cost more than just having me do it. I don’t know whether allowing that program to access the confidential database would be an issue. But it seems inefficient to pay human brains to do work that they’re bad at and that computers would be better at, even if those human brains belong to undergrad students who need the money badly enough not to complain.
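To make that concrete, here is roughly the kind of throwaway script I have in mind. Everything in it is hypothetical: the field names, the inclusion criteria, and the output format are stand-ins, since the real study's criteria and the database layout obviously aren't mine to publish.

```python
import csv

# Hypothetical inclusion criteria; the real screening had about six steps.
MIN_AGE = 18
LOCAL_CITY = "Hometown"  # placeholder for the actual city

def meets_criteria(patient):
    """Decide whether one patient record qualifies for the study."""
    return (patient["age"] >= MIN_AGE
            and patient["on_dialysis"]
            and patient["clinic_city"] == LOCAL_CITY)

def screen(patients, out_path):
    """Screen every record and write the eligible ones to a spreadsheet."""
    eligible = [p for p in patients if meets_criteria(p)]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "age", "clinic_city"])
        writer.writeheader()
        for p in eligible:
            writer.writerow({k: p[k] for k in ("name", "age", "clinic_city")})
    return eligible
```

A program like this never gets bored, never makes transcription mistakes, and never stops to read the nurses’ notes for fun; seven hours of mechanical work becomes seconds, and the interesting question becomes whether it would be allowed near the confidential database at all.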

One of the criteria I looked at when screening patients was whether they did their dialysis at a clinic in my hometown. They have to be within driving distance, because my supervisor has to drive around the city and pick up blood samples to bring to our lab. I crossed out 30 names without even looking them up because I could see at a glance that they were in a nearby city an hour’s drive away. How hard would it be to coordinate with the hospital in that city? Have the bloodwork analyzed there and the results emailed over? Maybe it would be non-trivially hard; I don’t know. I didn’t ask my supervisor because it isn’t my job to make management decisions. But medical research benefits everyone. A study with more patients produces statistically more valid data, even if those patients live an hour’s drive away.

The office where I work is filled with paper. Floor-to-ceiling shelves hold endless binders full of source documents. Every email has to be printed and filed in a binder. Even the nurses’ notes and patient charts are printed off the database. It’s a legal requirement. The result is that we have two copies of everything, one online and one on paper, consuming trees. Running a computer consumes fossil fuels, of course. I don’t know for sure which is more efficient, paper or digital, but I do know that both is inefficient. I did ask my supervisor about this, and apparently it’s because digital records could be lost or deleted. How much would it take to make them durable enough?

I guess that more than my supervisor, I see a future where software will do my job, where technology allows a study to be coordinated across the whole world, where digital storage will be reliable enough. But how long will it take for the laws and regulations to change? For people to change? I don’t know how many of my complaints are valid. Maybe this is the optimal way to do research, but it doesn’t feel like it. It feels like a papier-mâché of laws and habits and trial-and-error. It doesn't feel planned.