http://www.newyorker.com/reporting/2013/07/29/130729fa_fact_gawande?currentPage=all

Seems related to many topics discussed on LW, such as the low adoption of cryonics and the difficulty of getting researchers convinced of AI risk.

Four weeks later, on November 18th, Bigelow published his report on the discovery of “insensibility produced by inhalation” in the Boston Medical and Surgical Journal. Morton would not divulge the composition of the gas, which he called Letheon, because he had applied for a patent. But Bigelow reported that he smelled ether in it (ether was used as an ingredient in certain medical preparations), and that seems to have been enough. The idea spread like a contagion, travelling through letters, meetings, and periodicals. By mid-December, surgeons were administering ether to patients in Paris and London. By February, anesthesia had been used in almost all the capitals of Europe, and by June in most regions of the world. [...] Within seven years, virtually every hospital in America and Britain had adopted the new discovery. [...]

Sepsis—infection—was the other great scourge of surgery. It was the single biggest killer of surgical patients, claiming as many as half of those who underwent major operations, such as a repair of an open fracture or the amputation of a limb. [...]

During the next few years, he perfected ways to use carbolic acid for cleansing hands and wounds and destroying any germs that might enter the operating field. The result was strikingly lower rates of sepsis and death. You would have thought that, when he published his observations in a groundbreaking series of reports in The Lancet, in 1867, his antiseptic method would have spread as rapidly as anesthesia.

Far from it. The surgeon J. M. T. Finney recalled that, when he was a trainee at Massachusetts General Hospital two decades later, hand washing was still perfunctory. Surgeons soaked their instruments in carbolic acid, but they continued to operate in black frock coats stiffened with the blood and viscera of previous operations—the badge of a busy practice. Instead of using fresh gauze as sponges, they reused sea sponges without sterilizing them. It was a generation before Lister’s recommendations became routine and the next steps were taken toward the modern standard of asepsis—that is, entirely excluding germs from the surgical field, using heat-sterilized instruments and surgical teams clad in sterile gowns and gloves. [...]

Did the spread of anesthesia and antisepsis differ for economic reasons? Actually, the incentives for both ran in the right direction. If painless surgery attracted paying patients, so would a noticeably lower death rate. Besides, live patients were more likely to make good on their surgery bill. Maybe ideas that violate prior beliefs are harder to embrace. To nineteenth-century surgeons, germ theory seemed as illogical as, say, Darwin’s theory that human beings evolved from primates. Then again, so did the idea that you could inhale a gas and enter a pain-free state of suspended animation. Proponents of anesthesia overcame belief by encouraging surgeons to try ether on a patient and witness the results for themselves—to take a test drive. When Lister tried this strategy, however, he made little progress. [...]

The technical complexity might have been part of the difficulty. [...] But anesthesia was no easier. [...]

So what were the key differences? First, one combatted a visible and immediate problem (pain); the other combatted an invisible problem (germs) whose effects wouldn’t be manifest until well after the operation. Second, although both made life better for patients, only one made life better for doctors. Anesthesia changed surgery from a brutal, time-pressured assault on a shrieking patient to a quiet, considered procedure. Listerism, by contrast, required the operator to work in a shower of carbolic acid. Even low dilutions burned the surgeons’ hands. You can imagine why Lister’s crusade might have been a tough sell.

This has been the pattern of many important but stalled ideas. They attack problems that are big but, to most people, invisible; and making them work can be tedious, if not outright painful. The global destruction wrought by a warming climate, the health damage from our over-sugared modern diet, the economic and social disaster of our trillion dollars in unpaid student debt—these things worsen imperceptibly every day. Meanwhile, the carbolic-acid remedies to them, all requiring individual sacrifice of one kind or another, struggle to get anywhere. [...]

The staff members I met in India had impressive experience. Even the youngest nurses had done more than a thousand child deliveries. [...] But then we hung out in the wards for a while. In the delivery room, a boy had just been born. He and his mother were lying on a cot, bundled under woollen blankets, resting. The room was coffin-cold; I was having trouble feeling my toes. [...] Voluminous evidence shows that it is far better to place the child on the mother’s chest or belly, skin to skin, so that the mother’s body can regulate the baby’s until it is ready to take over. Among small or premature babies, kangaroo care (as it is known) cuts mortality rates by a third.

So why hadn’t the nurse swaddled the two together? [...]

“The mother didn’t want it,” she explained. “She said she was too cold.”

The nurse seemed to think it was strange that I was making such an issue of this. The baby was fine, wasn’t he? And he was. He was sleeping sweetly, a tightly wrapped peanut with a scrunched brown face and his mouth in a lowercase “o.” [...]

Everything about the life the nurse leads—the hours she puts in, the circumstances she endures, the satisfaction she takes in her abilities—shows that she cares. But hypothermia, like the germs that Lister wanted surgeons to battle, is invisible to her. We picture a blue child, suffering right before our eyes. That is not what hypothermia looks like. It is a child who is just a few degrees too cold, too sluggish, too slow to feed. It will be some time before the baby begins to lose weight, stops making urine, develops pneumonia or a bloodstream infection. Long before that happens—usually the morning after the delivery, perhaps the same night—the mother will have hobbled to an auto-rickshaw, propped herself beside her husband, held her new baby tight, and ridden the rutted roads home.

From the nurse’s point of view, she’d helped bring another life into the world. If four per cent of the newborns later died at home, what could that possibly have to do with how she wrapped the mother and child? Or whether she washed her hands before putting on gloves? Or whether the blade with which she cut the umbilical cord was sterilized? [...]

A decade after the landmark findings, the idea remained stalled. Nothing much had changed. Diarrheal disease remained the world’s biggest killer of children under the age of five.

In 1980, however, a Bangladeshi nonprofit organization called BRAC decided to try to get oral rehydration therapy adopted nationwide. The campaign required reaching a mostly illiterate population. The most recent public-health campaign—to teach family planning—had been deeply unpopular. The messages the campaign needed to spread were complicated.

Nonetheless, the campaign proved remarkably successful. A gem of a book published in Bangladesh, “A Simple Solution,” tells the story. The organization didn’t launch a mass-media campaign—only twenty per cent of the population had a radio, after all. It attacked the problem in a way that is routinely dismissed as impractical and inefficient: by going door to door, person by person, and just talking. [...]

They recruited teams of fourteen young women, a cook, and a male supervisor, figuring that the supervisor would protect them from others as they travelled, and the women’s numbers would protect them from the supervisor. They travelled on foot, pitched camp near each village, fanned out door to door, and stayed until they had talked to women in every hut. They worked long days, six days a week. Each night after dinner, they held a meeting to discuss what went well and what didn’t and to share ideas on how to do better. Leaders periodically debriefed them, as well. [...]

The program was stunningly successful. Use of oral rehydration therapy skyrocketed. The knowledge became self-propagating. The program had changed the norms. [...]

As other countries adopted Bangladesh’s approach, global diarrheal deaths dropped from five million a year to two million, despite a fifty-per-cent increase in the world’s population during the past three decades. Nonetheless, only a third of children in the developing world receive oral rehydration therapy. Many countries tried to implement at arm’s length, going “low touch,” without sandals on the ground. As a recent study by the Gates Foundation and the University of Washington has documented, those countries have failed almost entirely. People talking to people is still how the world’s standards change.

Surgeons finally did upgrade their antiseptic standards at the end of the nineteenth century. But, as is often the case with new ideas, the effort required deeper changes than anyone had anticipated. In their blood-slick, viscera-encrusted black coats, surgeons had seen themselves as warriors doing hemorrhagic battle with little more than their bare hands. A few pioneering Germans, however, seized on the idea of the surgeon as scientist. They traded in their black coats for pristine laboratory whites, refashioned their operating rooms to achieve the exacting sterility of a bacteriological lab, and embraced anatomic precision over speed.

The key message to teach surgeons, it turned out, was not how to stop germs but how to think like a laboratory scientist. Young physicians from America and elsewhere who went to Germany to study with its surgical luminaries became fervent converts to their thinking and their standards. They returned as apostles not only for the use of antiseptic practice (to kill germs) but also for the much more exacting demands of aseptic practice (to prevent germs), such as wearing sterile gloves, gowns, hats, and masks. Proselytizing through their own students and colleagues, they finally spread the ideas worldwide.

In childbirth, we have only begun to accept that the critical practices aren’t going to spread themselves. Simple “awareness” isn’t going to solve anything. We need our sales force and our seven easy-to-remember messages. And in many places around the world the concerted, person-by-person effort of changing norms is under way.

I recently asked BetterBirth workers in India whether they’d yet seen a birth attendant change what she does. Yes, they said, but they’ve found that it takes a while. They begin by providing a day of classroom training for birth attendants and hospital leaders in the checklist of practices to be followed. Then they visit them on site to observe as they try to apply the lessons. [...]

Sister Seema Yadav, a twenty-four-year-old, round-faced nurse three years out of school, was one of the trainers. [...] Her first assignment was to follow a thirty-year-old nurse with vastly more experience than she had. Watching the nurse take a woman through labor and delivery, she saw how little of the training had been absorbed. [...] By the fourth or fifth visit, their conversations had shifted. They shared cups of chai and began talking about why you must wash hands even if you wear gloves (because of holes in the gloves and the tendency to touch equipment without them on), and why checking blood pressure matters (because hypertension is a sign of eclampsia, which, when untreated, is a common cause of death among pregnant women). They learned a bit about each other, too. Both turned out to have one child—Sister Seema a four-year-old boy, the nurse an eight-year-old girl. [...]

Soon, she said, the nurse began to change. After several visits, she was taking temperatures and blood pressures properly, washing her hands, giving the necessary medications—almost everything. Sister Seema saw it with her own eyes.

She’d had to move on to another pilot site after that, however. And although the project is tracking the outcomes of mothers and newborns, it will be a while before we have enough numbers to know if a difference has been made. So I got the nurse’s phone number and, with a translator to help with the Hindi, I gave her a call.

It had been four months since Sister Seema’s visit ended. I asked her whether she’d made any changes. Lots, she said. [...]

She said that she had eventually begun to see the effects. Bleeding after delivery was reduced. She recognized problems earlier. She rescued a baby who wasn’t breathing. She diagnosed eclampsia in a mother and treated it. You could hear her pride as she told her stories.

Many of the changes took practice for her, she said. She had to learn, for instance, how to have all the critical supplies—blood-pressure cuff, thermometer, soap, clean gloves, baby respiratory mask, medications—lined up and ready for when she needed them; how to fit the use of them into her routine; how to convince mothers and their relatives that the best thing for a child was to be bundled against the mother’s skin. But, step by step, Sister Seema had helped her to do it. “She showed me how to get things done practically,” the nurse said.

“Why did you listen to her?” I asked. “She had only a fraction of your experience.”

In the beginning, she didn’t, the nurse admitted. “The first day she came, I felt the workload on my head was increasing.” From the second time, however, the nurse began feeling better about the visits. She even began looking forward to them.

“Why?” I asked.

All the nurse could think to say was “She was nice.”

“She was nice?”

“She smiled a lot.”

“That was it?”

“It wasn’t like talking to someone who was trying to find mistakes,” she said. “It was like talking to a friend.”

That, I think, was the answer. Since then, the nurse had developed her own way of explaining why newborns needed to be warmed skin to skin. She said that she now tells families, “Inside the uterus, the baby is very warm. So when the baby comes out it should be kept very warm. The mother’s skin does this.”

I hadn’t been sure if she was just telling me what I wanted to hear. But when I heard her explain how she’d put her own words to what she’d learned, I knew that the ideas had spread. “Do the families listen?” I asked.

“Sometimes they don’t,” she said. “Usually, they do.”


Missing summary:

Best practices spread better when there is buy-in from the practitioners. Some conditions that help produce it:

  • the positive effect is immediate (pain relief)
  • the practitioner benefits emotionally, not just financially (antisepsis: doctors as scientists)
  • the benefit is explained and demonstrated in a friendly and clear way (door-to-door rehydration education, one-on-one nurse training)

I am not at all sure that these lessons are transferable to cryo or AI risk advocacy.

I am not at all sure that these lessons are transferable to cryo or AI risk advocacy.

I felt that the main transferable lesson was the broader point about a change in habits requiring a change in the overall culture. Sometimes you can do it with friendly door-to-door education, but sometimes it requires a broader shift, as with the adoption of antisepsis. That seems like rough evidence that MIRI's and CFAR's efforts to build cultures of thinking about these things in a new way are a strategy worth pursuing. This article caused me to assign a considerably higher probability than before to CFAR having a major effect.

Also some obvious parallels in that e.g. taking steps to increase AI safety doesn't really provide emotional benefits to current AI researchers, nor does the thought of cryonics provide emotional benefits to most of the people considering signing up, though those points might be relatively well-understood here already.

the practitioner benefits emotionally, not just financially (antisepsis: doctors as scientists)

I would guess that you feel emotionally better if fewer of your patients die.


A study conducted in 2007[27] sought to determine why people believe they share emotional episodes. According to self reports by participants, there are several main reasons why people initiate social sharing behaviors (in no particular order):

Rehearse—to remember or re-experience the event
Vent—to express or alleviate pent-up emotions, to attempt catharsis
Obtain help, support, and comfort—to receive consolation and sympathy
Legitimization—to validate one’s emotions of the event and have them approved
Clarification and meaning—to clarify certain aspects of the event that were not well understood, to find meaning in the happenings of the event
Advice—to seek guidance and find solutions to problems created by the event
Bonding—to become closer to others and reduce feelings of loneliness
Empathy—to emotionally arouse or touch the listener
Draw attention—to receive attention from others, possibly to impress others
Entertain—to engage others and facilitate social interactions[4]

I think the friendly person-to-person part could apply to cryo.

There's at least one more thing to add to your summary. Test, test, test. Admittedly, this wasn't part of the history of every idea that's spread, but it helped a lot with the rehydration project.

The friendly person-to-person part also applies to accepting Jesus as your lord and saviour.

I don't see an important societal-level benefit from promoting cryo. The money spent on it would be better used elsewhere, especially since the younger lives you save now are likely to live until indefinite life extension, under assumptions common among cryonics proponents.

And in any case, those who sign up already try to convince as many others as they can, to keep their cryo provider afloat or fund experiments.

You mention only two concrete topics that have a hard time spreading, so I will address only those:

Adoption of cryonics

This is one level more difficult than antisepsis because, on top of the (financial) burden on the user, there is no observable benefit at all, only a potential future one. And that benefit depends on buying into a certain prediction of the future, namely that sufficiently advanced technology is possible and near.

“Prediction is very difficult, especially if it's about the future.” --Niels Bohr

There may be good reasons for it, but if understanding them requires a complex model of the world, then from the outside it may look like a cult you have to buy into, and cryonics ends up looking not much different from other afterlife memes.

So until the predictions of the future become evidently plausible or generally accepted, you will have difficulty converting laymen, except those who would also take Pascal's wager, i.e., a bet that posits very high gains on very small chances. And if you convert those first, you will look even more like a cult.

Thus my recommendation is to first convince experts to use cryonics. They are more likely to really understand the predictions. If they sign up for cryonics, they will likely spread the word among colleagues, and that will be your audience.

the difficulty of getting researchers convinced of AI risk.

This is more amenable to the approaches proposed or implied in the post, because it explicitly addresses researchers. But it also suffers from the risk being abstract. By its structure, this should also apply to all other extreme-risk scenarios.

I wonder if there are success stories like the ones from the article about protection against other extreme but actually occurring risks, such as earthquakes, tsunamis, and volcanism. I seem to remember that tsunami protection was not successfully applied everywhere. Some mention of the Philippines?

How do you convince a researcher of such a risk? I think the difficulty is not in getting a scientist to understand that UFAI could wreak the greatest havoc. The point to get across is that UFAI is not a hypothetical construct (like the devil of religion, which also has to be believed in) but something that can actually come about by a plausible technological path.

And I don't see this path clearly. I see the runaway recursive self-improvement argument, but in the end that is no different from invoking 'emergence'. One needs to quantify this recursive process. Yet as far as I can tell from "Why AI may not foom", and especially the comment http://lesswrong.com/lw/gn3/why_ai_may_not_foom/8nk4, modelling with differential equations seems to be actively avoided; instead, an appeal is made to unmodellability. That I find disturbing, to say the least.
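To illustrate what I mean by quantifying (this is my own toy sketch, not a model from the cited post, and the exponent $\alpha$ is a made-up parameter): let $I(t)$ be the system's capability and suppose self-improvement feeds back as

$$\frac{dI}{dt} = k I^{\alpha}, \qquad I(0) = I_0 .$$

For $\alpha \le 1$ growth is at most exponential; for $\alpha > 1$, separating variables gives $I(t) = \left( I_0^{1-\alpha} - (\alpha - 1) k t \right)^{1/(1-\alpha)}$, which diverges at the finite time $t^{*} = I_0^{1-\alpha} / \left( (\alpha - 1) k \right)$. Whether a "foom" happens then reduces to an empirical question about the effective $\alpha$ of the feedback loop, which is at least a modellable question.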

And that benefit depends on buying into a certain prediction of the future, namely that sufficiently advanced technology is possible and near.

And that the sufficiently advanced technology won't destroy the world.

As a case study of extreme-risk prevention success, Castro's Cuba has had almost no hurricane deaths during his entire tenure, IIRC. This was probably based more on structural preparedness than on getting buy-in, but it might be worth a look anyway.

[This comment is no longer endorsed by its author]