Missing summary:
Best practices spread better when there is buy-in from the practitioners. Some conditions for getting it:
I am not at all sure that these lessons are transferable to cryo or AI risk advocacy.
I am not at all sure that these lessons are transferable to cryo or AI risk advocacy.
I felt that the main transferable lesson was the broader point about a change in habits requiring a change in the overall culture. Sometimes you can do it with friendly door-to-door education, but sometimes it requires a broader shift, as with the adoption of antisepsis. That seems like rough evidence that MIRI's and CFAR's efforts at building cultures that think about these things in a new manner are a strategy worth pursuing. This article caused me to assign a considerably greater probability than before to the possibility of CFAR having a major effect.
There are also some obvious parallels in that, e.g., taking steps to increase AI safety doesn't really provide emotional benefits to current AI researchers, nor does the thought of cryonics provide emotional benefits to most of the people considering signing up, though those points may already be relatively well understood here.
the practitioner benefits emotionally, not just financially (antisepsis: doctors as scientists)
I would guess that you feel emotionally better if fewer of your patients die.
A study conducted in 2007[27] sought to determine why people believe they share emotional episodes. According to participants' self-reports, there are several main reasons why people initiate social sharing behaviors (in no particular order):
Rehearse—to remember or re-experience the event
Vent—to express or alleviate pent-up emotions, to attempt catharsis
Obtain help, support, and comfort—to receive consolation and sympathy
Legitimization—to validate one’s emotions of the event and have them approved
Clarification and meaning—to clarify certain aspects of the event that were not well understood, to find meaning in the happenings of the event
Advice—to seek guidance and find solutions to problems created by the event
Bonding—to become closer to others and reduce feelings of loneliness
Empathy—to emotionally arouse or touch the listener
Draw attention—to receive attention from others, possibly to impress others
Entertain—to engage others and facilitate social interactions[4]
I think the friendly person-to-person part could apply to cryo.
There's at least one more thing to add to your summary. Test, test, test. Admittedly, this wasn't part of the history of every idea that's spread, but it helped a lot with the rehydration project.
Friendly person to person part also applies to accepting Jesus as your lord and saviour.
I don't see an important societal-level benefit from promoting cryo. The money spent on it is better used elsewhere, especially as the younger lives that you save now are likely to last until indefinite life extension, under assumptions common among cryonics proponents.
And in any case, those who sign up already try to convince as many others as they can, to keep their cryo provider afloat or fund experiments.
You mention only two concrete topics which have a hard time spreading, so I will only address these:
Adoption of cryonics
This is one level more difficult than antisepsis because, on top of the (financial) burden on the user, there is no observable benefit at all, only a potential future one. And that benefit depends on buying into a certain prediction of the future, namely that sufficiently advanced technology is possible and near.
"Prediction is very difficult, especially if it's about the future." --Niels Bohr
There may be good reasons for it, but if those reasons require a complex model of the world to understand, then from the outside it may look like a cult you have to buy into, and cryonics then looks not much different from other afterlife memes.
So until the predictions of the future become evidently plausible or generally accepted, you will have difficulty converting laymen, except those who would also take Pascal's wager, i.e. a construction that posits very high gains on very small chances. And if you convert these first, you will look even more like a cult.
Thus my recommendation is to first convince experts to use cryonics. They are more likely to really understand the predictions. If they sign up for cryonics, they will likely spread the word among colleagues, and that will be your audience.
the difficulty of getting researchers convinced of AI risk.
This is more amenable to the approaches proposed or implied in the post, because it explicitly addresses researchers. But it also suffers from the abstractness of the risk. By the same structure, this should apply to all other extreme-risk scenarios as well.
I wonder if there are success stories like the ones from the article about protection against other extreme but actually occurring risks, like earthquakes, tsunamis, or volcanism. I seem to remember that tsunami protection was not successfully applied everywhere; there was some mention of the Philippines.
How do you convince a researcher that the risk is so high? I think the difficulty is not in getting a scientist to understand that UFAI could wreak the greatest havoc. The point to bring across is that UFAI is not a hypothetical construction (like the devil from religion, which also needs to be believed in) but one that can really come about via a reasonable technological path.
And I don't see this path clearly. I see the runaway recursive self-improvement argument. But in the end that is no different from invoking 'emergence'. One needs to quantify this recursive process. But as far as I can tell from Why AI may not foom, and especially the comment http://lesswrong.com/lw/gn3/why_ai_may_not_foom/8nk4, modelling with differential equations seems to be actively avoided; instead, an appeal is made to unmodellability. That I find disturbing, to say the least.
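To show what quantifying the recursive process could even look like, here is a minimal toy model (my own illustrative sketch, not something taken from the linked thread): let intelligence I(t) improve itself with feedback strength α.

```latex
% Toy model (illustrative assumption): self-improvement with
% feedback exponent \alpha and rate constant k.
\frac{dI}{dt} = k\,I^{\alpha}, \qquad I(0) = I_0 > 0,\ k > 0.
% Separation of variables gives, for \alpha \neq 1,
I(t) = \Bigl[\, I_0^{\,1-\alpha} - (\alpha - 1)\,k\,t \,\Bigr]^{\frac{1}{1-\alpha}}.
% For \alpha \le 1 growth is at most exponential (no foom);
% for \alpha > 1 the solution diverges at the finite time
t^{*} = \frac{I_0^{\,1-\alpha}}{(\alpha - 1)\,k}.
```

On this sketch, whether "foom" happens reduces to an empirical question about the feedback exponent α, i.e. exactly the kind of parameter a quantified version of the debate could argue over instead of appealing to unmodellability.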
And that benefit depends on buying in a certain prediction of the future - namely that sufficiently advanced technology is possible and near.
And that the sufficiently advanced technology won't destroy the world.
As a case study of extreme-risk prevention success: Castro's Cuba has had almost no hurricane deaths for his entire tenure, IIRC. This was probably based more on structural preparedness than on getting buy-in, but it might be worth a look anyway.
http://www.newyorker.com/reporting/2013/07/29/130729fa_fact_gawande?currentPage=all
Seems related to many topics discussed on LW, such as the low adoption of cryonics and the difficulty of getting researchers convinced of AI risk.