Gunnar_Zarncke comments on: How do good ideas spread? - Less Wrong

Post author: Kaj_Sotala 03 January 2014 08:19PM

Comment author: Gunnar_Zarncke 04 January 2014 12:21:14AM 2 points

You mention only two concrete topics that have a hard time spreading, so I will address only these:

Adoption of cryonics

This is one level more difficult than antisepsis, because on top of the burden on the user (a financial one) there is no observable benefit at all, only a potential future one. And that benefit depends on buying into a certain prediction of the future - namely that sufficiently advanced technology is possible and near.

"Prediction is very difficult, especially if it's about the future." --Nils Bohr

There may be good reasons for it, but if those reasons require a complex model of the world to understand, then from the outside it may look like a cult you have to buy into - and then cryonics looks not much different from other afterlife memes you have to buy.

So until the predictions of the future become evidently plausible or generally accepted, you will have difficulty converting laymen - except those who would also bet on Pascal's wager, i.e. a construction that posits very high gains at very small chances. And if you convert these people first, you will look even more like a cult.
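
To make the structure of that wager concrete, here is a toy expected-value sketch in Python (every number is a made-up assumption for illustration, not an estimate from the post or from anyone in this thread):

```python
# Illustrative only: hypothetical numbers, not actual cryonics estimates.
# The Pascal's-wager structure: a tiny probability of a huge payoff can
# dominate a certain cost in naive expected-value terms.

p_success = 1e-4   # assumed probability that preservation and revival work
payoff = 1e6       # assumed value of revival, in quality-adjusted life-years
cost = 2.0         # assumed lifetime cost, converted to the same units

expected_value = p_success * payoff - cost
print(f"Expected value: {expected_value:+.1f} life-years")  # -> +98.0
```

The point is not the numbers but the shape: anyone who accepts this kind of calculation will also accept many other tiny-probability, huge-payoff offers, which is exactly why it pattern-matches to a cult pitch.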

Thus my recommendation is to first convince experts to use cryonics. They are more likely to really understand the predictions. If they sign up for cryonics, they will likely spread the word among colleagues, and that will be your audience.

The difficulty of getting researchers convinced of AI risk

This is more amenable to the approaches proposed or implied in the post, because it explicitly addresses researchers. But it also suffers from the abstractness of the risk. By its structure, this should also apply to all other extreme-risk scenarios.

I wonder if there are success stories like the ones from the article, but about protection against other extreme yet actually occurring risks such as earthquakes, tsunamis, and volcanism. I seem to remember that tsunami protection was not successfully applied everywhere - some mention of the Philippines?

How do you convince a researcher that the risk is so high? I think the difficulty is not in getting a scientist to understand that UFAI could wreak the greatest havoc. The point to bring across is that UFAI is not a hypothetical construction (like the devil in religion, which also has to be believed in) but a construction that can really come about by a plausible technological path.

And I don't see this path clearly. I see the runaway recursive self-improvement argument, but in the end that is no different from invoking 'emergence'. One needs to quantify this recursive process. Yet as far as I can tell from Why AI may not foom, and especially the comment http://lesswrong.com/lw/gn3/why_ai_may_not_foom/8nk4, modelling with differential equations actually seems to be avoided; instead, an appeal is made to unmodellability. I find that disturbing, to say the least.
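
To illustrate the kind of quantification I mean, here is a minimal sketch (the growth law dI/dt = c*I^k and every constant in it are my own assumptions, nothing from the linked post): whether you get a 'foom' hinges on the exponent k. For k < 1 growth is polynomial, for k = 1 exponential, and for k > 1 the solution diverges in finite time.

```python
# Toy model of recursive self-improvement: dI/dt = c * I**k.
# All parameters are hypothetical; the point is that the qualitative
# outcome depends on the (in principle measurable) exponent k.

def simulate(k, c=0.1, i0=1.0, dt=0.01, t_max=50.0, cap=1e9):
    """Forward-Euler integration of dI/dt = c * I**k."""
    i, t = i0, 0.0
    while t < t_max:
        i += c * i**k * dt
        t += dt
        if i > cap:                 # finite-time blow-up: the "foom" regime
            return t, float("inf")
    return t_max, i

for k in (0.5, 1.0, 1.5):
    t, i = simulate(k)
    print(f"k={k}: I({t:.1f}) = {i:.3g}")
# k=0.5 -> slow polynomial growth; k=1.0 -> exponential; k=1.5 -> diverges before t=50.
```

This is exactly the sort of model I would like to see argued about quantitatively, rather than declared unmodellable.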

Comment author: ChristianKl 05 January 2014 02:42:48PM 0 points

> And that benefit depends on buying into a certain prediction of the future - namely that sufficiently advanced technology is possible and near.

And that the sufficiently advanced technology won't destroy the world.

Comment author: VAuroch 04 January 2014 08:31:09AM -1 points

As a case study of extreme-risk prevention success: Castro's Cuba has had almost no hurricane deaths during his entire tenure, IIRC. This was probably based more on structural preparedness than on getting buy-in, but it might be worth a look anyway.