ChristianKl comments on MIRI strategy - Less Wrong

5 Post author: ColonelMustard 28 October 2013 03:33PM

Comment author: lukeprog 28 October 2013 06:24:28PM *  36 points
  • Pamphlets work for wells in Africa. They don't work for MIRI's mission. The inferential distance is too great, the ideas are too Far, the impact is too far away.
  • Eliezer spent SIAI's early years appealing directly to people about AI. Some good people found him, but the audience was being filtered for "interest in future technology" rather than "able to think," and thus when Eliezer would make basic arguments about e.g. the orthogonality thesis or basic AI drives, the responses he would get were basically random (except from the few good people). So Eliezer wrote The Sequences and HPMoR, and now the filter is "able to think," or at least "interest in improving one's thinking," and these people, in our experience, are much more likely to do useful things when we present the case for EA, for x-risk reduction, for FAI research, etc.
  • Still, we keep trying direct mission appeals, to some extent. I've given my standard talk, currently titled "Effective Altruism and Machine Intelligence," at Quixey, Facebook, and Heroku. This talk explains effective altruism, astronomical stakes, the x-risk landscape, and the challenge of FAI, all in 25 minutes. I don't know yet how much good effect this talk will have. There's Facing the Intelligence Explosion and the forthcoming Smarter Than Us. I've spent a fair amount of time promoting Our Final Invention.
  • I don't think we can get much of anywhere with a 1-page pamphlet, though. We tried a 4-page pamphlet once; it accomplished nothing.
Comment author: ColonelMustard 29 October 2013 12:44:57PM *  1 point

Thanks, Luke. This is an informative reply, and it's great to hear you have a standard talk! Is it publicly available, and if so, where can I see it? Maybe MIRI should ask FOAFs to publicise it?

It's also great to hear that MIRI has tried a pamphlet. I agree that "this one pamphlet we tried didn't work" is some evidence for "no pamphlet MIRI can produce will accomplish much," but that conclusion is far from certain. I'd still be interested in the general question: can MIRI reduce the chance of UFAI x-risk through pamphlets?

Pamphlets...don't work for MIRI's mission. The inferential distance is too great, the ideas are too Far, the impact is too far away.

You may be right. But it is possible to convince intelligent non-rationalists to take UFAI x-risk seriously in less than an hour (I've tested this), and anything that can replicate that process in a way that scales would have a huge impact. What's the Value of Information on trying to find such a thing? You mention the Sequences and HPMoR (which I've sent to a number of people with the instruction "set aside what you're doing and read this"). I definitely agree that they filter nicely for "able to think." But they also require a huge time commitment from the reader, whereas a pamphlet or blog post would not.

Comment author: ChristianKl 29 October 2013 06:33:26PM 0 points

You may be right. But, it is possible to convince intelligent non-rationalists to take UFAI x-risk seriously in less than an hour (I've tested this),

For what value of "take seriously" is that statement true?

Comment author: ColonelMustard 30 October 2013 01:26:20AM 0 points

"Hear ridiculous-sounding proposition, mark it as ridiculous, engage explanation, begin to accept arguments, begin to worry about this, agree to look at further reading"