In response to MIRI strategy
Comment author: Vladimir_Nesov 28 October 2013 06:07:59PM 14 points

Facing the Intelligence Explosion is a nontechnical introduction.

Comment author: ColonelMustard 29 October 2013 12:50:24PM 0 points

I agree and I like it. I think it could be further optimised for "convince intelligent non-LWers who have been sent one link from their rationalist friends and will read only that one link", but it could definitely serve as a great starting point.

In response to comment by lukeprog on MIRI strategy
Comment author: pslunch 29 October 2013 03:43:33AM 6 points

I would hesitate to use failure during "SIAI's early years" as evidence of how easy or difficult the task is. First, the organization seems far more capable now than it was at the time. Second, the landscape has shifted dramatically even in the last few years. Limited AI is continuing to expand, and with it discussion of the potential impacts (most of it ill-informed, but still).

While I share your skepticism about pamphlets as such, I do tend to think that MIRI has a greater chance of shifting the odds away from UFAI with persuasion/education rather than trying to build an FAI or doing mathematical research.

In response to comment by pslunch on MIRI strategy
Comment author: ColonelMustard 29 October 2013 12:46:55PM 2 points

I agree and would also add that "Eliezer failed in 2001 to convince many people" does not imply "Eliezer in 2013 is incapable of persuading people". From his writings, I understand he has changed his views considerably in the last dozen years.

In response to MIRI strategy
Comment author: lukeprog 28 October 2013 06:24:28PM 36 points
  • Pamphlets work for wells in Africa. They don't work for MIRI's mission. The inferential distance is too great, the ideas are too Far, the impact is too far away.
  • Eliezer spent SIAI's early years appealing directly to people about AI. Some good people found him, but the people were being filtered for "interest in future technology" rather than "able to think," and thus when Eliezer would make basic arguments about e.g. the orthogonality thesis or basic AI drives, the responses he would get were basically random (except for the few good people). So Eliezer wrote The Sequences and HPMoR and now the filter is "able to think" or at least "interest in improving one's thinking," and these people, in our experience, are much more likely to do useful things when we present the case for EA, for x-risk reduction, for FAI research, etc.
  • Still, we keep trying direct mission appeals, to some extent. I've given my standard talk, currently titled "Effective Altruism and Machine Intelligence," at Quixey, Facebook, and Heroku. This talk explains effective altruism, astronomical stakes, the x-risk landscape, and the challenge of FAI, all in 25 minutes. I don't know yet how much good this talk will do. There's Facing the Intelligence Explosion and the forthcoming Smarter Than Us. I've spent a fair amount of time promoting Our Final Invention.
  • I don't think we can get much of anywhere with a 1-page pamphlet, though. We tried a 4-page pamphlet once; it accomplished nothing.
In response to comment by lukeprog on MIRI strategy
Comment author: ColonelMustard 29 October 2013 12:44:57PM 1 point

Thanks, Luke. This is an informative reply, and it's great to hear you have a standard talk! Is it publicly available, and if so, where can I see it? Maybe MIRI should ask FOAFs to publicise it?

It's also great to hear that MIRI has tried one pamphlet. I agree that "this one pamphlet we tried didn't work" points in the direction of "no pamphlet MIRI can produce will accomplish much", but that proposition is far from certain. I'd still be interested in the general question: can MIRI reduce the chance of UFAI x-risk through pamphlets?

"Pamphlets...don't work for MIRI's mission. The inferential distance is too great, the ideas are too Far, the impact is too far away."

You may be right. But it is possible to convince intelligent non-rationalists to take UFAI x-risk seriously in less than an hour (I've tested this), and anything that can do this in a way that scales well would have a huge impact. What's the Value of Information on trying? You mention the Sequences and HPMoR (which I've sent to a number of people with the instruction "set aside what you're doing and read this"). I definitely agree that they filter nicely for "able to think", but they also require a huge time commitment from the reader, whereas a pamphlet or blog post would not.

Comment author: shminux 28 October 2013 05:01:27AM 4 points

You don't have enough karma to post yet. Consider making some quality comments first.

Comment author: ColonelMustard 28 October 2013 08:44:11AM 3 points

Thank you! One more question: how much karma do I need? I was under the impression one needed 2 to post to Discussion (20 to Main), but presumably this is not the case. Is there an up-to-date list?

Comment author: ColonelMustard 28 October 2013 04:20:24AM 3 points

Not sure where this goes: how can I submit an article to Discussion? I've written it and saved it as a draft, but I haven't figured out a way to post it.

Comment author: ColonelMustard 07 October 2013 02:32:06AM 4 points

I am thinking of writing a discussion thread to propose that MIRI make it a priority to create a video, pamphlet, or blog post, tailored to intelligent non-rationalists and with as little jargon as possible (e.g. no terms like "Kolmogorov complexity"), to explain the dangers of UFAI. Please upvote this comment if you think LW would be better with such a post, because I have zero karma.

Comment author: ColonelMustard 07 October 2013 02:27:20AM -1 points

Great damage is usually caused by those who are too scrupulous to do small harm.
