- Pamphlets work for wells in Africa. They don't work for MIRI's mission. The inferential distance is too great, the ideas are too Far, the impact is too far away.
- Eliezer spent SIAI's early years appealing directly to people about AI. Some good people found him, but the audience was being filtered for "interest in future technology" rather than "able to think". So when Eliezer made basic arguments about e.g. the orthogonality thesis or basic AI drives, the responses he got were essentially random (except from the few good people). Then Eliezer wrote The Sequences and HPMoR, and now the filter is "able to think", or at least "interest in improving one's thinking", and these people, in our experience, are much more likely to do useful things when we present the case for EA, for x-risk reduction, for FAI research, etc.
- Still, we keep trying direct mission appeals, to some extent. I've given my standard talk, currently titled "Effective Altruism and Machine Intelligence," at Quixey, Facebook, and Heroku. This talk explains effective altruism, astronomical stakes, the x-risk landscape, and the challenge of FAI, all in 25 minutes. I don't know yet how much good this talk will do. There's Facing the Intelligence Explosion and the forthcoming Smarter Than Us. I've spent a fair amount of time promoting Our Final Invention.
- I don't think we can get much of anywhere with a 1-page pamphlet, though. We tried a 4-page pamphlet once; it accomplished nothing.
Thanks, Luke. This is an informative reply, and it's great to hear you have a standard talk! Is it publicly available, and where can I see it if so? Maybe MIRI should ask FOAFs to publicise it?
It's also great to hear that MIRI has tried a pamphlet. I agree that "this one pamphlet we tried didn't work" is some evidence that "no pamphlet MIRI can produce will accomplish much", but that conclusion is far from certain. I'd still be interested in the general question: can MIRI reduce UFAI x-risk through pamphlets?
Pamphlets...don't work for MIRI's mission. The inferential distance is too great, the ideas are too Far, the impact is too far away.
You may be right. But it is possible to convince intelligent non-rationalists to take UFAI x-risk seriously in less than an hour (I've tested this), and anything that can replicate that process in a way that scales would have a huge impact. What's the Value of Information on trying? You mention the Sequences and HPMoR (which I've sent to a number of people with the instruction "set aside what you're doing and read this"). I definitely agree that they filter nicely for "able to think". But they also require a huge time commitment from the reader, whereas a pamphlet or blog post would not.
MIRI strategy
Summary: I do not understand why MIRI hasn't produced a non-technical document (pamphlet, blog post, or video) to persuade people that UFAI is a serious concern. Creating and distributing such a document should be MIRI's top priority.
If you want to make sure the first AGI is FAI, there are two broad approaches: 1) be the first to create an AGI, and ensure it is FAI; or 2) persuade people, in large numbers, that UFAI is a legitimate concern. Ideally the concern would go mainstream, so that nobody falls into the Eliezer-circa-1999 trap of "I'm going to build an AI and see how it works".
1) is tough for an organisation of MIRI’s size. 2) is a realistic goal. It benefits from:
Funding: MIRI’s funding almost certainly goes up if more people are concerned with AI x-risk. Ditto FHI.
Scalability: If MIRI has a new math finding, that's one new theorem. If MIRI creates a convincing demonstration that we have to worry about AI, spreading this message to a million people is plausible.
Partial goal completion: a math breakthrough that reduces the time to AGI might be counter-productive, whereas persuading even one additional person of the dangers of UFAI raises the sanity waterline.
Task difficulty: creating an AI is hard. Persuading people that “UFAI is a possible extinction risk. Take it seriously” is nothing like as difficult. (I was persuaded of this in about 20 minutes of conversation.)
One possible response is “it’s not possible to persuade people without math backgrounds, training in rationality, engineering degrees, etc”. To which I reply: what’s the data supporting that hypothesis? How much effort has MIRI expended in trying to explain to intelligent non-LW readers what they’re doing and why they’re doing it? And what were the results?
Another possible response is "We have done this, and it's available on our website. Read the Five Theses". To which I reply: is this in the ideal form to persuade a McKinsey consultant who's never read Less Wrong? If an entrepreneur with a net worth of $20m but no math background wants to donate to the most efficient charity he can find, would he be convinced? What efforts has MIRI made to test the hypothesis that the Five Theses, or Evidence and Import, or any other document, has been tailored to maximise the chance of convincing others?
(Further: if MIRI _does_ think this is as persuasive as it can possibly be, why doesn't it shift focus to getting the Five Theses read by as many people as possible?)
Here’s one way to go about this. Write up an explanation, in clear English, of the concerns MIRI has and how it is trying to allay them. (The Five Theses are available in Up-Goer Five form; writing them in language readable by the average college graduate should be a cinch compared to that.) Send it out to a few members of the target market and find the points that could be expanded, clarified, or made more convincing. Maybe provide two versions and see which one gets the more positive response (a quick way to check this is sketched below). Continue this process until the document has been through a series of iterations and shows no signs of further improvement. Then shift focus to getting that link read by as many people as possible: ask all of MIRI’s donors, all LW readers, HPMoR subscribers, friends and family, etc., to forward that one document to their friends.
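For the "provide two versions" step, here's a minimal sketch (in Python) of how one might check whether the difference in response rates between two versions is more than noise. The function name and all the counts are purely illustrative, not anything MIRI has actually run:

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for whether two response rates differ.

    Assumes the samples are large enough for the normal
    approximation to be reasonable. Returns (z, p_value).
    """
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled response rate under the null hypothesis (no difference)
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical data: version A persuaded 18 of 100 readers,
# version B persuaded 31 of 100.
z, p = two_proportion_z_test(18, 100, 31, 100)
print(f"z = {z:.2f}, p = {p:.3f}")  # small p => unlikely to be noise
```

With numbers like these the test flags version B as genuinely better; with only a handful of readers per version, it correctly refuses to, which is the point of running the check before another iteration.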
You don't have enough karma to post yet. Consider making some quality comments first.
Thank you! One more: how much karma do I need? I was under the impression one needed 2 to post to Discussion (20 to Main), but presumably this is not the case. Is there an up-to-date list?
Not sure where this goes: how can I submit an article to discussion? I've written it and saved it as a draft, but I haven't figured out a way to post it.
I am thinking of writing a discussion thread proposing that MIRI make it a priority to create a (video/pamphlet/blog post), tailored to intelligent non-rationalists and with as little jargon as possible (e.g. no terms like Kolmogorov complexity), to explain the dangers of UFAI. Please upvote this comment if you think LW is better with such a post, because I have zero karma.
I would hesitate to use failure during "SIAI's early years" to draw conclusions about the difficulty of the task. First, the organization seems far more capable now than it was at the time. Second, the landscape has shifted dramatically even in the last few years: limited AI is continuing to expand, and with it discussion of the potential impacts (most of it ill-informed, but still).
While I share your skepticism about pamphlets as such, I do tend to think that MIRI has a greater chance of shifting the odds away from UFAI through persuasion and education than by trying to build an FAI or by doing mathematical research.
I agree, and would also add that "Eliezer failed in 2001 to convince many people" does not imply "Eliezer in 2013 is incapable of persuading people". From his writings, I understand he has changed his views considerably over the last dozen years.