Summary: I do not understand why MIRI hasn't produced a non-technical document (pamphlet, blog post, or video) to persuade people that UFAI is a serious concern. Creating and distributing such a document should be MIRI's top priority.
If you want to make sure the first AGI is FAI, one way to do so is to be the first to create an AGI and ensure it is Friendly. Another is to persuade large numbers of people that UFAI is a legitimate concern. Ideally this would become a widely shared concern, so that nobody falls into the Eliezer-1999-ish trap of “I’m going to build an AI and see how it works”.
The first is tough for an organisation of MIRI’s size. The second is a realistic goal. It benefits from:
Funding: MIRI’s funding almost certainly goes up if more people are concerned with AI x-risk. Ditto FHI.
Scalability: If MIRI has a new math finding, that's one new theorem. If MIRI creates a convincing demonstration that we have to worry about AI, spreading this message to a million people is plausible.
Partial goal completion: a math breakthrough that reduces the time to AI might be counter-productive, whereas each additional person persuaded of the dangers of UFAI raises the sanity waterline.
Task difficulty: creating an AI is hard. Persuading people that “UFAI is a possible extinction risk. Take it seriously” is nothing like as difficult. (I was persuaded of this in about 20 minutes of conversation.)
One possible response is “it’s not possible to persuade people without math backgrounds, training in rationality, engineering degrees, etc”. To which I reply: what’s the data supporting that hypothesis? How much effort has MIRI expended in trying to explain to intelligent non-LW readers what they’re doing and why they’re doing it? And what were the results?
Another possible response is “We have done this, and it's available on our website. Read the Five Theses”. To which I reply: Is this in the ideal form to persuade a McKinsey consultant who’s never read Less Wrong? If an entrepreneur with a net worth of $20m but no math background wants to donate to the most efficient charity he can find, would he be convinced? What efforts has MIRI made to test the hypothesis that the Five Theses, or Evidence and Import, or any other document, has been tailored to optimise the chance of convincing others?
(Further – if MIRI _does_ think this is as persuasive as it can possibly be, why doesn't it shift focus to get the Five Theses read by as many people as possible?)
Here’s one way to go about accomplishing this. Write up an explanation of the concerns MIRI has and how it is trying to allay them, and do so in clear English. (The Five Theses are available in Up-Goer Five form. Writing them in language readable by the average college graduate should be a cinch compared to that.) Send it out to a few of the target market and find the points that could be expanded, clarified, or made more convincing. Maybe provide two versions and see which one gets the most positive response. Continue this process until the document has been through a series of iterations and shows no signs of improvement. Then shift focus to getting that link read by as many people as possible. Ask all of MIRI’s donors, all LW readers, HPMOR subscribers, friends and family, etc., to forward that one document to their friends.
Really, it's not. Tons of people discuss politics without getting worked up about it. It's only people who consider themselves highly intelligent that get mind-killed by it; the tendency to dismiss your opponent out of hand as unintelligent isn't that common elsewhere. People, by and large, are willing to seriously debate political issues. "Politics is the mind-killer" is the result of some pretty severe selection bias.
Even ignoring that, you've only stated that we should do our best to ensure it does not become a hot political issue. Widespread attention to the idea is still useful; if we can't get the concept to penetrate the parts of academia where AI is likely to be developed, we're not yet mitigating the threat. A thousand angry letters demanding that this research stop at once, or that it address the issue of friendliness, aren't easy to ignore, no matter how bad you think the arguments for uFAI are.
You're not the only one expressing hesitation at the idea of widespread acceptance of uFAI risk, but unless you can provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain mail of our own volition. Your hesitation is understandable, but we need to do something to mitigate the risk, or the risk just remains unmitigated and all we did was talk about it. AI researchers who've argued with Yudkowsky before and failed to be convinced might begrudge his argument gaining widespread attention, but if that attention pressures them to properly address his arguments, then it has legitimately helped.
There's a reason it's general advice not to talk about religion, sex, or politics, and it's not that the average person does well in discussing politics.
Dismissing your opponent out of hand as unintelligent isn't the only failure mode of political mind-killing. I don't even think...