Pamphlets work for wells in Africa. They don't work for MIRI's mission. The inferential distance is too great, the ideas are too Far, the impact is too far away.
Didn't you get convinced about AI risk by reading a short paragraph of I. J. Good?
I do not understand why MIRI hasn’t produced a non-technical document (pamphlet/blog post/video) to persuade people that UFAI is a serious concern.
It would be far more useful if MIRI provided technical argumentation for its Scary Idea. There are a lot of AGI researchers, myself included, who remain entirely unconvinced. AGI researchers - the people who would actually create a UFAI - are paying attention and are not sufficiently convinced to change their behavior. Shouldn't that be of more concern than a non-technical audience?
A decade of effort on EY's part has taken the idea of friendliness mainstream. It is now accepted as fact by most AGI researchers that intelligence and morality are orthogonal concepts, despite contrary intuition, and that even with the best of intentions a powerful, self-modifying AGI could be a dangerous thing. The difference in belief lies in the probability assigned to that "could".
Has the community responded? Yes. Quite a few mainstream AGI researchers have proposed architectures for friendly AGI, or realistic boxing/oracle setups, or a friendliness analysis of their own AGI design. To my knowledge MIRI has yet to engage with any of these proposals. Why?
I want a believable answer to that before a non-technical pamphlet or video, please.
So here are some more problems I have:
UFAI isn't necessarily about deception. You also have to worry that the AI will perform its assigned task in a way inimical to human values - one that, through sheer ingenuity, jumps through the constraints intended to prevent this. Suppose the AI is designed to do X, something that human beings want, but that humans also care about Y and Z, and suppose the AI isn't designed to intrinsically respect Y and Z. Instead there are constraints C that it knows about, whose violation is also monitored by human beings, and these constraints are supposed to protect values Y and Z from violation. You have to worry that the AI will achieve X in a way which satisfies C but still violates Y and Z.
Auditing has the potential to slow down the AI - the AI may be paused regularly for forensic analysis and/or it may go slow in order to satisfy the safety constraints. Audited AI projects may be overtaken by others with a different methodology.
You want humans to "take us through the singularity". But we aren't through the singularity until superhuman intelligence exists. Is your plan, therefore, to suppress development of superhuman AI, until there...
Overexposure of an idea can be harmful as well. Look at how Kurzweil promoted his idea of the singularity. While many of his ideas (such as the intelligence explosion) are solid, to a large extent people no longer take Kurzweil seriously.
It would be useful to debate why Kurzweil isn't taken seriously anymore. Is it because of the fraction of his predictions that were wrong? Or is it simply because of the way he's presented them? Answering these questions would help us avoid ending up the way Kurzweil has.
At least he's been cited: Google Scholar reports 1600+ citations for The Singularity is Near as well as for The Age of Spiritual Machines, his earlier book on the same theme.
Also, if we're talking about him in general, and not just his Singularity-related writings, Wikipedia reports that:
Kurzweil received the 1999 National Medal of Technology and Innovation, America's highest honor in technology, from President Clinton in a White House ceremony. He was the recipient of the $500,000 Lemelson-MIT Prize for 2001,[6] the world's largest for innovation. And in 2002 he was inducted into the National Inventors Hall of Fame, established by the U.S. Patent Office. He has received nineteen honorary doctorates, and honors from three U.S. presidents. Kurzweil has been described as a "restless genius"[7] by The Wall Street Journal and "the ultimate thinking machine"[8] by Forbes. PBS included Kurzweil as one of 16 "revolutionaries who made America"[9] along with other inventors of the past two centuries. Inc. magazine ranked him #8 among the "most fascinating" entrepreneurs in the United States and called him "Edison's rightful heir".[10]
Convincing people in Greenpeace that a UFAI presents a risk they should care about has its own dangers. There's a risk of associating concern about UFAI with luddites.
If you get a broad public...
Nastier issue: the harder argument is convincing people that UFAI is an avoidable risk. If you can't convince people they have a realistic chance (i.e. one they would gamble on, given the possible benefits of FAI) of winning on this issue, then it doesn't matter how informed they are.
See: Juergen Schmidhuber's interview on this very website, where he basically says, "We're damn near AI in my lab, and yes, it is a rational optimization process," followed by, "We see no way to prevent the paper-clipping of humanity whatsoever, so we stopped giving a damn and just focus on doing our research."
This post makes me wonder if the relevant information could be compressed into a series of self-contained videos along the lines of MinutePhysics. So far as I can tell, most people find video more accessible. (I don't, but I'm an outlier, like most here.)
I'm going to guess it's impossible, but I'm not sure if it's Shut Up and Do the Impossible impossible or Just Lose Hope Already impossible.
HPMOR could end with Harry destroying the world through a UFAI. The last chapters have already hinted at Harry destroying the world.
Strategically that seems to be the best choice. HPMOR is more viral than any technical document, and there's already been a lot of effort invested in getting people to read it.
People bond with the characters. Ending the book with "now everyone is dead because an AGI went FOOM" lets people take that scenario seriously, and that's exactly the right time to tell them: "Hey, this scenario could also happen in our world, so let's do something to prevent it from happening."
Really, it's not. Tons of people discuss politics without getting their briefs in a knot about it. It's mostly people who consider themselves highly intelligent that get mind-killed by it. The tendency to dismiss your opponent out of hand as unintelligent isn't that common elsewhere.
There's a reason why it's general advice not to talk about religion, sex, and politics. It's not because the average person does well at discussing politics.
Dismissing your opponent out of hand as unintelligent isn't the only failure mode of the politics mindkill. I don't even think it's the most important one.
You're not the only one expressing hesitation at the idea of widespread acceptance of UFAI risk, but unless you can provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain letter of our own volition.
How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data?
Take two important environmental challenges and look at Obama's first term. One is limiting CO2 emissions. The other is limiting mercury pollution.
The EPA under Obama was very effective at limiting mercury pollution but not at limiting CO2 emissions.
CO2 emissions are a very politically charged issue with a lot of mindkill on both sides, while mercury pollution isn't. The people who pushed for mercury pollution regulation won, and not because they wrote a lot of letters.
Your hesitation is understandable, but we need to do something to mitigate the risk here, or the risk just remains unmitigated and all we did was talk about it.
If you want to do something, you can earn to give and donate the money to MIRI.
People researching AI who've argued with Yudkowsky before and failed to be convinced might begrudge that Yudkowsky's argument has gained widespread attention, but if it pressures them to properly address Yudkowsky's arguments, then it has legitimately helped.
You don't get points for pressuring people to address arguments. That doesn't prevent a UFAI from killing you.
UFAI is an important problem but we probably don't have to solve it in the next 5 years. We do have some time to do things right.
No, NSA spying isn't a chain letter topic that is likely to succeed. A strong AI chain letter that makes itself sound like it's just against NSA spying doesn't seem like an effective approach. The point of a chain letter about strong AI is that all such projects are a danger. If people come to the conclusion that the NSA is likely to develop an AI while being aware of the dang...
Summary: I do not understand why MIRI hasn’t produced a non-technical (pamphlet/blog post/video) to persuade people that UFAI is a serious concern. Creating and distributing this document should be MIRI’s top priority.
If you want to make sure the first AGI is FAI, one way is 1) to be the first to create an AGI, and ensure it is FAI. Another is 2) to persuade people, in large numbers, that UFAI is a legitimate concern. Ideally this would become a mainstream concern, so that nobody runs into the Eliezer-1999-ish trap of "I'm going to build an AI and see how it works".
1) is tough for an organisation of MIRI's size. 2) is a realistic goal, and it benefits from:
Funding: MIRI’s funding almost certainly goes up if more people are concerned with AI x-risk. Ditto FHI.
Scalability: If MIRI has a new math finding, that's one new theorem. If MIRI creates a convincing demonstration that we have to worry about AI, spreading this message to a million people is plausible.
Partial goal completion: making a math breakthrough that reduces the time to AI might be counter-productive. Persuading an additional person of the dangers of UFAI raises the sanity waterline.
Task difficulty: creating an AI is hard. Persuading people that “UFAI is a possible extinction risk. Take it seriously” is nothing like as difficult. (I was persuaded of this in about 20 minutes of conversation.)
One possible response is “it’s not possible to persuade people without math backgrounds, training in rationality, engineering degrees, etc”. To which I reply: what’s the data supporting that hypothesis? How much effort has MIRI expended in trying to explain to intelligent non-LW readers what they’re doing and why they’re doing it? And what were the results?
Another possible response is "We have done this, and it's available on our website. Read the Five Theses." To which I reply: Is this in the ideal form to persuade a McKinsey consultant who's never read Less Wrong? If an entrepreneur with a net worth of $20m but no math background wants to donate to the most efficient charity he finds, would he be convinced? What efforts has MIRI made to test the hypothesis that the Five Theses, or Evidence and Import, or any other document, has been tailored to optimise the chance of convincing others?
(Further – if MIRI _does_ think this is as persuasive as it can possibly be, why doesn't it shift focus to get the Five Theses read by as many people as possible?)
Here’s one way to go about accomplishing this. Write up an explanation of the concerns MIRI has and how it is trying to allay them, and do so in clear English. (The Five Theses are already available in Up-Goer Five form. Writing them in language readable by the average college graduate should be a cinch compared to that.) Send it out to a few members of the target market and find the points that could be expanded, clarified, or made more convincing. Maybe provide two versions and see which one gets the most positive response. Continue this process until the document has been through a series of iterations and shows no signs of improvement. Then shift focus to getting that link read by as many people as possible. Ask all of MIRI’s donors, all LW readers, HPMOR subscribers, friends and family etc. to forward that one document to their friends.