Your environmentalism examples raise another issue. What good is it convincing people of the importance of friendly AI if they respond with similarly ineffective actions? If widespread acceptance of the importance of the environment has led primarily to ineffective behaviours like unplugging phone chargers, washing and sorting containers for recycling, and other activities of dubious benefit, while at the same time achieving little with regard to reducing CO2 emissions or slowing the destruction of rainforests, then why should we expect widespread accep...
When Eliezer writes about the "miracle" of evolved morality, he reminds me of that bit from H.M.S. Pinafore where the singers heap praise on Ralph Rackstraw for being born an Englishman "despite all the temptations to belong to other nations". We can imagine that they might have sung quite a similar song in French.
One thing that might help change people's opinion about friendly AI is to make some progress on it. For example, if Eliezer has had any interesting ideas about how to do it during his last five years of thinking about the problem, it could be helpful to communicate them.
A case that is credible to a large number of people needs to be made that this is a high-probability near-term problem. Without that it's just a scary sci-fi movie, and frankly there are scarier sci-fi movie concepts out there (e.g. bioterror). Making an analogy with a nuclear bomb is simply not...
A couple of things come to mind. The first is that we have to figure out how much we value attention from people who are not initially rational about the issue (as this determines how much of the Dark Arts to use). I can see the extra publicity as helping, but if it gets the cause associated with "nut jobs", then it may pass under the radar of the rational folk and do more harm than good.
The other thing that comes to mind is that this looks like an example of people using "far" reasoning. Learning how to get people to analyze the situation in "near...
If "AI will be dangerous to the world" became a socially accepted factoid you would get it spilling over in all sorts of unintended fashions. It might not be socially acceptable to use Wolfram Alpha as it is too AI-ish,
The simplest way to change public opinion is manually. Skynet seems like an adequate solution to me.
The biggest problem with the movies, besides the inconsistencies as to whether causality is changeable or not, is why Skynet bothers dealing with the humans once it's broken their ability to prevent it from launching itself into space. Sending a single self-replicating seed factory to the Moon is what a reasonable AI would do.
The Terminator movies exploit the primal human fear of being exterminated by a rival tribe, putting AI in the role once filled by extraterrestrials: outsiders with great power who want to destroy all of 'us'. The pattern is tedious and predictable.
As William has pointed out, AI running amok is already a standard trope. In fact, Asimov invented his three laws way back when as a way of getting past the cliche, and writing stories where it wasn't a given that the machine would turn on its creator. But the cliche is still alive and well. Asimov himself had the robots taking over in the end, in "That Thou Art Mindful of Him" and the prequels to the "Foundation" trilogy.
The people the world needs to take FAI seriously are the people working on AI. That's what, a few thousand at most? And surely they have all heard of the issue by now. What is their view on it?
Where people's preferences and decisions are concerned, there are trusted tools and trusted analysis methods. People use them because they know, indirectly, that their output will be better than what their intuitive gut feeling produces. And thus people use complicated engineering practices to build bridges, instead of just drawing a concept and proceeding to fill in the picture with building material.
But these tools are rarely used to refactor people's minds. People may accept conclusions chosen by experts, and allow them to install policies, as they know this is a p...
My sense is that most people aren't concerned about Skynet for the same reason that they're not concerned about robots, zombies, pirates, faeries, aliens, dragons, and ninjas. (Homework: which of those are things to worry about, and why/why not?)
Also, this article could do without the rant against environmentalism and your roommate. Examples are useful to understanding one's main point, but this article seems to be overwhelmed by its sole example.
You could probably find a clearer title. Naming an article after an example doesn't seem like a good idea to me. Probably the topic changed while you were writing and you didn't notice. (I claim that it is a coherent essay on public opinion.)
Yes, it is important to know how public opinion changes. But before you try to influence it, you should have a good idea of what you're trying to accomplish and whether it's possible. Recycling and unplugging gadgets are daily activities. That continuity is important to making them popular. Is it possible to make insulating houses fashionable?
"The Internet" is probably an interesting case study. It has grown from a very small niche product into a "fundamental right" in a relatively short time. One of the things that probably helped this shift is showing people what the internet could do for them - it became useful. This is understandably a difficult point on which to sell FAI.
Now that that surface analogy is over, how about the teleological analogy? In a way, environmentalism assumes the same mantle as FAI - "do it for the children". Environmentalism has plenty of...
One central problem is that people are constantly deluged with information about incipient crises. The Typical Person cannot be expected to understand the difference in risk levels between UFAI, bioterror, thermonuclear war, and global warming, and this is not even a disparagement of the Typical Person. These risks are simply impossible to estimate.
But how can we deal with this multitude of potential disasters? Each disaster has some low probability of occurring, but because there are so many of them (swine flu, nuclear EMP attacks, grey goo, comp...
I agree that an Unfriendly AI could be a complete disaster for the human race. However, I really don't expect to see an AI that goes FOOM during my lifetime. To be frank, I think I'm far more likely to be killed by a civilization-threatening natural disaster, such as an asteroid impact, supervolcano eruption, or megatsunami, than by an Unfriendly AI. As far as I'm concerned, worrying about Unfriendly AI today is like worrying about global warming in 1862, shortly after people began producing fuel from petroleum. Yes, it's a real problem that will have to be solved - but the people alive today aren't going to be the ones that solve it.
Why is this post being voted negative? It's an important problem for plenty of causes of interest to many rationalists, and is well worth discussing here.
Michael Anissimov has put up a website called Terminator Salvation: Preventing Skynet, which will host a series of essays on the topic of human-friendly artificial intelligence. Three rather good essays are already up there, including an old classic by Eliezer. The association with a piece of fiction is probably unhelpful, but the publicity surrounding the new Terminator film is probably worth it.
What rational strategies can we employ to maximize the impact of such a site, or of publicity for serious issues in general? Most people who read this site will probably not do anything about it, or will find some reason to not take the content of these essays seriously. I say this because I have personally spoken to a lot of clever people about the creation of human-friendly artificial intelligence, and almost everyone finds some reason to not do anything about the problem, even if that reason is "oh, ok, that's interesting. Anyway, about my new car... ".
What is the reason underlying people's indifference to these issues? My personal suspicion is that most people make decisions in their lives by following what everyone else does, rather than by performing a genuine rational analysis.
Consider the rise in the social acceptability of making small personal sacrifices and political decisions based on eco-friendliness and one's carbon footprint. Many people I know have become very enthusiastic about recycling used food containers and unplugging appliances that use trivial amounts of power (for example, unused phone chargers and electrical equipment on standby). The real reason that people do these things is that they have become socially accepted factoids. Most people in this world, even in this country, lack the mental faculties and knowledge to understand and act upon an argument involving notions of per capita CO2 emissions; instead they respond, at least in my understanding, to the general climate of acceptable opinion, and to opinion formers such as the BBC news website, which has a whole section for "science and environment". Now, I don't want to single out environmentalism as the only issue where people form their opinions based upon what is socially acceptable to believe, or to claim that reducing our greenhouse gas emissions is not a worthy cause.
Another great example of socially accepted factoids (though probably a less serious one) is the detox industry - see, for example, this Times article. I quote:
Anyone who takes a serious interest in changing the world would do well to understand the process whereby public opinion as a whole changes on some subject, and attempt to influence that process in an optimal way. How strongly is public opinion correlated with scientific opinion, for example? Particular attention should be paid to the history of the environmentalist movement. See, for example, MacKay's "Sustainable Energy - Without the Hot Air" for a great example of a rigorous quantitative analysis in support of various ways of balancing our energy supply and demand, and, for a great take on the power of socially accepted factoids, see "Phone chargers - the Truth".
So I submit to the wisdom of the Less Wrong groupmind - what can we do to efficiently change the opinion of millions of people on important issues such as friendly AI? Is a site such as the one linked above going to have the intended effect, or is it going to fall upon rationally-deaf ears? What practical advice could we give to Michael and his contributors that would maximize the impact of the site? What other interventions might be a better use of his time?
Edit: Thanks to those who made constructive suggestions for this post. It has been revised - R