Steven, I'm a little surprised that the paper you reference convinces you of a high probability of imminent danger. I have read this paper several times, and would summarize its relevant points thusly:
1. We tend to anthropomorphise, so our intuitive ideas about how an AI would behave might be biased. In particular, assuming that an AI will be "friendly" because people are more or less friendly might be wrong.
2. Through self-improvement, AI might become intelligent enough to accomplish tasks much more quickly and effectively than we expect.
3. This super-effective AI would have the ability (perhaps just as a side effect of its goal attainment) to wipe out humanity. Because of the bias in (1), we do not give sufficient credibility to this possibility, when in fact it is the default scenario unless the AI is constructed very carefully to avoid it.
4. It might be possible to do that careful construction (that is, to create a Friendly AI) if we work hard at the task. It is not impossible.
The only arguments for the likelihood of imminence, despite little to no apparent progress toward a machine capable of acting intelligently in the world and rapidly rewriting its own source code, are:
A. A "loosely analogous historical surprise" -- the above-mentioned nuclear reaction analogy.
B. The observation that breakthroughs do not occur on predictable timeframes, so it could happen tomorrow.
C. We might already have sufficient prerequisites for the breakthrough to occur (computing power, programming productivity, etc.).
I find these points all reasonable enough and imagine that most people would agree. The problem is going from this set of "mights" and suggestive analogies to a probability of imminence. You can't expect to get much traction for something that might happen someday; you have to link from possibility to likelihood. That people make this leap without saying how they got there is why observers refer to the believers as a sort of religious cult. Perhaps the case is made somewhere, but I haven't seen it. I know that Yudkowsky and Hanson debated a closely related topic on Overcoming Bias at some length, but I found Eliezer's case completely unconvincing.
I just don't see it myself. "Seed AI" (as one example of a scenario sketch) was written almost a decade ago and contains many distinct requirements. As far as I can see, none of them has seen any meaningful progress in the meantime. If multiple or many breakthroughs are necessary, let's see one of them for starters. One might hypothesize that just one magic-bullet breakthrough is necessary, but that sounds more like a paranoid fantasy than a credible scientific hypothesis.
Now, I'm personally sympathetic to these ideas (check the SIAI donor page if you need proof), and if the lack of a case from possibility to likelihood leaves me cold, it shouldn't be surprising that society as a whole remains unconvinced.
Hanson's position was that something like a singularity will occur due to smarter-than-human cognition, but he differs from Eliezer by claiming that it will be a distributed intelligence analogous to the economy: trillions of smart human uploads and narrow AIs exchanging skills and subroutines.
He still ultimately supports the idea of a fast transition, based on historical transitions. I think Robin would say that something midway between two weeks and twenty years is reasonable. Ultimately, even if you think Hanson has the stronger case, you're still talking about a fast transition to superintelligence that we need to think about very carefully.
Michael Anissimov has put up a website called Terminator Salvation: Preventing Skynet, which will host a series of essays on the topic of human-friendly artificial intelligence. Three rather good essays are already up there, including an old classic by Eliezer. The association with a piece of fiction is probably unhelpful, but the publicity surrounding the new Terminator film is probably worth it.
What rational strategies can we employ to maximize the impact of such a site, or of publicity for serious issues in general? Most people who read this site will probably not do anything about it, or will find some reason to not take the content of these essays seriously. I say this because I have personally spoken to a lot of clever people about the creation of human-friendly artificial intelligence, and almost everyone finds some reason to not do anything about the problem, even if that reason is "oh, ok, that's interesting. Anyway, about my new car... ".
What is the reason underlying people's indifference to these issues? My personal suspicion is that most people make decisions in their lives by following what everyone else does, rather than by performing a genuine rational analysis.
Consider the rise in social acceptability of making small personal sacrifices and political decisions based on eco-friendliness and one's carbon footprint. Many people I know have become very enthusiastic about recycling used food containers and about unplugging appliances that use trivial amounts of power (for example, unused phone chargers and electrical equipment on standby). The real reason that people do these things is that they have become socially accepted factoids. Most people in this world, even in this country, lack the mental faculties and knowledge to understand and act upon an argument involving notions of per capita CO2 emissions; instead they respond, at least in my understanding, to the general climate of acceptable opinion, and to opinion formers such as the BBC news website, which has a whole section for "science and environment". Now, I don't want to single out environmentalism as the only issue where people form their opinions based upon what is socially acceptable to believe, or to claim that reducing our greenhouse gas emissions is not a worthy cause.
Another great example of socially acceptable factoids (though probably a less serious one) is the detox industry - see, for example, this Times article. I quote:
Anyone who takes a serious interest in changing the world would do well to understand the process whereby public opinion as a whole changes on some subject, and to attempt to influence that process in an optimal way. How strongly is public opinion correlated with scientific opinion, for example? Particular attention should be paid to the history of the environmentalist movement. See, for example, MacKay's Sustainable Energy Without the Hot Air for a great example of a rigorous quantitative analysis in support of various ways of balancing our energy supply and demand, and, for a great take on the power of socially accepted factoids, Phone chargers - the Truth.
So I submit to the wisdom of the Less Wrong groupmind: what can we do to efficiently change the opinion of millions of people on important issues such as Friendly AI? Is a site such as the one linked above going to have the intended effect, or is it going to fall upon rationally-deaf ears? What practical advice could we give to Michael and his contributors that would maximize the impact of the site? What other interventions might be a better use of his time?
Edit: Thanks to those who made constructive suggestions for this post. It has been revised - R