Robin criticizes Eliezer for not having written up his arguments about the Singularity in a standard style and submitted them for publication. Others make the same complaint: the arguments are scattered across such a huge mountain of posts that it's impossible for most outsiders to seriously evaluate them. This is a problem both for those who'd want to critique the concept and for those who tentatively agree and would want to learn more about it.
Since it appears (do correct me if I'm wrong!) that Eliezer doesn't currently consider it worth the time and effort to do this, why not enlist the LW community in summarizing his arguments as best we can and submitting them somewhere once we're done? Minds and Machines will be having a special issue on transhumanism, cognitive enhancement and AI, with a deadline for submission in January; that seems like a good opportunity for the paper. Their call for papers asks for submissions of around 4,000 to 12,000 words.
The paper should probably
- Briefly mention some of the previous work about AI being near enough to be worth consideration (Kurzweil, maybe Bostrom's paper on the subject, etc.), but not dwell on it; this is a paper on the consequences of AI.
- Devote maybe a little less than half of its actual content to the issue of FOOM, providing arguments and references to build the case for a hard takeoff.
- Devote the second half to discussing the question of FAI, with references to e.g. Joshua Greene's thesis and other relevant sources for establishing this argument. (Carl Shulman says SIAI is already working on a separate paper on this, so it'd be better for us to concentrate merely on the FOOM aspect.)
- Build on the content of Eliezer's various posts, taking their primary arguments and making them stronger by reference to various peer-reviewed work.
- Include as authors everyone who made major contributions to it and wants to be mentioned; certainly list Eliezer (again, assuming he doesn't object) as the lead author, since this is his work we're seeking to convert into a more accessible form.
I have created a wiki page for the draft version of the paper. Anyone's free to edit.
I'm pretty good at beating my computer at chess, even though I'm an awful player. I challenge it, and it runs out of time - apparently it can't tell that it's in a competition, or it can't press the button on the clock.
This might sound like a facetious answer, but I'm serious. One way to defeat something that is stronger than you in a limited domain is to strive to shift the domain to one where you are strong. Operating with objects designed for humans (like physical chess boards and chess clocks) is a domain that current computers are very weak at.
There are other techniques too. Consider disease-fighting. The microbes we fight are vastly more experienced (in number of generations evolved), and the number of different strategies they try is enormous. How is it that we manage to (sometimes) defeat specific diseases? We strive to hamper the enemy's communication and learning capabilities with quarantine techniques, and steal or copy the nanotechnology (antibiotics) necessary to defeat it. These strategies might well be our best techniques against an unFriendly manmade nanotechnological infection, if one broke out tomorrow.
Bruce Schneier beats people over the head with the notion: DON'T DEFEND AGAINST MOVIE PLOTS! The "AI takes over the world" plot is influencing a lot of people's thinking. An unfriendly AGI, despite its potential power, may well have huge blind spots; mind design space is big!
A superintelligence can reasonably be expected to proactively track down its "blind spots" and eradicate them - unless its "blind spots" are very carefully engineered.