Robin criticizes Eliezer for not having written up his arguments about the Singularity in a standard style and submitted them for publication. Others make the same complaint: the arguments involved are spread across such a huge mountain of posts that it's impossible for most outsiders to evaluate them seriously. This is a problem both for those who'd want to critique the concept and for those who tentatively agree and want to learn more about it.
Since it appears (do correct me if I'm wrong!) that Eliezer doesn't currently consider it worth the time and effort to do this himself, why not enlist the LW community in summarizing his arguments as best we can and submitting the result somewhere once we're done? Minds and Machines will be having a special issue on transhumanism, cognitive enhancement and AI, with a submission deadline in January; that seems like a good opportunity for the paper. Their call for papers asks for submissions of around 4,000 to 12,000 words.
The paper should probably:
- Briefly mention some of the previous work arguing that AI is near enough to be worth serious consideration (Kurzweil, maybe Bostrom's paper on the subject, etc.), but not dwell on it; this is a paper on the consequences of AI.
- Devote maybe a little less than half of its actual content to the issue of FOOM, providing arguments and references that build the case for a hard takeoff.
- Devote the second half to discussing the question of FAI, with references to e.g. Joshua Greene's thesis and other relevant sources for establishing this argument. (Carl Shulman says SIAI is already working on a separate paper on this, so it'd be better for us to concentrate merely on the FOOM aspect.)
- Build on the content of Eliezer's various posts, taking their primary arguments and making them stronger by reference to various peer-reviewed work.
- Include as authors everyone who made major contributions and wants to be mentioned; certainly make Eliezer the lead author (again, assuming he doesn't object), since this is his work we're seeking to convert into a more accessible form.
I have created a wiki page for the draft version of the paper. Anyone's free to edit.
I think: "2) a point in time when prediction is no longer possible (a.k.a., "Predictive Horizon")" ...is equally nonsensical. Eliezer seems to agree:
"The Predictive Horizon never made much sense to me"
...and so does Nick, quoted later in the essay:
"I think it is unfortunate that some people have made Unpredictability a defining feature of "the singularity". It really does tend to create a mental block."
Robin Hanson thinks that the unpredictability idea is silly as well.
Yet aren't these two the main justifications for using the "singularity" term in the first place?
If the rate of progress is not about to shoot off to infinity, and there isn't going to be an event-horizon-like threshold at some future point in time, then it seems to me that two of the major justifications for using the "singularity" term have gone down the toilet.
To me, following the agricultural/industrial terminology, it looks as though there will be an intelligence revolution, and then probably a molecular nanotechnology/robotics revolution not long after.
Squishing those two concepts together into "singularity" paste offends my sense of how historical events should be named. I think it is confusing, misleading, and pseudo-scientific.
Please quit with the ridiculous singularity terminology!
http://alife.co.uk/essays/the_singularity_is_nonsense/