Today's post, What Core Argument?, was originally published on December 10, 2008. A summary:
The argument in favor of a strong foom just isn't well supported enough to suggest that such a dramatic process is likely.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Mechanics of Disagreement, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
What about manipulators? I haven't, as far as I know, seen much analysis of manipulation capabilities (and counter-manipulation) on Less Wrong. Mostly there is the AI-box issue (a really freaking big deal, I agree), and beyond that the prevailing view here seems to be that the AI will quickly invent super-nanotech, will be impossible to impede, and will become godlike very quickly. I've seen some arguments for this, but never a really good analysis, and it's the remaining reason I am a bit skeptical of the power of FOOM.
The way I think about it, you can set lower bounds on the abilities of an AI by thinking of it as an economic agent. Now, at some point that abstraction becomes pretty meaningless, but in the early days a powerful, bootstrapping optimization agent could still incorporate, hire or persuade people to do things for it, make rapid innovations in various fields, have machines of various types made, and generally wind up running the place fairly quickly, even if the problem of bootstrapping versatile nanomachines from current technology turns out to be time-consuming.