Today's post, What Core Argument?, was originally published on December 10, 2008. A summary:
The argument in favor of a strong foom just isn't well supported enough to suggest that such a dramatic process is likely.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Mechanics of Disagreement, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
The way I think about it, you can set lower bounds on the abilities of an AI by thinking of it as an economic agent. Now, at some point that abstraction becomes pretty meaningless, but in the early days a powerful, bootstrapping optimization agent could still incorporate, hire, or persuade people to do things for it, make rapid innovations in various fields, have various kinds of machines built, and generally wind up running the place fairly quickly, even if bootstrapping versatile nanomachines from current technology turns out to be time-consuming for a superintelligence. I would imagine nanotech is where it would go in the longer run, but that might take a while; I don't know enough about the subject to say. Even without strong Drexlerian nanotechnology, though, it's still possible to get an awful lot done.
With that much I totally agree.