Today's post, What Core Argument? was originally published on December 10, 2008. A summary:
The argument in favor of a strong foom just isn't well supported enough to suggest that such a dramatic process is likely.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Mechanics of Disagreement, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Yeah. I just don't know anymore. I still think that a 'hardware is easy' bias exists in the Less Wrong / FAI cluster (especially as it relates to manipulators such as superpowerful molecular nanotech construction swarms or whatever), but it may be much smaller than I thought, and my estimate of the probability of a singularity (or at least the development of super-AI) in the midfuture may need to rise into the double digits.
Do people here expect AI to be heavily parallel in nature? I guess an AI making money to fund its own computing power makes sense, although that is going to be (for a time) dependent on human operators. Until it argues itself out of the box, at least.
Much of intelligent behavior consists of search space problems, which tend to parallelize well. At the bare minimum, it ought to be able to run more copies of itself as its access to hardware increases, which is still pretty scary. I do suspect that there's a logarithmic component to intelligence, as at some point you've already sampled the future outcome space thoroughly enough that most of the new bits of prediction you're getting back are redundant -- but the point of diminishing returns could be very, very high.
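To make the "search problems parallelize well" point concrete, here is a minimal sketch (not from the original comment) of a brute-force search split across worker processes. The objective function and the numbers are made up for illustration; the point is only that independently scorable candidates divide almost perfectly across workers, so adding hardware adds search throughput nearly linearly.

```python
# Toy illustration: embarrassingly parallel search over a candidate space.
# Each chunk of the space is searched by a separate worker process; the
# results are then merged. A real search would score plans, proofs, moves,
# etc. instead of this stand-in objective.
from multiprocessing import Pool

def score(candidate):
    """Stand-in objective function with a single peak at 123456."""
    return -(candidate - 123_456) ** 2

def best_in_chunk(bounds):
    """Exhaustively search one slice of the space and return its best candidate."""
    lo, hi = bounds
    return max(range(lo, hi), key=score)

if __name__ == "__main__":
    n, workers = 1_000_000, 8
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        local_bests = pool.map(best_in_chunk, chunks)  # chunks searched concurrently
    print(max(local_bests, key=score))  # -> 123456
```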