Today's post, What Core Argument?, was originally published on December 10, 2008. A summary:
The argument in favor of a strong foom just isn't well supported enough to suggest that such a dramatic process is likely.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Mechanics of Disagreement, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
What is the point of this argument? Is it the time-scale of the singularity, or the need for friendliness in AI? I was under the impression that it was the latter, but we've drifted far afield of that question. Robin addresses one of the less pivotal elements of Eliezer's claims, 1 week for 20 orders of magnitude, rather than the need for friendliness in AI. If it took 2 years to do 3 orders of magnitude, would we be any better able to resist? The only difference is that this AI would have to play its cards a little closer to the vest in the early stages.
Seriously, does Robin think that we'd be OK if an AI emerged that was the equivalent of an IQ-250 human but completely tireless and without distractions, could be copied and distributed, and whose copies could cooperate perfectly because they all had the same utility function and knew it, so they're essentially one AI... and it wasn't friendly...
We'd be in a lot of trouble, even without any sort of intelligence explosion at all.
I take it Robin would reply that this would indeed be quite bad, but not so bad nor so likely that we shouldn't pursue AI research fairly aggressively, given that AI research can lead to (for example) medical breakthroughs that can save or improve many lives, etc.
Or at any rate, Robin's point seems to be that the arguments that AI emergence would be so likely to be bad weren't very good in 2008 (I don't know whether these arguments have been improved in the meantime).