Today's post, Surprised by Brains, was originally published on 23 November 2008. A summary (taken from the LW wiki):


If you hadn't ever seen brains before, but had only seen evolution, you might start making astounding predictions about their ability. You might, for instance, think that creatures with brains might someday be able to create complex machinery in only a millennium.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Billion Dollar Bots, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


Big thank you to Hanson for helping illuminate what it is he thinks they're actually disagreeing about, in this comment:

Eliezer, it may seem obvious to you, but this is the key point on which we've been waiting for you to clearly argue. In a society like ours, but also with one or more AIs, and perhaps ems, why would innovations discovered by a single AI not spread soon to the others, and why would a non-friendly AI not use those innovations to trade, instead of war?

Just a thought: given a particular state-of-the-art, does an AI's innovation rate scale superlinearly with its size? If it does, an AI could go something like foom even if it chose to trade away all of its innovations, as it would stay more productive than all of its smaller competitors and just keep on growing.

The analogy with firms would suggest it's not like this; the analogy with brains is less clear. Also I get the sense that this doesn't correctly describe Yudkowsky's foom (which is somehow more meta-level than that).

Actually, the relevant thing isn't whether it's superlinear but whether a large AI/firm is more innovative than a set of smaller ones with the same total size. I was assuming that the smaller ones' combined innovation output would be linear in their total size, but it's probably actually sublinear, as you'd expect different AIs/firms to be redundantly researching the same things.
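
To make the scaling question above concrete, here is a small toy growth model. It is entirely my own sketch: the exponent alpha, the growth rule, the time step, and the agent sizes are illustrative assumptions, not anything from the post or the comments. The only idea it encodes is that an agent of size s produces (and reinvests) output at rate s**alpha, and that traded innovations raise every agent's productivity by the same shared factor, so they leave relative sizes untouched and can be ignored when comparing shares.

```python
# Toy sketch (illustrative assumptions only): one large agent vs. n_small
# agents of the same combined starting size, where an agent of size s
# produces and reinvests output at rate s**alpha.  Shared ("traded")
# innovations multiply everyone's output equally, so they don't change
# relative sizes and are omitted here.

def final_share(alpha, n_small=10, steps=50, dt=0.05):
    """Return the large agent's share of total capacity after `steps` rounds."""
    big = 1.0                          # one agent holding all the capacity
    small = [1.0 / n_small] * n_small  # n_small agents splitting the same capacity

    for _ in range(steps):
        big += dt * big ** alpha
        small = [s + dt * s ** alpha for s in small]

    return big / (big + sum(small))

if __name__ == "__main__":
    for alpha in (0.9, 1.0, 1.2):
        print(f"alpha = {alpha}: large agent ends with "
              f"{final_share(alpha):.0%} of total capacity")
```

In this sketch, alpha = 1 leaves the shares fixed at 50%, while alpha > 1 makes the large agent's share keep climbing even though every innovation is traded away, roughly the "stay more productive and just keep growing" scenario above. The redundancy point in the last comment would push the same direction, since duplicated research effectively discounts the small agents' pooled output.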