Comments:
Given human researchers of constant speed, computing speeds double every 18 months.
Human researchers, using top-of-the-line computers as assistants. I get the impression this matters more for chip design than litho-tool design, but it definitely helps with those too.
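The compounding implied by the quoted claim is worth spelling out. A minimal sketch (my arithmetic, not the paper's; the 18-month doubling time is the only input taken from the quote):

```python
# Growth implied by "computing speeds double every 18 months"
# given researchers of constant speed.
def speedup(years, doubling_time_years=1.5):
    """Factor by which compute grows after `years`, for a fixed doubling time."""
    return 2 ** (years / doubling_time_years)

print(speedup(1.5))   # 2.0 -- one doubling period
print(speedup(15))    # 1024.0 -- ten doublings in fifteen years
```

The point of the exponent is that constant-speed researchers nonetheless produce exponentially growing compute, so any feedback from compute back into research speed compounds on top of this baseline.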
Humans have around four times the brain volume of chimpanzees, but the difference between us is probably mostly software algorithms.
Is 'software algorithms' the right phrase? I'd characterize the improvements more as firmware or hardware improvements. [edit] Later you use the phrase "cognitive algorithms," which I'm much happier with.
A more concrete example you can use to replace the handwaving: one of the big programming productivity boosters is a second monitor, which seems directly related to low human working memory. It's easy to imagine minds with superior working memory able to handle much more complicated models and tasks. (We indeed seem to see this diversity among humans.)
In particular, your later arguments on serial causal depth seem like they would benefit from explicitly considering working memory as well as speed.
Any lab that shuts down overnight so its researchers can sleep must be limited by serial cause and effect in researcher brains more than serial cause and effect in instruments: researchers who could work without sleep would correspondingly speed up the lab.
I don't know about you, but I do research in my sleep, and my lab never shuts off our computers because we often have optimization processes running overnight (on every computer in the lab).
Most of the cycle time in research is indeed due to the human researchers rather than to computer speed (in an average month, perhaps a week is code-limited rather than human-limited), but this example as you present it is unconvincing.
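The "one code-limited week per month" estimate can be made concrete with Amdahl's-law-style arithmetic (my sketch, not the commenter's; the 25% fraction is taken from the "about a week" figure above):

```python
# If a fraction f of the research cycle is code-limited and computers
# get k times faster, the overall cycle speeds up by 1 / ((1 - f) + f/k).
def cycle_speedup(f_code_limited, compute_speedup):
    """Overall speedup of the research cycle when only the
    code-limited fraction benefits from faster computers."""
    return 1.0 / ((1.0 - f_code_limited) + f_code_limited / compute_speedup)

# One week in four is code-limited: even infinitely fast computers
# cap the overall gain at 1 / (1 - 0.25) = 4/3, about 1.33x.
print(cycle_speedup(0.25, float("inf")))
# Merely doubling compute gives 1 / (0.75 + 0.125) = 8/7, about 1.14x.
print(cycle_speedup(0.25, 2))
```

This is why the human-limited fraction dominates: no amount of hardware progress can push the cycle speedup past the reciprocal of the human-limited share.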
Given human researchers of constant speed, computing speeds double every 18 months.
Human researchers, using top-of-the-line computers as assistants.
Indeed. For me, that was the most glaring conceptual problem. That, and the attempt to predict the course of evolution with minimal reference to evolutionary theory. There is a literature on how cultural systems evolve. For a specific instance, see this:
...The third tipping point was the appearance of technology capable of accumulating and manipulating vast amounts of information outside humans, thus remo
Summary: Intelligence Explosion Microeconomics (pdf) is 40,000 words taking some initial steps toward tackling the key quantitative issue in the intelligence explosion, "reinvestable returns on cognitive investments": what kind of returns can you get from an investment in cognition, can you reinvest it to make yourself even smarter, and does this process die out or blow up? This can be thought of as the compact and hopefully more coherent successor to the AI Foom Debate of a few years back.
(Sample idea you haven't heard before: The increase in hominid brain size over evolutionary time should be interpreted as evidence about increasing marginal fitness returns on brain size, presumably due to improved brain wiring algorithms; not as direct evidence about an intelligence scaling factor from brain size.)
I hope that the open problems posed therein inspire further work by economists or economically literate modelers, interested specifically in the intelligence explosion qua cognitive intelligence rather than non-cognitive 'technological acceleration'. MIRI has an intended-to-be-small-and-technical mailing list for such discussion. In case it's not clear from context, I (Yudkowsky) am the author of the paper.
The dedicated mailing list will be small and restricted to technical discussants.
This topic was originally intended to be a sequence in Open Problems in Friendly AI, but further work produced something compacted beyond where it could be easily broken up into subposts.
Outline of contents:
1: Introduces the basic questions and the key quantitative issue of sustained reinvestable returns on cognitive investments.
2: Discusses the basic language for talking about the intelligence explosion, and argues that we should pursue this project by looking for underlying microfoundations, not by pursuing analogies to allegedly similar historical events.
3: Goes into detail on what I see as the main arguments for a fast intelligence explosion; this constitutes the bulk of the paper.
4: A tentative methodology for formalizing theories of the intelligence explosion - a project of formalizing possible microfoundations and explicitly stating their alleged relation to historical experience, such that some possibilities can allegedly be falsified.
5: Which open sub-questions seem both high-value and possibly answerable.
6: Formally poses the Open Problem and mentions what it would take for MIRI itself to directly fund further work in this field.