The process that is responsible for Moore's law involves human engineers, but it also involves human culture, machines and software.
So are you arguing for some superexponential growth from cultural evolution changing this process, or what? It's completely unclear why this matters.
The position I'm arguing against is:
if our old extrapolation was for Moore’s Law to follow such-and-such curve given human engineers, then faster engineers should break upward from that extrapolation.
This treats human engineers as a fixed quantity. However, the process that actually produces Moore's law involves human engineers, human culture, machines and software. Only the first of these is relatively unchanging. Culture, machines and software are all improving dramatically as time passes - and they are precisely why Moore's law can keep up the pace. Yudkowsky has a long history of not properly understanding this process - and it hinders his analysis.
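The distinction being argued here can be made concrete with a toy growth model. This is only an illustrative sketch: the functional forms, parameter values, and names below are my assumptions, not a model from either party's writing. The point it shows is that if annual progress depends on the product of a roughly fixed human term and a tool/culture term that compounds on its own, then the historical extrapolation already has that compounding baked in:

```python
# Toy model: Moore's-law-style progress as the product of a roughly
# fixed human input and improving tools/culture/software.
# All functional forms and parameters are illustrative assumptions.

def progress_rate(engineer_skill, tool_quality):
    """Assumed form: fixed human term times a compounding tool term."""
    return engineer_skill * tool_quality

def simulate(years, tool_growth=1.05, engineer_skill=1.0):
    tech, tool = 1.0, 1.0
    trajectory = []
    for _ in range(years):
        tech *= 1.0 + 0.4 * progress_rate(engineer_skill, tool)
        tool *= tool_growth  # tools/culture improve on their own schedule
        trajectory.append(tech)
    return trajectory

# Under this model, the historical curve already reflects tool_growth > 1,
# so "human engineers are fixed" does not imply the extrapolated rate is fixed.
baseline = simulate(20)
no_tool_growth = simulate(20, tool_growth=1.0)
assert baseline[-1] > no_tool_growth[-1]
```

On this reading, holding `engineer_skill` constant while `tool_quality` compounds still reproduces an accelerating historical curve, which is the shape of the objection being made.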
Human engineers' DNA may have stayed unchanged over the last century, but their cultural software has improved dramatically over the same period - resulting in the Flynn effect.
That may be your explanation for the Flynn effect, but I think it's safer to remain on the fence. There are too many other possible causal mechanisms at play to blame it on cultural evolution.
All of the proposed explanations of the Flynn effect can be expressed in terms of cultural evolution - except perhaps for heterosis, which is rather obviously incapable of explaining the observed effect.
Only by considering how this phenomenon is rooted in the present day can it be properly understood.
Show me a modification to one of the basic models that follows from this statement and changes the consequence of the argument.
That seems like a vague and expensive-sounding order. How would seeing "a modification to one of the basic models that follows from this statement and changes the consequence of the argument" add to the discussion?
This treats human engineers as a fixed quantity. However, the process that actually produces Moore's law involves human engineers, human culture, machines and software. Only the first of these is relatively unchanging. Culture, machines and software are all improving dramatically as time passes - and they are precisely why Moore's law can keep up the pace.
So then Moore's law should be faster than Yudkowsky's analysis predicts, because of cultural evolution? I still have no idea what you're trying to argue.
Summary: Intelligence Explosion Microeconomics (pdf) is 40,000 words taking some initial steps toward tackling the key quantitative issue in the intelligence explosion, "reinvestable returns on cognitive investments": what kind of returns can you get from an investment in cognition, can you reinvest it to make yourself even smarter, and does this process die out or blow up? This can be thought of as the compact and hopefully more coherent successor to the AI Foom Debate of a few years back.
(Sample idea you haven't heard before: The increase in hominid brain size over evolutionary time should be interpreted as evidence about increasing marginal fitness returns on brain size, presumably due to improved brain wiring algorithms; not as direct evidence about an intelligence scaling factor from brain size.)
I hope that the open problems posed therein inspire further work by economists or economically literate modelers, interested specifically in the intelligence explosion qua cognitive intelligence rather than non-cognitive 'technological acceleration'. MIRI has an intended-to-be-small-and-technical mailing list for such discussion. In case it's not clear from context, I (Yudkowsky) am the author of the paper.
Abstract:
The dedicated mailing list will be small and restricted to technical discussants.
This topic was originally intended to be a sequence in Open Problems in Friendly AI, but further work produced something compacted beyond where it could be easily broken up into subposts.
Outline of contents:
1: Introduces the basic questions and the key quantitative issue of sustained reinvestable returns on cognitive investments.
2: Discusses the basic language for talking about the intelligence explosion, and argues that we should pursue this project by looking for underlying microfoundations, not by pursuing analogies to allegedly similar historical events.
3: Goes into detail on what I see as the main arguments for a fast intelligence explosion, constituting the bulk of the paper with the following subsections:
4: A tentative methodology for formalizing theories of the intelligence explosion - a project of formalizing possible microfoundations and explicitly stating their alleged relation to historical experience, such that some possibilities can allegedly be falsified.
5: Which open sub-questions seem both high-value and possibly answerable.
6: Formally poses the Open Problem and mentions what it would take for MIRI itself to directly fund further work in this field.