Anna Salamon and I have finished a draft of "Intelligence Explosion: Evidence and Import", under peer review for The Singularity Hypothesis: A Scientific and Philosophical Assessment (forthcoming from Springer).
Your comments are most welcome.
Edit: As of 3/31/2012, the link above now points to a preprint.
Suppose agent A has goal G, and agent B has goal H (the two goals assumed to be incompatible). Put both agents in the same world. If the world reliably ends up in state G, we say that A has the greater optimization power.
I guess there's a hypothesis (though I don't know if this has been discussed much here) that this definition of optimization power is robust, i.e. you can assign each agent a score, and one agent will reliably win over another if the difference in score is great enough.
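To make the "robust scalar score" hypothesis concrete, here's a toy simulation. Everything in it is my own stand-in (the logistic win-probability model, the function names, the scale parameter), not anything from the original discussion: the claim being illustrated is just that if outcomes depend on the score difference, a big enough gap makes one agent win reliably.

```python
import math
import random

def win_prob(score_a, score_b, scale=1.0):
    """Toy assumption: the chance that A's goal G prevails over B's goal H
    is a logistic function of the difference in their scalar scores."""
    return 1.0 / (1.0 + math.exp(-(score_a - score_b) / scale))

def contest(score_a, score_b, trials=10_000, rng=random.Random(0)):
    """Fraction of simulated worlds that end in state G rather than H."""
    wins = sum(rng.random() < win_prob(score_a, score_b) for _ in range(trials))
    return wins / trials

# A small score gap gives noisy outcomes; a large gap gives reliable ones.
print(contest(1.0, 0.5))  # A usually wins, but not reliably (around 0.62)
print(contest(5.0, 0.5))  # A wins almost every time (above 0.95)
```

Under this model "robustness" just means the win probability is monotone in the score gap, so a sufficiently large gap drives the outcome arbitrarily close to certainty.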
If the world is complex and uncertain then this will necessarily be "cross-domain" optimization power, because there will be enough novelty and variety in the sorts of tasks the agents will need to complete that they can't just have everything programmed in explicitly at the start.
So optimization power determines who ends up ruling the world - it's the thing that we really care about here.
But you can improve the optimization power of many kinds of agent just by adding some resource (such as money or computer hardware). This is relatively straightforward and doesn't constitute innovation. To improve the resource->optimization_power function itself, though, you do need innovation, and that is what we're trying to capture with the word "intelligence".
(Just to make it clear, here I'm talking about innovation generating intelligence, not intelligence generating innovations.)
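The resources-versus-innovation distinction can be sketched in a few lines. The function and the "efficiency" knob are my own hypothetical stand-ins, not anything from the discussion; the point is only that the two interventions act on different things:

```python
def optimization_power(resources, efficiency):
    """Toy resource -> optimization_power map: power grows with resources,
    scaled by an efficiency term that innovation improves."""
    return efficiency * resources

# Adding resources raises power without any innovation:
print(optimization_power(10, 1.0))  # 10.0
print(optimization_power(20, 1.0))  # 20.0

# Innovation instead improves the function itself (here, its efficiency):
print(optimization_power(10, 2.0))  # 20.0
```

So "intelligence", on this reading, is whatever moves you from the first function to a better one, rather than whatever moves you along a fixed function.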
But we don't always expect optimization power to scale linearly with resources, so I think Robin Hanson may be closer to the mark with his "production function" model than Yudkowsky with his "divide one thing by the other" model. If you give me so much money that I'm no longer getting much marginal value from it, you're not actually making me stupider.
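A quick numerical sketch of that point, under my own assumption of a concave (logarithmic) production function, which is just one possible shape in the spirit of Hanson's model:

```python
import math

def power_from_money(money):
    """Hypothetical concave production function: returns diminish,
    but power never decreases as resources grow."""
    return math.log1p(money)

for money in (10, 100, 1000, 10000):
    marginal = power_from_money(money + 1) - power_from_money(money)
    quotient = power_from_money(money) / money  # "divide one thing by the other"
    print(money, round(marginal, 5), round(quotient, 5))
```

Both the marginal value of an extra dollar and the power/resources quotient fall as money grows, yet total power keeps rising. So a falling quotient doesn't mean the agent is getting stupider; it just means the production function is concave.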
Fitnesses are dependent on the environment, though. So: suppose agent A has goal GA, B has goal GB, and C has goal GC, and that pairing A with B produces GA, pairing B with C produces GB, and pairing C with A produces GC. Then you can't just assign scalar fitnesses to each agent and expect that to work. That could happen with circular predation, for example.
If you do want to assign scalar fitne...