But current predictions of what happens when smarter-than-human AI is created rely, in part, on there being a positive relationship between brain/processing power and technological innovation.
The brain power and processing power of humanity is ever increasing: a larger human population, more educated humans, and more computing power. We can crunch ever-bigger data sets, and the science we are trying to do requires them (the LHC, genomic analysis, weather prediction). Perhaps we have nearly exhausted the simple science and are left with the increasingly complex, and a similar problem would face an AI trying to self-improve. The question is whether its rate of self-improvement would be greater or less than the rate at which the problems it had to solve in order to self-improve grew more difficult.
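That race between self-improvement and rising difficulty can be made concrete with a toy simulation. This is only an illustrative sketch, not a model from the post: the function name, the accounting scheme, and every parameter value below are invented for illustration.

```python
# Toy model (illustrative only): does recursive self-improvement take off
# or fizzle? Each improvement multiplies capability by `gain`, but the
# effort required for the next improvement grows by `difficulty_growth`.

def simulate(gain, difficulty_growth, steps=100):
    """Return the capability trajectory over `steps` time steps.

    Effort produced per step is proportional to current capability;
    when enough effort is banked, the next improvement is purchased.
    """
    capability = 1.0
    next_cost = 1.0       # effort needed for the next improvement
    banked = 0.0          # effort accumulated so far
    history = [capability]
    for _ in range(steps):
        banked += capability                 # smarter systems work faster
        if banked >= next_cost:
            banked -= next_cost
            capability *= gain               # the improvement pays off
            next_cost *= difficulty_growth   # the next one is harder
        history.append(capability)
    return history

# Improvement outpaces difficulty: explosive growth.
takeoff = simulate(gain=1.5, difficulty_growth=1.2)

# Difficulty outpaces improvement: progress stalls.
fizzle = simulate(gain=1.1, difficulty_growth=3.0)
```

In the first regime capability compounds faster than the problems harden, so improvements arrive ever more easily; in the second, each improvement buys only a 10% gain while tripling the cost of the next one, so progress grinds to a near halt.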
Edit: Q&A is now closed. Thanks to everyone for participating, and thanks very much to Harpending and Cochran for their responses.
In response to Kaj's review, Henry Harpending and Gregory Cochran, the authors of The 10,000 Year Explosion, have agreed to a Q&A session with the Less Wrong community.
If you have any questions for either Harpending or Cochran, please reply to this post with a question addressed to one or both of them. Material for questions might be drawn from their blog for the book, which includes stories about hunting animals in Africa with an eye toward their evolutionary implications (and which came to Jennifer's attention via Steve Sailer).
Please do not kibitz in this Q&A thread; instead, go to the kibitzing area to talk about the Q&A session itself. Eventually, this post will be edited to note that the process has been closed, at which point there should be no new questions.