whpearson comments on Q&A with Harpending and Cochran - Less Wrong

26 Post author: MBlume 10 May 2010 11:01PM




Comment author: whpearson 14 May 2010 05:34:51PM 2 points

But current predictions of what happens when smarter-than-human AI is made rely, to some extent, on there being a positive relationship between brain/processing power and technological innovation.

The brain power and processing power of humanity is ever increasing: more human population, more educated humans, and more computing power. We can crunch ever-bigger data sets, and the science we are now trying to do requires them (the LHC, genomic analysis, weather prediction). Perhaps we have nearly exhausted the simple science and are left with the increasingly complex, and a similar problem would face an AI that tries to self-improve. The question is whether its rate of self-improvement would be greater or less than the rate at which the problems it must solve in order to self-improve grow harder.
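This race between the two rates can be sketched with a toy model (my own illustration, not anything from the comment; the growth parameters are entirely hypothetical): capability grows each step by an amount proportional to current capability divided by current problem difficulty, while difficulty itself grows geometrically. Whether capability explodes or plateaus depends on which rate wins.

```python
# Toy model of self-improvement racing against rising problem difficulty.
# All parameters are hypothetical illustrations, not empirical estimates.

def run(difficulty_growth, steps=60, gain=0.5):
    """Return final capability after `steps` rounds of self-improvement."""
    capability, difficulty = 1.0, 1.0
    for _ in range(steps):
        # Progress per step shrinks as the remaining problems get harder.
        capability += gain * capability / difficulty
        # The next round of improvements is harder than the last.
        difficulty *= difficulty_growth
    return capability

# Difficulty held constant: compound growth, i.e. a "takeoff".
print(f"constant difficulty: {run(difficulty_growth=1.0):.3g}")
# Difficulty outpacing gains: capability levels off at a finite ceiling.
print(f"rising difficulty:   {run(difficulty_growth=1.5):.3g}")
```

With constant difficulty, capability compounds by a fixed factor each step and grows exponentially; when difficulty grows faster than capability's returns, the per-step gains shrink geometrically and total capability converges to a plateau, which is exactly the distinction the comment is pointing at.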