ThrustVectoring comments on Why AI may not foom - Less Wrong Discussion
Not all of the human thought process goes on inside the head. An engineer with a computer generates far more designs than one with only a pad of paper, who in turn is more productive than one with no tools at all.
We've merely picked the obvious, low-hanging recursive improvements. From offloading calculations out of our heads (abacus, paper and pencil, slide rule, computer) to better organizational systems, we've steadily improved our ability to turn our thoughts into useful work.
If we find another big improvement, it will seem obvious in retrospect.
You are right, and it's interesting to consider this quote from the article in that light:
What would a group of human AI researchers capable of completely reimplementing a copy of themselves be able to do? I'm assuming, for the sake of the example, that if an AGI could do it, so could the human researchers it is on par with. That is actually a tremendous amount of power for either an AGI or a group of humans. As things stand today, we've been lucky to discover modern medicine, farming techniques, and fossil fuels just to boost the total population and skim the tiny percentage of scientists and engineers off of it. We won't be able to double the number of high-quality AI researchers every 50 years for long on this rock without an actual improvement in the rate of growth of AI research. The point where any system acquires the ability to be self-sustaining seems like it would have to be an inflection point of greatly increased growth.
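To make the "for long on this rock" point concrete, here is a back-of-envelope sketch in Python. The starting pool size and the population ceiling are assumptions chosen purely for illustration, not figures from the post; the point is only that a fixed doubling period backed by population growth alone hits a hard ceiling after a bounded number of doublings.

import math

# Back-of-envelope check on the "doubling every 50 years" claim above.
# Both numbers are hypothetical placeholders, not figures from the post.
current_researchers = 30_000          # assumed pool of high-quality AI researchers today
population_ceiling = 10_000_000_000   # assumed hard cap: every person on Earth

# How many 50-year doublings fit before researchers would exceed the whole population?
doublings = math.log2(population_ceiling / current_researchers)
print(f"{doublings:.1f} doublings, about {doublings * 50:.0f} years, before the trend hits the ceiling")

With these illustrative numbers the trend exhausts the entire population after roughly eighteen doublings, on the order of a millennium; changing the assumptions shifts the date but not the existence of the ceiling, which is the sense in which growth driven only by adding more human researchers cannot continue indefinitely.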