You raise a good point here, which relates to my question: Is Good's "intelligence explosion" a mathematically well-defined idea, or just a vague hypothesis that sounds plausible? When we are talking about something as poorly defined as intelligence, it seems a bit ridiculous to jump to "lather, rinse, repeat, FOOM, the universe will soon end" conclusions, as many people seem to like to do. Is there a mathematical description of this recursive process that takes its own complexity into account, or are these just vague and overly reductionist claims by people who perhaps suffer from an excessive attachment to their own abstract models and a lack of exposure to the (so-called) real world?
To be clear, I do not doubt that superhuman artificial general intelligence is practically possible. I do not doubt that humans will be able to create it. What I am questioning is the FOOM part.
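For what it's worth, the closest I can get to making the question precise is a toy growth model. To be clear, this is my own sketch, not anything from Good's paper: let I(t) be "intelligence" (whatever that means), and suppose it improves itself at a rate that scales as a power k of the current level. The exponent k is a free parameter, not an established quantity.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Toy model (my own sketch, not Good's): intelligence I(t) improves
% itself at a rate scaling as a power k of its current level.
\[
  \frac{dI}{dt} = c\,I^{k}, \qquad c > 0, \quad I(0) = I_0 .
\]
Separating variables gives three qualitatively different regimes:
\[
  I(t) =
  \begin{cases}
    \bigl(I_0^{\,1-k} + (1-k)\,c\,t\bigr)^{\frac{1}{1-k}}, &
      k < 1 \text{ (polynomial growth, no FOOM)},\\[4pt]
    I_0\,e^{c t}, &
      k = 1 \text{ (exponential growth)},\\[4pt]
    \bigl(I_0^{\,1-k} - (k-1)\,c\,t\bigr)^{-\frac{1}{k-1}}, &
      k > 1 \text{ (blows up at } t^{*} = \tfrac{I_0^{\,1-k}}{(k-1)\,c}\text{)}.
  \end{cases}
\]
\end{document}
```

On this model, FOOM is just the empirical claim that k > 1, i.e. that returns on applying intelligence to its own improvement are superlinear. The model itself says nothing about what k actually is, which is exactly the part that gets hand-waved.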
...people who perhaps suffer from an excessive attachment to their own abstract models and a lack of exposure to the (so-called) real world?
Yeah, take, for example, this article by Eliezer. As far as I understand it, I agree with everything except the last paragraph:
...It might perhaps be more limited than this in mere practice, if it's just running on a laptop...
Link: nextbigfuture.com/2011/05/mit-proves-that-simpler-systems-can.html
Might this also be the case for intelligence? To paraphrase the question: can intelligence be effectively applied to itself?
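As a crude illustration of what is at stake in that question (again my own toy sketch; the functional form and constants are made up for the example, not drawn from any of the linked posts), here is a discrete version of a self-improvement loop, where each cycle converts current capability into an improvement whose size scales as a power k of that capability:

```python
# Toy discrete model of recursive self-improvement (illustrative only;
# the update rule and constants are assumptions, not established facts).
# Each cycle, the system gains c * I**k on top of its current level I.

def self_improve(i0: float, c: float, k: float, cycles: int) -> list[float]:
    """Iterate I_{n+1} = I_n + c * I_n**k and return the trajectory."""
    trajectory = [i0]
    for _ in range(cycles):
        trajectory.append(trajectory[-1] + c * trajectory[-1] ** k)
    return trajectory

# Superlinear returns (k > 1): the trajectory accelerates without bound.
print(self_improve(i0=1.0, c=0.1, k=1.5, cycles=30)[-1])

# Diminishing returns (k < 1): growth continues but keeps decelerating.
print(self_improve(i0=1.0, c=0.1, k=0.5, cycles=30)[-1])
```

With k > 1 the loop FOOMs; with k < 1 it keeps improving but ever more slowly, which looks much more like the cities in the analogy below. Everything turns on which regime self-improvement actually falls into, and I have never seen that established.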
This reminds me of a post by Robin Hanson:
Link: Is The City-ularity Near?
Of course, artificial general intelligence might differ in its nature from the complexity of cities. But do we have any evidence that hints at such a difference?
Link: How far can AI jump?
(via Hard Takeoff Sources)