Another argument for a single AI project causing a big jump is that intelligence might be the sort of thing for which there is one underlying principle: until you discover it you have nothing, and once you have it you can build the smartest thing ever in an afternoon and extend it indefinitely. Why would intelligence have such a principle? I haven't heard a good reason.
There is, in fact, a reason to think it does not: Is there an Elegant Universal Theory of Prediction?
Link: nextbigfuture.com/2011/05/mit-proves-that-simpler-systems-can.html
Might this also be the case for intelligence? To paraphrase the question: can intelligence be effectively applied to itself?
This reminds me of a post by Robin Hanson:
Link: Is The City-ularity Near?
Of course, artificial general intelligence might be different in nature from the complexity of cities. But do we have any evidence that hints at such a difference?
Link: How far can AI jump?
(via Hard Takeoff Sources)