The extensibility argument for greater-than-human intelligence holds that once we build a human-level AGI, the extensibility of the method that produced it would make an AGI of greater-than-human intelligence feasible. David Chalmers identifies it as one of the main premises of the singularity and intelligence explosion hypothesis [1]. One intuitive ground for the argument is that information technologies have historically shown continuous growth in computational capacity. Chalmers presents the argument as follows:
I. If we produce a human-level AGI by an extendible method, then that method can be improved (soon after).
II. If the method is improved, it will yield an AGI of greater-than-human intelligence.
—————-
III. Therefore, absent defeaters, producing a human-level AGI by an extendible method will lead to an AGI of greater-than-human intelligence (soon after).
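The logical skeleton is a plain chain of conditionals. In propositional shorthand (the letters below are labels introduced here for illustration, not Chalmers' notation), with $A$ = "human-level AGI is produced by an extendible method", $I$ = "the method is improved", and $G$ = "there is greater-than-human AGI":

$$(A \to I),\quad (I \to G)\ \therefore\ (A \to G)$$

The inference itself is just hypothetical syllogism, so the argument's force rests entirely on the premises.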
Chalmers notes that premises I and II follow directly from most definitions of an 'extendible method': a method that can easily be improved, yielding more intelligent systems. One candidate extendible method is programming an AGI, since all known software seems improvable. One known non-extendible method is biological reproduction, which produces human-level intelligence and nothing more. There could also be methods of achieving greater-than-human intelligence without first creating a human-level AGI, for example through biological cognitive enhancement or genetic engineering. If the resulting greater-than-human intelligence is itself extendible, and each successive level of intelligence is as well, then an intelligence explosion would follow; the toy sketch after this paragraph illustrates the recursion. It could be argued that we are at, or near, a ceiling of intelligence that is very hard to surpass, but there seems to be little to no basis for this claim.
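As a purely illustrative sketch (not a model proposed by Chalmers or anyone cited here), the recursion can be caricatured in a few lines: each generation applies the still-extendible method to itself and produces a successor whose intelligence is its own multiplied by an assumed extension factor. The starting level and the factor are made-up parameters; the only point is that any factor sustained above 1 compounds into explosive growth, while a factor of exactly 1 is the "ceiling" case.

```python
def intelligence_explosion(level: float = 1.0,
                           extension_factor: float = 1.2,
                           generations: int = 10) -> list[float]:
    """Toy model of iterated extension (illustrative only).

    `level` is human-level intelligence normalized to 1.0;
    `extension_factor` is the assumed multiplicative gain each time a
    system extends the method that produced it. Both numbers are
    arbitrary assumptions, not estimates from the literature.
    """
    levels = [level]
    for _ in range(generations):
        # Each generation applies the (assumed still-extendible) method
        # to itself, yielding a more intelligent successor.
        level *= extension_factor
        levels.append(level)
    return levels

# Any sustained factor > 1 compounds exponentially:
# 1.0 -> 1.2 -> 1.44 -> ... -> ~6.19 after ten generations.
print(intelligence_explosion())
```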
Luke Muehlhauser and Anna Salamon [3] list several features of an artificial human-level intelligence that suggest it would be easily extendible: increased computational resources, increased communication speed, increased serial depth, duplicability, editability, goal coordination, and improved rationality. They also agree with Omohundro [4][5][6] and Bostrom [7] that most advanced intelligences would have the instrumental goal of increasing their own intelligence, since greater intelligence helps achieve almost any other goal.