Wei_Dai comments on The metaphor/myth of general intelligence - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This may be a logical consequence of "a minimum of understanding and planning go a long way". As evolution slowly increases the intelligence of some species, at some point a threshold is crossed and a technological explosion happens. If "a minimum of understanding and planning go a long way", then this happens pretty early, while that species can still be considered a poor general intelligence on an absolute scale. This is one of the reasons why Eliezer thinks that superhuman general intelligence may not be that hard to achieve, if I understand correctly.
The added part is interesting. I'll try to respond separately.
That needs a somewhat stronger result: "a minimal increment of understanding and planning goes a long way further". And that's partially what I'm wondering about here.
The example of humans up to von Neumann shows there aren't much in the way of diminishing returns to general intelligence across a fairly broad range. It would be surprising if diminishing returns set in right above von Neumann's level, and if that's true, I think there would have to be some explanation for it.
Humans are known to have correlations between their different types of intelligence (the supposed "g"). But this seems not to be a genuine general intelligence (e.g., a mathematician using maths to successfully model human relations), but rather a correlation of specialised submodules. That correlation need not exist for AIs.
vN maybe shows there is no hard limit, but statistically there seem to be quite a lot of crazy chess grandmasters, crazy mathematicians, crazy composers, etc.