DanArmak comments on General intelligence test: no domains of stupidity - Less Wrong
That's a sufficient condition, but I don't think it's a necessary one: passing that test isn't the only way we could know it has real GI (general intelligence). For instance, it might have had, or adapted, narrow modules for those particular purposes before its GI became powerful enough.
Also, human GI is barely powerful enough to write the algorithms for new modules like that. In some areas we still haven't succeeded; in others it took hundreds of person-years of R&D. Humans are an example that, with good enough narrow modules, the GI part doesn't have to be... well, superhumanly intelligent.
On the other hand, we're perfectly capable of acquiring skills that we didn't evolve to possess, e.g., flying planes.
We do have a general intelligence. Without it we'd be just smart chimps.
But in most fields where we have a dedicated module - visual recognition, spatial modeling, controlling our bodies, speech recognition, processing, and production - our GI couldn't begin to replace it. And we haven't been able to easily create equivalent algorithms (and the problems aren't just computing power).
Yes - my test criteria are unfair to the AI (arguably the Turing test is as well). I can't think of a method that has good specificity as well as good sensitivity.