DanArmak comments on General intelligence test: no domains of stupidity - Less Wrong

Post author: Stuart_Armstrong 21 May 2013 04:04PM




Comment author: DanArmak 27 May 2013 09:38:36PM 1 point

That's a sufficient condition, but I don't think it's a necessary one - it's not only in that case that we'd know it has real general intelligence (GI). For instance, it might have had, or adapted, narrow modules for those particular purposes before its GI became powerful enough.

Also, human GI is barely powerful enough to write the algorithms for new modules like that. In some areas we still haven't succeeded; in others it took hundreds of person-years of R&D. Humans are an example that, with good enough narrow modules, the GI part doesn't have to be... well, superhumanly intelligent.

Comment author: Eugine_Nier 28 May 2013 01:55:31AM 1 point

On the other hand, we're perfectly capable of acquiring skills that we didn't evolve to possess, e.g., flying planes.

Comment author: DanArmak 28 May 2013 07:10:09AM 1 point

We do have a general intelligence. Without it we'd be just smart chimps.

But in most fields where we have a dedicated module - visual recognition, spatial modeling, controlling our bodies, speech recognition, processing, and production - our GI couldn't begin to replace it. And we haven't been able to easily create equivalent algorithms (and the problems aren't just a matter of computing power).

Comment author: Stuart_Armstrong 28 May 2013 12:23:35PM 0 points

Yes - my test criteria are unfair to the AI (arguably the Turing test is as well). I can't think of a method that has good specificity as well as good sensitivity.