
I don't find this convincing.

“Human intelligence” is often compared to “chimpanzee intelligence” in a manner that presents the former as being so much more awesome than, and different from, the latter. Yet this is not the case. If we look at individuals in isolation, a human is hardly that much more capable than a chimpanzee.

I think the same argument has been made by Hanson, and it doesn't seem to be true. Humans seem significantly superior, based on the fact that they are capable of learning language; as far as I know, there is no recorded instance of a chimpanzee doing that. The quote accurately points out that there are lots of things which neither an individual human (or tribe) nor a chimpanzee can do, but it ignores the fact that there are also things which a human can in fact do and a chimpanzee can't. Moreover, even if it were true that a human brain isn't that much more awesome than a chimpanzee's, that wouldn't imply that an AI can't be much more awesome than a human brain.

The remainder of the article argues that human capability really rests on a lot of implicit skills that aren't written down anywhere. I don't think this argument holds. If an AI is capable of reading much more quickly than humans, then it should also be capable of watching video footage much more quickly than humans (if not by the same factor), and if it has access to the Internet, then I don't see why it shouldn't be able to learn how to turn the right knobs and handles on an oil rig, how to read the faces of humans, or literally anything else.

Am I missing something here?

It occurred to me too that the verbal domain is a strong human advantage.

The "turning knobs on an oil rig" analogy is particularly unconvincing. Even a smart human can read the engineering schematics and infer what the knobs do without needing to be shown.

I can potentially see an argument about mechanisms that are likely to have been jury-rigged off-spec in the field, or ones that are currently partially malfunctioning.

The best argument for implicit knowledge would be something like pure math research. While it is easy enough to get hold of the axioms of mathematics, it is harder to see how people learn to search the space of proofs from videos etc. My best model of how this skill is transferred is that people learn by attempting to do maths and then getting feedback on their methodology and on what their teachers think needs to change. The teacher therefore needs to be able to model the student somewhat, so that they can give useful feedback and correct errors. If this is really necessary, then computers will need a lot of personal human input to get good at abstract reasoning.
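To make that distinction concrete, here is a minimal toy sketch of my own (all names here, like `Student` and `TARGET_STEPS`, are hypothetical illustration, not anything from the article): outcome-only feedback tells the student nothing about where their search went wrong, while a teacher who models the student's process can flag the first unproductive step.

```python
import random

# A "correct" derivation, standing in for a proof the student is searching for.
TARGET_STEPS = ["expand", "factor", "cancel", "simplify"]

class Student:
    def __init__(self):
        # The student knows the legal moves but not which sequence works.
        self.known = ["expand", "factor", "cancel", "simplify", "guess", "skip"]

    def attempt(self):
        # Blind search over the space of derivations.
        return [random.choice(self.known) for _ in range(len(TARGET_STEPS))]

def outcome_feedback(attempt):
    # Pass/fail only: says nothing about *where* the method went wrong.
    return attempt == TARGET_STEPS

def methodology_feedback(attempt):
    # The teacher models the student's process and flags the first bad step.
    for i, (got, want) in enumerate(zip(attempt, TARGET_STEPS)):
        if got != want:
            return f"step {i}: '{got}' is not productive here, try '{want}'"
    return "correct"

student = Student()
attempt = student.attempt()
print(outcome_feedback(attempt))      # e.g. False
print(methodology_feedback(attempt))  # e.g. "step 0: 'guess' is not productive here, ..."
```

The only point of the toy is that methodology feedback requires the teacher to represent the student's steps rather than just check the final result, which is the part that plausibly needs a human in the loop.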

I don't think this is necessarily the strongest argument, though.