Thus the Society of Mind remark - Minsky's thesis, as I understand it, is that the mind is a kludge of tailor-made components that perform well in their own domains but are basically useless outside of them (which seems to me incompatible with the phenomenon of neuroplasticity).
In a complex ANN or a brain, you start with a really simple hierarchical prior over the network and a general-purpose optimizer. After training you may get a 'kludge of tailor-made components' that perform really well on the domain you trained on. The result may be specific, but the process is very general.
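To make the "general process, specific result" point concrete, here is a toy sketch (a hypothetical setup, not anyone's actual system): the exact same generic optimizer, which knows nothing about either task beyond a gradient, produces two completely different specialized components depending only on the data it is trained on.

```python
import numpy as np

def sgd_train(X, y, steps=500, lr=0.1):
    """A general-purpose optimizer: plain gradient descent on squared error.
    It contains no task-specific knowledge whatsoever."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))

# Two unrelated "domains" (different target functions): the same general
# procedure yields two very different task-specific weight vectors.
w_a = sgd_train(X, X @ np.array([2.0, -1.0, 0.5]))
w_b = sgd_train(X, X @ np.array([-3.0, 0.0, 4.0]))
```

Each resulting `w` is narrowly specialized - useless on the other task - even though the training procedure that produced both is identical and fully general.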
A much more compelling demonstration would be the ability of a system to, say, read a textbook on topology and then pass an exam paper on the subject.
Yes, but that probably requires a large number of precursor capabilities that AI systems do not yet possess.
I generally agree that a proper "AGI-hard" metric will include a large battery of tests to get coverage over a wide range of abilities. We actually already have a good deal of experience with how to train general intelligences and how to come up with good test metrics - in the field of education.
However, you could view the various AI benchmarks in aggregate as an AGI test battery - each test measures only a narrow ability, but combine enough of those tests and you have something more general. The recent development of textual QA benchmarks is the next step in that progression. Game environment tests such as Atari provide another, orthogonal way to measure AGI progress.
Just to be clear: what I meant by "domain-specific methods" in this context is auxiliary techniques that boost the performance of the general "component synthesis procedure" (such as an ANN) within a specific domain. It seems that if you want a truly general system, even one that works by producing hairy purpose-specific components, then such auxiliary techniques cannot be used (unless synthesized by the agent itself). You can push this requirement to absurdity in practice, so I'm only stressing that it should be capable of tractably in...
A research team in China has created a system for answering verbal analogy questions of the type found on the GRE and on IQ tests, and it scores a little above the average human score - perhaps corresponding to an IQ of around 105. This improves substantially on the reported SOTA in AI for these types of problems.
This work builds on deep word-vector embeddings, which have led to large gains in translation and many other NLP tasks. One of their key improvements is learning multiple vectors per word, with the number of distinct word senses simply taken from a dictionary. This matters because verbal analogy questions often hinge on rarer word meanings. They also employ modules specialized for the different question types.
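A rough illustration of the multi-vector idea (toy hand-made vectors, not the team's actual model): keep a list of sense vectors per word, and score an analogy a:b :: c:d with the classic vector-offset rule, maximizing over sense combinations so that the question itself selects the relevant sense of a polysemous word.

```python
import numpy as np

# Toy multi-sense embeddings: each word maps to a list of sense vectors.
# (Hand-made for illustration; a real system learns them, with the sense
# count per word taken from a dictionary.)
emb = {
    "man":   [np.array([1.0, 0.0])],
    "woman": [np.array([1.0, 1.0])],
    "king":  [np.array([2.0, 0.0])],
    "queen": [np.array([2.0, 1.0])],
    # two senses: "monarch" and "measuring stick"
    "ruler": [np.array([2.0, 0.5]), np.array([-1.0, -1.0])],
}

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy_score(a, b, c, d):
    """Score a:b :: c:d as the cosine similarity of the two offset vectors,
    maximized over all sense combinations of the four words."""
    return max(
        cos(vb - va, vd - vc)
        for va in emb[a] for vb in emb[b]
        for vc in emb[c] for vd in emb[d]
    )
```

For "man:king :: woman:?" the candidate "ruler" is scored via its monarch sense (the inner max discards the measuring-stick sense), so polysemy doesn't drag its score down; candidate answers are then simply ranked by this score.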
I vaguely remember reading that AI systems are already fairly strong at solving visual Raven's-matrices-style IQ questions, although I haven't looked into that in detail.
The multi-vector technique is probably the most important takeaway for future work.
Even if follow-up work reaches superhuman verbal IQ in a few years, this of course doesn't immediately imply AGI. These types of IQ tests measure specific abilities which correlate with general intelligence in humans, but those abilities are only a small subset of the systems required for general intelligence, and probably rely on a smallish subset of the brain's circuitry.