I've argued that we might have to worry about dangerous non-general intelligences. In a series of back-and-forth exchanges with Wei Dai, we agreed that some level of general intelligence (such as humans seem to possess) appears to be a great advantage, though possibly one with diminishing returns. Therefore a dangerous AI could be one with great narrow intelligence in one area and a little bit of general intelligence in others.
The traditional view of an intelligence explosion is that of an AI that knows how to do X suddenly getting (much) better at doing X, to a level beyond human capacity. Call this the gain-of-aptitude intelligence explosion. We can prepare for that, maybe, by tracking the AI's ability level and seeing if it shoots up.
But the example above hints at another kind of potentially dangerous intelligence explosion: that of a very intelligent but narrow AI that suddenly gains intelligence across other domains. Call this the gain-of-function intelligence explosion. If we're not looking specifically for it, it may not trigger any warnings - the AI might still be dumber than the average human in those other domains. But that could be enough, when combined with its narrow superintelligence, to make it deadly. We can't ignore the toaster that starts babbling.
Maybe a form of unit testing could be useful? Create a simple and a not-so-simple test for a range of domains, and get all AIs to run them periodically.
By default the narrow AIs would fail even the simple tests in other domains, but we would be able to monitor whether, and how quickly, they pick those domains up.
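To make that a bit more concrete, here is a rough Python sketch of what such periodic cross-domain tests might look like. The domains, the tasks, and the interfaces to the AI under test (`ai_answer`, `grade`) are all hypothetical placeholders, not an actual proposal for a test suite:

```python
# A minimal sketch of cross-domain "capability unit tests", assuming we can
# query the AI on tasks outside its specialty and score the answers.
# Domains, tasks, and the ai_answer/grade interfaces are assumptions.

DOMAINS = {
    "arithmetic": [("2 + 2", "4"), ("17 * 3", "51")],            # simple tests
    "language":   [("Give an antonym of 'hot'.", "cold")],        # simple test
    "planning":   [("List the steps to boil an egg.", None)],     # harder, needs a grader
}

def run_capability_tests(ai_answer, grade, history):
    """Run every domain's tests and log pass rates over time.

    ai_answer(prompt) -> str                  : queries the AI under test
    grade(prompt, answer, expected) -> bool   : scores one response
    history : dict mapping domain -> list of past pass rates
    """
    for domain, tasks in DOMAINS.items():
        passed = sum(
            grade(prompt, ai_answer(prompt), expected)
            for prompt, expected in tasks
        )
        rate = passed / len(tasks)
        history.setdefault(domain, []).append(rate)

        # Flag a sudden gain of function: a domain the AI used to fail
        # consistently that it now mostly passes.
        previous = history[domain][:-1]
        if previous and max(previous) < 0.2 and rate > 0.8:
            print(f"WARNING: sudden capability gain in '{domain}' ({rate:.0%})")
```

The interesting signal isn't the absolute pass rate but its trajectory: a domain that goes from consistently failed to consistently passed is exactly the gain of function we would want flagged.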
Another test could be to see if its performance in its own field suddenly jumps in effectiveness. To give a real-world example: when Google (which is the closest thing we have to an AI right now, I think) gained the ability to suggest terms based on what one has already typed, it became much easier to search for things. The same will be true when it eventually gains the ability to parse human language, and so on.
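Very roughly, monitoring for that kind of jump in the AI's own domain could look something like the sketch below. The window size and threshold are arbitrary assumptions, and real benchmark scores would be noisier than this:

```python
# A rough sketch of the second test: flag a sudden jump in the AI's scores
# on its own specialty. Window and ratio are arbitrary assumed parameters.

def detect_performance_jump(scores, window=10, ratio=1.5):
    """Return True if the latest score exceeds the recent average by `ratio`.

    scores : list of periodic benchmark scores in the AI's narrow domain,
             where higher means better (assumed).
    """
    if len(scores) <= window:
        return False
    baseline = sum(scores[-window - 1:-1]) / window
    return baseline > 0 and scores[-1] > ratio * baseline

# Example: a flat history followed by a sharp improvement trips the check.
history = [0.61, 0.63, 0.60, 0.62, 0.64, 0.62, 0.61, 0.63, 0.62, 0.63, 0.97]
print(detect_performance_jump(history))  # True
```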