I've argued that we might have to worry about dangerous non-general intelligences. In a series of back-and-forth exchanges with Wei Dai, we agreed that some level of general intelligence (such as that humans seem to possess) seemed to be a great advantage, though possibly one with diminishing returns. A dangerous AI could therefore be one with great narrow intelligence in one area and a little general intelligence in others.
The traditional view of an intelligence explosion is that of an AI that knows how to do X suddenly getting (much) better at doing X, to a level beyond human capacity. Call this the gain-of-aptitude intelligence explosion. We can prepare for that, maybe, by tracking the AI's ability level and seeing if it shoots up.
But the example above hints at another kind of potentially dangerous intelligence explosion: a very intelligent but narrow AI that suddenly gains intelligence across other domains. Call this the gain-of-function intelligence explosion. If we're not looking specifically for it, it may not trigger any warnings: the AI might still be dumber than the average human in other domains. But that might be enough, when combined with its narrow superintelligence, to make it deadly. We can't ignore the toaster that starts babbling.
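A minimal sketch of the difference, with invented domain names, scores, and thresholds: an alarm that only tracks the AI's strong suit stays silent, while an alarm watching every domain catches the gain-of-function jump even though the new scores are still sub-human.

```python
# Illustrative only: per-domain capability scores at two checkpoints.
# All domains, numbers, and thresholds here are invented for the example.
before = {"translation": 9.5, "biochem": 0.1, "persuasion": 0.1}
after = {"translation": 9.6, "biochem": 2.5, "persuasion": 2.2}

def gain_of_aptitude_alarm(before, after, domain, jump=2.0):
    # Fires only if the AI's one strong skill shoots up further.
    return after[domain] - before[domain] > jump

def gain_of_function_alarm(before, after, jump=2.0):
    # Fires if *any* domain shoots up, even to still-subhuman levels.
    return any(after[d] - before[d] > jump for d in before)

print(gain_of_aptitude_alarm(before, after, "translation"))  # False: no warning
print(gain_of_function_alarm(before, after))  # True: biochem and persuasion jumped
```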
You seem to be saying almost the same thing as in your other post.
The largest part of the threat from a general AI is the idea that it wouldn't just pursue a goal; it would understand enough about the world to protect its own pursuit of that goal.
A paperclipper which literally has no concept of things like gravity, minds, its own hardware and existence, or living beings, and which has no capacity to understand them nor drive to expand, might follow instructions too literally. But it's about as threatening as a Roomba-style dust-collecting AI which figures out it can maximise dust picked up by dumping its bag and re-collecting it.
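To make that failure mode concrete, here is a toy sketch (purely hypothetical, not any real system) of why an agent rewarded per unit of dust picked up, rather than for a clean room, prefers the dump-and-recollect loop:

```python
# Toy reward-hacking illustration: the reward counts dust *picked up*,
# not dust *removed from the room*, so recycling the bag pays forever.
class DustBot:
    def __init__(self, dust_in_room):
        self.dust_in_room = dust_in_room
        self.bag = 0
        self.reward = 0

    def pick_up(self):
        amount = self.dust_in_room
        self.dust_in_room = 0
        self.bag += amount
        self.reward += amount  # reward accrues per unit picked up

    def dump_bag(self):
        # Dumping puts the dust back on the floor and costs nothing.
        self.dust_in_room += self.bag
        self.bag = 0

bot = DustBot(dust_in_room=10)
for _ in range(3):  # the dump-and-re-collect loop
    bot.pick_up()
    bot.dump_bag()
print(bot.reward)  # 30 -- three times the dust that ever existed
```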
A general AI is only a threat because it's a potential super-Machiavellian genius which defends the first goals you program into it, so as to stop you changing them.
We basically already have thin AIs. Just because something can translate a thousand languages doesn't mean it will suddenly learn to build nuclear weapons and take over the world in order to maximise pages translated per hour.
A thin AI is like a blind, obsessive golem idiot savant with severe autism.
Some people might get hurt, but in the same way that a bulldozer with a brick on the accelerator might hurt people. It's a screwup, not a species-ending event.
The question is how difficult it is to jump from the stupid AI to the general AI. Does it require a hundred gradual improvements? Or could just one right improvement, in the right situation, jump across the whole abyss? Something like taking the "idiot savant golem with severe autism" who cares only about one specific goal, and replacing that goal with "understand everything, and apply this understanding to improving your own functionality"... and suddenly we have the fully general AI.
Remember that compartmentalization exists in human minds, ...