Wei_Dai comments on The metaphor/myth of general intelligence - Less Wrong Discussion
The (100,5,5) AI seems kind of like a Hitler-AI, very good at manipulating people and taking power over human societies, but stupid about what to do once it takes over. We can imagine lots of narrow intelligences that are better at destruction than helping us reach a positive Singularity (or any kind of Singularity). We already know that FAI is harder than AGI, and if such narrow intelligences are easier than AGI, then we're even more screwed.
I want to point out that a purely narrow intelligence (without even a bare minimum amount of general intelligence, i.e., a Tool-AI), becomes this type of intelligence if you combine it with a human. This is why I don't think Tool-AIs are safe.
So I would summarize my current position as this: General intelligence is possible and what we need in order to reach a positive Singularity. Narrow intelligences that are powerful and dangerous may well be easier to build than general intelligence, so yes, we should be quite concerned about them.
Mostly agreed, but why do you think a positive singularity requires a general intelligence? Why can't we achieve a positive singularity by using intelligence amplification, uploading and/or narrow AIs in some clever way? For example, if we can have a narrow AI that kills all humans, why can't we have a narrow AI that stops all competing AIs?
I meant general intelligence in an abstract sense, not necessarily AGI. So intelligence amplification would be covered, unless it's only amplifying a narrow area, for example making someone very good at manipulating people without increasing his or her philosophical and other abilities. (This makes me realize that IA can be quite dangerous if it's easier to amplify narrow domains.) I think uploading is just an intermediate step to either intelligence amplification or FAI, since it's hard to see how trillions of unenhanced human uploads all running very fast would be desirable or safe in the long run.
It's hard to imagine a narrow AI that can stop all competing AIs but can't be used in other dangerous ways, for example as a generic cyberweapon that destroys all the technological infrastructure of another country or of the whole world. I don't know how the group that first develops such an AI would be able to keep it out of the hands of governments or terrorists. Once such an AI comes into existence, I think there will be a big arms race to develop countermeasures and even stronger "attack AIs". Not a very good situation, unless we're just trying to buy a little bit of time until FAI or IA is developed.
Add a "may be" to the first sentence and I'm with you.