
Wei_Dai comments on The metaphor/myth of general intelligence - Less Wrong Discussion

Post author: Stuart_Armstrong 18 August 2014 04:04PM




Comment author: Wei_Dai 19 August 2014 07:39:11PM 4 points

This seems to imply that general intelligences are more powerful, as it basically bakes in diminishing returns - but we haven't included effort yet. Imagine that the following three intelligences require equal effort: (10,10,10), (20,20,5), (100,5,5). Then the specialised intelligence is definitely the one you need to build.

The (100,5,5) AI seems kind of like a Hitler-AI, very good at manipulating people and taking power over human societies, but stupid about what to do once it takes over. We can imagine lots of narrow intelligences that are better at destruction than helping us reach a positive Singularity (or any kind of Singularity). We already know that FAI is harder than AGI, and if such narrow intelligences are easier than AGI, then we're even more screwed.

So let's caveat the proposition above: the most effective and dangerous type of AI might be one with a bare minimum amount of general intelligence, but an overwhelming advantage in one type of narrow intelligence.

I want to point out that a purely narrow intelligence (without even a bare minimum amount of general intelligence, i.e., a Tool-AI), becomes this type of intelligence if you combine it with a human. This is why I don't think Tool-AIs are safe.

So I would summarize my current position as this: General intelligence is possible and what we need in order to reach a positive Singularity. Narrow intelligences that are powerful and dangerous may well be easier to build than general intelligence, so yes, we should be quite concerned about them.

Comment author: cousin_it 20 August 2014 10:59:41AM 2 points

Mostly agreed, but why do you think a positive singularity requires a general intelligence? Why can't we achieve a positive singularity by using intelligence amplification, uploading and/or narrow AIs in some clever way? For example, if we can have a narrow AI that kills all humans, why can't we have a narrow AI that stops all competing AIs?

Comment author: Wei_Dai 20 August 2014 06:05:40PM 3 points

I meant general intelligence in an abstract sense, not necessarily AGI. So intelligence amplification would be covered, unless it's just amplifying a narrow area, for example making someone very good at manipulating people without increasing his or her philosophical and other abilities. (This makes me realize that IA can be quite dangerous if it's easier to amplify narrow domains.) I think uploading is just an intermediate step to either intelligence amplification or FAI, since it's hard to see how trillions of unenhanced human uploads all running very fast would be desirable or safe in the long run.

For example, if we can have a narrow AI that kills all humans, why can't we have a narrow AI that stops all competing AIs?

It's hard to imagine a narrow AI that can stop all competing AIs, but can't be used in other dangerous ways, like as a generic cyberweapon that destroys all the technological infrastructure of another country or the whole world. I don't know how the group that first develops such an AI would be able to keep it out of the hands of governments or terrorists. Once such an AI comes into existence, I think there will be a big arms race to develop countermeasures and even stronger "attack AIs". Not a very good situation unless we're just trying to buy a little bit of time until FAI or IA is developed.

Comment author: Stuart_Armstrong 20 August 2014 10:43:36AM 2 points

General intelligence is possible and what we need in order to reach a positive Singularity. Narrow intelligences that are powerful and dangerous may well be easier to build than general intelligence, so yes, we should be quite concerned about them.

Add a "may be" to the first sentence and I'm with you.