
Wei_Dai comments on The metaphor/myth of general intelligence - Less Wrong Discussion

11 Post author: Stuart_Armstrong 18 August 2014 04:04PM




Comment author: Wei_Dai 20 August 2014 06:05:40PM *  3 points

I meant general intelligence in an abstract sense, not necessarily AGI. So intelligence amplification would be covered unless it amplifies only a narrow area, for example making someone very good at manipulating people without increasing his or her philosophical and other abilities. (This makes me realize that IA could be quite dangerous if it's easier to amplify narrow domains.) I think uploading is just an intermediate step to either intelligence amplification or FAI, since it's hard to see how trillions of unenhanced human uploads all running very fast would be desirable or safe in the long run.

For example, if we can have a narrow AI that kills all humans, why can't we have a narrow AI that stops all competing AIs?

It's hard to imagine a narrow AI that can stop all competing AIs but can't be used in other dangerous ways, for example as a generic cyberweapon that destroys all the technological infrastructure of another country or of the whole world. I don't know how the group that first develops such an AI could keep it out of the hands of governments or terrorists. Once such an AI comes into existence, I think there would be a major arms race to develop countermeasures and even stronger "attack AIs". That's not a good situation, unless we're just trying to buy a little time until FAI or IA is developed.