
Yosarian2 comments on Existential risk from AI without an intelligence explosion - Less Wrong

12 points · Post author: AlexMennen · 25 May 2017 04:44PM


Comment author: Yosarian2 · 26 May 2017 10:01:10AM · 3 points

Another big difference is that if there's no intelligence explosion, we're probably not talking about a singleton. If someone manages to create an AI that's, say, roughly human-level intelligence (probably stronger in some areas and weaker in others, but human-ish on average) and progress slows or stalls after that, then the most likely scenario is that a lot of those human-level AIs would be created and sold for different purposes all over the world. We would probably be dealing with a complex world in which many different AIs and humans interact with each other. That could create its own risks, but they would probably have to be handled in a different way.

Comment author: AlexMennen · 26 May 2017 05:43:10PM · 4 points

Good point. This seems like an important oversight on my part, so I added a note about it.

Comment author: Yosarian2 · 26 May 2017 09:26:22PM · 3 points

Thanks.

One more point you might want to mention is that in a world with AI but no intelligence explosion, where AIs are not able to rapidly develop better AIs, augmented human intelligence through various transhuman technologies and brain/computer interfaces could be a much more important factor. That kind of technology could allow humans to "keep up with" AIs (at least for a time), and it's possible that humans and AIs working together on tasks could remain competitive with pure AIs for a significant period.