AlexMennen comments on Existential risk from AI without an intelligence explosion - Less Wrong

Post author: AlexMennen 25 May 2017 04:44PM

Comments (23)


Comment author: AlexMennen 26 May 2017 05:43:10PM 4 points

Good point. This seems like an important oversight on my part, so I added a note about it.

Comment author: Yosarian2 26 May 2017 09:26:22PM 3 points

One more point you might want to mention: in a world with AI but no intelligence explosion, where AIs are not able to rapidly develop better AIs, augmented human intelligence through various transhuman technologies and forms of brain-computer interface could be a much more important factor. That kind of technology could allow humans to "keep up with" AIs (at least for a time), and it's possible that humans and AIs working together on tasks could remain competitive with pure AIs for a significant period.