turchin comments on Existential risk from AI without an intelligence explosion

Post author: AlexMennen 25 May 2017 04:44PM

Comment author: turchin 25 May 2017 06:08:58PM 3 points

There could be many ways an AI could produce human extinction without undergoing an intelligence explosion. Even a relatively simple computer program that helps a biohacker engineer new deadly biological viruses in droves could kill everybody.

I tried to list the different ways AI could kill humanity here:

http://lesswrong.com/lw/mgf/a_map_agi_failures_modes_and_levels/

and am now working on transforming this map into a proper article. The draft is ready.