
tukabel comments on Existential risk from AI without an intelligence explosion

Post author: AlexMennen 25 May 2017 04:44PM


Comment author: tukabel 27 May 2017 08:49:45PM 0 points

sure, "dumb" AI helping humanimals to amplify the detrimental consequences of their DeepAnimalistic brain reward functions is actually THE risk for the normal evolutionary step, called Singularity (in the Grand Theatre of the Evolution of Intelligence the only purpose of our humanimal stage is to create our successor before reaching the inevitable stage of self-destruction with possible planet-wide consequences)