
whpearson comments on Existential risk from AI without an intelligence explosion - Less Wrong

Post author: AlexMennen 25 May 2017 04:44PM


Comment author: whpearson 25 May 2017 06:12:36PM

One possibility would be for the malign intelligence to take over the world by orchestrating a nuclear war, having made itself sufficiently hardened/advanced that it could survive and develop more quickly in the aftermath.

I personally don't think writing down a goal gives us any predictability without a lot of work, which may or may not be possible. Specifying a goal assumes that the AI's perceptual/classification systems chop up the world in the same way we do (something we have no formal specification of, and which changes over time). We would also need to solve the ontology identification problem.

I'm of the opinion that intelligence might need to be self-programming on a micro, subconscious level, which might make self-improvement hard on a macro level. So I think we should plan for non-fooming scenarios.