Larks comments on Superintelligence 10: Instrumentally convergent goals - Less Wrong

Post author: KatjaGrace 18 November 2014 02:00AM




Comment author: Larks 19 January 2015 04:00:04AM

This is a total nitpick, but:

Suppose your AI's goal was "preserve myself". Ignoring any philosophical issues about denotation, self-preservation here is worthwhile even if the goal changes. If the AI could maximize its chances of survival by changing itself into a paperclip maximizer (say, because of the threat of other Clippies), then it would do so. Because self-preservation is an instrumentally convergent goal, it would probably survive for quite a long time as a paperclipper - maybe much longer than as an enemy of Clippy.