lukeprog comments on AGI Quotes - Less Wrong Discussion

6 Post author: lukeprog 02 November 2011 08:25AM

Comment author: lukeprog 29 January 2014 06:02:06PM 0 points [-]

Some philosophers and scholars who study and speculate on the [intelligence explosion]... maintain that this question is simply a matter of ensuring that AI is created with pro-human tendencies. If, however, we are creating an entity with greater than human intelligence that is capable of designing its own newer, better successors, why should we assume that human-friendly programming traits will not eventually fall by the wayside?

Al-Rodhan (2011), pp. 242-243, noting the stable self-modification problem.