shokwave comments on Tallinn-Evans $125,000 Singularity Challenge - Less Wrong

27 Post author: Kaj_Sotala 26 December 2010 11:21AM




Comment author: shokwave 29 December 2010 03:08:28PM 1 point [-]

Hard-coding onto chips, or even making specific structures electromechanical in nature, is one way humans could make an AI "explicitly forbidden to self-modify". I estimated that one in every four AGI projects will want to forbid their project from self-modification. I thought this was optimistic; I haven't seen any discussion of fixed AGI, though it might be something military research and development is interested in.

Comment author: JoshuaZ 29 December 2010 03:29:24PM 1 point [-]

My point was that in some cases, even where people aren't thinking about self-modification, self-modification won't happen by default.