Giles comments on Desired articles on AI risk? - Less Wrong Discussion

13 Post author: lukeprog 02 November 2012 05:39AM

Comment author: Giles 02 November 2012 03:19:12PM, 12 points

"Why If Your AGI Doesn't Take Over The World, Somebody Else's Soon Will"

i.e. however good your safeguards are, they don't help if:

  • another team can take your source code and remove the safeguards (and they may have incentives to do so)
  • multiple discovery means your AGI invention will soon be followed by ten independent ones, at least one of which will lack the necessary safeguards

EDIT: "safeguard" here means any design feature put in place to prevent the AGI from obtaining singleton status.