Giles comments on Desired articles on AI risk? - Less Wrong Discussion
"Why If Your AGI Doesn't Take Over The World, Somebody Else's Soon Will"
i.e. however good your safeguards are, they don't help if:
EDIT: "safeguard" here means any design feature added to prevent the AGI from obtaining singleton status.