XiXiDu comments on Drive-less AIs and experimentation - Less Wrong Discussion

4 Post author: whpearson 17 June 2011 02:33PM

Comment author: cousin_it 17 June 2011 03:04:09PM · 4 points

I asked a similar question some time ago. The strongest counterargument offered was that a scope-limited AI doesn't stop rogue unfriendly AIs from arising and destroying the world.

Comment author: XiXiDu 17 June 2011 04:28:51PM · 4 points

The strongest counterargument offered was that a scope-limited AI doesn't stop rogue unfriendly AIs from arising and destroying the world.

Maybe I misinterpreted the argument. If it means that we need an unbounded friendly AI to deal with unbounded unfriendly AIs, it makes more sense. The question then comes down to how likely it is that, once someone has discovered AGI, others will be able to discover it as well or make use of the discovery, versus the payoff from experimenting with bounded versions of such an AGI design before running an unbounded friendly version. In other words: how much can we increase our confidence that we have solved friendliness by experimenting with bounded versions, versus the risk of not taking over the world as soon as possible to impede unfriendly unbounded versions?