DanArmak comments on How can we ensure that a Friendly AI team will be sane enough? - Less Wrong

10 Post author: Wei_Dai 16 May 2012 09:24PM




Comment author: DanArmak 19 May 2012 04:52:12PM 0 points [-]

I thought you were merely specifying that the FAI theory was proven to produce a Friendly AI. But you're also specifying that any AGI not implementing a proven FAI theory is formally proven to be disastrous. I didn't understand that that was what you were suggesting.

Even then, a slightly different problem remains. An AGI may be Friendly to some people (presumably its builders) at the expense of others. We have no reason to think any outcome an AGI might implement would truly satisfy everyone (see other threads on CEV). So there will still be a rush for the first-mover advantage: the future will belong to the team that gets funded a week before everyone else. These conditions increase the probability that the winning team will have made a mistake, introduced a bug, or unintentionally cut corners.