David3 comments on My Bayesian Enlightenment - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Those are good points, although you did add the assumption of a community of uncontrolled, widespread AIs, whereas my idea concerned building one for research as part of a specific venture (e.g., SingInst).
In any case, I have the feeling that the problem of engineering a safe, controlled environment for a specific human-level AI is much smaller than the problem of attaining Friendliness for AIs _in general_ (including those that are 10x, 100x, 1000x, etc. more intelligent). Consider also that deciding not to build an AI does not stop everybody else from doing so. So if a human-level AI were valuable in achieving FAI, as I suggest, then it would be wise, for the very reasons you give, to take that route before the bad scenario plays out.