David3 comments on My Bayesian Enlightenment - Less Wrong

Post author: Eliezer_Yudkowsky 05 October 2008 04:45PM

Comment author: David3 08 October 2008 06:45:53PM 0 points

Those are good points, although you did add the assumption of a community of uncontrolled, widespread AIs, whereas my idea concerned building one for research as part of a specific venture (e.g. SingInst).

In any case, I have the feeling that the problem of engineering a safe, controlled environment for a specific human-level AI is much smaller than the problem of attaining Friendliness for AIs _in general_ (including those that are 10x, 100x, 1000x, etc. more intelligent). Consider also that deciding not to build an AI does not stop everybody else from doing so; if a human-level AI were valuable in achieving FAI, as I suggest, then it would be wise, for the very reasons you give, to take that route before the bad scenario plays out.