Strange7 comments on Fake Utility Functions - Less Wrong

Post author: Eliezer_Yudkowsky 06 December 2007 04:55PM


Comments (54)


Comment author: Sam_Bhagwat 07 December 2007 04:19:54AM 0 points

"Bootstrap the FAI by first building a neutral obedient AI(OAI) that is constrained in such a way that it doesn't act besides giving answers to questions."

As long as we make sure not to feed it questions that are too hard: specifically, questions that are hard to answer a priori without actually doing something. (E.g., an AI that tried to plan the economy would likely find it impossible to define, and thus solve, the relevant equations without being able to adjust some parameters.)

Comment author: Strange7 31 August 2012 04:37:26AM 1 point

If you have to avoid asking the AI too hard a question to stop it from taking over the world, you've already done something wrong. The important part is to design the oracle AI such that it is capable of admitting that it can't figure out an adequate answer with its currently available resources, and then moving on to the next question.
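The design Strange7 describes can be sketched in miniature: an answer loop with a hard resource budget that returns an explicit "no adequate answer" result rather than seeking more resources. This is purely my own illustration, not anything from the thread; `solver` is a hypothetical iterative refiner, and the budget and threshold are arbitrary.

```python
def oracle_answer(question, solver, budget_steps=1000, threshold=0.9):
    """Return (answer, confidence) if the solver reaches the confidence
    threshold within budget_steps; otherwise return None, i.e. the oracle
    explicitly declines rather than escalating its resource use.
    `solver(question)` is assumed to yield (answer, confidence) pairs."""
    for step, (answer, confidence) in enumerate(solver(question)):
        if confidence >= threshold:
            return answer, confidence
        if step + 1 >= budget_steps:
            break  # budget exhausted: admit failure instead of trying harder
    return None


def answer_queue(questions, solver, **kwargs):
    """Process questions in order, recording None for any question
    the oracle declines, then moving on to the next one."""
    return {q: oracle_answer(q, solver, **kwargs) for q in questions}
```

The key design choice is that running out of budget is a normal, expected return value (`None`), not an error state the system is motivated to avoid, which matches the comment's point that the safe behavior is to give up and move on.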