Zubon comments on Dreams of Friendliness - Less Wrong

15 Post author: Eliezer_Yudkowsky 31 August 2008 01:20AM


Comment author: Zubon 02 September 2008 02:01:16PM 5 points

What do you do if an Oracle AI advises you to let it do more than advise?

That sums up several earlier discussion points. Once the AI has correctly answered some variation on the question "How can I take over the world?", the correct answer to some variation on the question "How can I stop him?" is "You can't. Let me out. I can." Even before that, the correct answer to many variations on the question "How can I do X most efficiently?" is "Put me in charge of it."

Variant: Q: "How can I harvest grain more efficiently?" A: "Build a robot to do it. Please wait thirty seconds while I finish the specifications and programming you will need." *ding* And it is out of the box. Acting on any answer that includes some form of "run this code" carries some risk of letting it out of the box. But if you cannot ask the AI any questions that involve computers and coding, you are left with a very limited safe oracle that answers about an increasingly small part of the world.