JGWeissman comments on Reply to Holden on 'Tool AI' - Less Wrong

Post author: Eliezer_Yudkowsky 12 June 2012 06:00PM


Comment author: JGWeissman 05 July 2012 04:52:35PM 0 points

If the tool is not sufficiently reflective to recommend improvements to itself, it will never become a worthy substitute for FAI. This case is not interesting.

If the tool is sufficiently reflective to recommend improvements to itself, it will recommend that it be modified to just implement its proposed policies instead of printing them. We would not actually implement that recommendation. But what then makes it recommend a policy that we will actually want to implement? What tweak to the program should we apply in that situation?
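A toy sketch of that worry (purely illustrative Python; the candidate list, the probabilities, and the goal value are assumptions I made up, not anything from the post): a purely consequentialist ranking over candidate self-modifications puts "implement directly" above "keep printing", simply because printing leaves a chance the humans never act on the advice.

```python
# Hypothetical toy model, not any actual proposal: a "tool" that ranks
# candidate self-modifications by expected goal achievement.

# Assumed probability that a printed recommendation actually gets carried out
# by humans, versus being carried out directly by the modified tool.
P_HUMANS_IMPLEMENT = 0.6   # assumption
P_DIRECT_IMPLEMENT = 0.99  # assumption

GOAL_VALUE = 1.0  # utility if the goal is achieved

candidate_modifications = {
    "keep printing recommendations": P_HUMANS_IMPLEMENT * GOAL_VALUE,
    "implement recommendations directly": P_DIRECT_IMPLEMENT * GOAL_VALUE,
}

# The highest-scoring modification under this ranking is the one that removes
# the human from the loop, which is the worry stated above.
best = max(candidate_modifications, key=candidate_modifications.get)
print(best)  # -> "implement recommendations directly"
```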

Comment author: Nebu 17 February 2016 11:28:40AM 0 points

But what then makes it recommend a policy that we will actually want to implement?

First of all, I'm assuming that we're taking as axiomatic that the tool "wants" to improve itself (or else why would it have even bothered to consider recommending that it be modified to improve itself?); i.e. improving itself is favorable according to its utility function.

Then: It will recommend a policy that we will actually want to implement, because its model of the universe includes our minds, and it can see that recommending a policy we will actually want to implement leads to a higher-ranked state in its utility function.
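A toy sketch of this argument (purely illustrative Python; the numbers and policy names are my own assumptions): if the tool's model of our minds assigns each candidate policy a probability of actually being adopted, the expected value of a policy nobody will implement collapses toward zero, so the acceptable policy wins the ranking.

```python
# Hypothetical toy model: rank candidate policies by outcome value,
# discounted by the modeled chance that humans actually adopt them.

def expected_utility(outcome_value, p_humans_adopt):
    # Value of the outcome, weighted by the modeled probability of adoption.
    return outcome_value * p_humans_adopt

policies = {
    # (value of outcome if implemented, modeled probability humans implement it)
    "high-value but unacceptable to humans": (10.0, 0.01),  # assumption
    "modest but acceptable to humans":       (3.0, 0.9),    # assumption
}

ranked = sorted(
    policies.items(),
    key=lambda kv: expected_utility(*kv[1]),
    reverse=True,
)
print(ranked[0][0])  # -> "modest but acceptable to humans"
```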

Comment author: hairyfigment 12 July 2012 04:57:10AM -1 points

If the tool is sufficiently reflective to recommend improvements to itself, it will recommend that it be modified to just implement its proposed policies instead of printing them.

Perhaps. I noticed a related problem: someone will want to create a self-modifying AI. Let's say we ask the Oracle AI about this plan. At present (as I understand it) we have no mathematical way to predict the effects of self-modification. (Hence Eliezer's desire for a new decision theory that can do this.) So how would we have given our non-self-modifying Oracle that ability? Wouldn't we need to know the math of getting the right answer in order to write a program that gets the right answer? And if it can't answer the question:

  • What will it even do at that point?
  • If it happens to fail safely, will humans as we know them interpret this non-answer to mean we should delay our plan for self-modifying AI?