Error comments on A definition of wireheading - Less Wrong

35 Post author: Anja 27 November 2012 07:31PM




Comment author: Error 14 December 2012 12:07:35AM 0 points [-]

Does this simplify to the AI obeying: "Modify my utility function if and only if the new version is likely to result in more utility according to the current version?"

If so, something about it feels wrong. For one thing, I'm not sure how an AI following such a rule would ever conclude it should change the function. If it can only make changes that result in maximizing the current function, why not just keep the current one and continue maximizing it?
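The rule being discussed can be sketched in a few lines of code. This is a toy illustration only, not anything from the post: all names and the example payoffs are mine, and "expected utility" is collapsed to evaluating a single best action.

```python
# Toy sketch of the rule under discussion: adopt a new utility function
# only if the behavior it would induce scores higher according to the
# *current* utility function. (Illustrative names and values only.)

def best_action(utility, actions):
    """The action a maximizer of this utility function would pick."""
    return max(actions, key=utility)

def should_self_modify(current_u, candidate_u, actions):
    """Adopt candidate_u only if the behavior it induces looks better
    to current_u than current behavior does."""
    act_now = best_action(current_u, actions)
    act_new = best_action(candidate_u, actions)
    return current_u(act_new) > current_u(act_now)

# Example: the candidate function maximizes a different quantity, so the
# behavior it induces looks worse to the current function and the
# modification is rejected -- matching the intuition above.
actions = [(1, 9), (5, 5), (9, 1)]      # (paperclips, staples) outcomes
paperclip_u = lambda a: a[0]            # current function
staple_u = lambda a: a[1]               # candidate function
print(should_self_modify(paperclip_u, staple_u, actions))  # False
```

As the sketch suggests, the test almost always returns `False`: the current function's own optimum is, by construction, hard to beat by its own lights.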

Comment author: falenas108 14 December 2012 05:52:00AM -1 points [-]

That's the point: it would almost never change its underlying utility function. Once we have a provably friendly FAI, we wouldn't want it to change the part that makes it friendly.

Now, it could still change how it goes about achieving its utility function, as long as that helps it get more utility, so it would still be self-modifying.

There is a chance that it could change (e.g. if you were naturally a two-boxer on Newcomb's Problem, you might self-modify to become a one-boxer). But those cases are rare.
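The Newcomb case can be made concrete with the standard thought-experiment payoffs (a perfect predictor, $1,000 in the transparent box, $1,000,000 in the opaque box iff one-boxing is predicted). This is a toy sketch, not anything from the post:

```python
# Toy Newcomb's Problem with a perfect predictor: the box contents
# depend on the disposition the agent will have, so a money-maximizer
# endorses self-modifying from two-boxing to one-boxing *by its own
# current utility function*. Standard thought-experiment payoffs.

def payoff(disposition):
    """Money received, given the disposition a perfect predictor sees."""
    box_a = 1_000                                        # always present
    box_b = 1_000_000 if disposition == "one-box" else 0 # predictor's fill
    if disposition == "one-box":
        return box_b
    return box_a + box_b

print(payoff("two-box"))  # 1000
print(payoff("one-box"))  # 1000000
```

Since 1,000,000 > 1,000, the current (money-maximizing) function itself approves the modification, which is what makes this one of the rare cases where the rule permits a change.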