Tim_Tyler comments on Invisible Frameworks - Less Wrong

Post author: Eliezer_Yudkowsky 22 August 2008 03:36AM


Comment author: Tim_Tyler 24 August 2008 08:30:44PM 0 points

Re: AIs not "wanting" to change their goals

Humans can and do change their goals - e.g. religious conversions.

However, I expect to see less of that in more advanced agents.

If we build an AI to perform some task, we will want it to do what we tell it - not decide to go off and do something else.

An AI that forgets what it was built to do is normally broken. We could build such systems - but why would we want to?

As Omohundro says: expected utility maximisers can be expected to back up and defend their goals. Changing your goals is normally a serious hit to future utility, from the current perspective - clearly something to be avoided at all costs.
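To make the point concrete, here is a minimal, purely illustrative sketch (toy numbers of my own, not Omohundro's): an agent scoring possible futures under its *current* utility function will rate "keep my goals" above "switch to some other goal", simply because the alternative goal directs effort away from what it currently values.

```python
# Toy illustration: an expected-utility maximiser evaluates the option of
# changing its own goals *using its current utility function*.
# All probabilities and payoffs are made up for illustration.

def expected_utility(current_utility, future):
    """Score a probabilistic future under the agent's current utility function."""
    return sum(p * current_utility(outcome) for outcome, p in future)

# Current goal: value how much of the assigned task gets done.
current_utility = lambda outcome: outcome["task_done"]

# Future if the agent keeps its goal: it keeps optimising for the task.
keep_goal = [({"task_done": 10}, 0.9), ({"task_done": 2}, 0.1)]

# Future if it adopts a different goal: effort goes elsewhere,
# so little of what the *current* goal values gets done.
change_goal = [({"task_done": 1}, 0.9), ({"task_done": 0}, 0.1)]

print(expected_utility(current_utility, keep_goal))    # 9.2
print(expected_utility(current_utility, change_goal))  # 0.9
# The goal change scores far worse from the current perspective,
# so the maximiser declines it - the goal-preservation drive.
```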

FWIW, Omohundro claims his results are pretty general - and I tend to agree with him. I don't see the use of an economic framework as a problem - microeconomics itself is pretty general and broadly applicable.