Mitchell_Porter comments on What I Think, If Not Why - Less Wrong

Post author: Eliezer_Yudkowsky 11 December 2008 05:41PM

Comment author: Mitchell_Porter 12 December 2008 12:14:00PM 1 point

I don't understand the skepticism (expressed in some comments) about the possibility of a superintelligence with a stable top goal. Consider that classic computational architecture, the expected-utility maximizer. Such an entity can be divided into a part which evaluates possible world-states for their utility (their "desirability"), according to some exact formula or criterion, and a part which tries to solve the problem of maximizing that utility by acting on the world. For the goal to change, one of two things has to happen: either the utility function - the goal-encoding formula - is changed, or the interpretation of that formula - its mapping onto world-states - is changed. And it doesn't require much intelligence to see that either of these changes will be bad, as judged from the perspective of the current utility function as currently interpreted. Therefore, preventing such changes is an elementary instrumental subgoal, almost as elementary as physical self-preservation.
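A minimal sketch (my illustration, not part of the original comment) of the architecture described above: the agent is split into a utility function that scores world-states and a planner that picks the action with the highest expected utility under the agent's world model. The action names, the toy world model, and the paperclip-flavored utility function are all assumptions made for the example. The point it illustrates is that an action which would let the goal be rewritten is evaluated by the current utility function over its predicted consequences, so it scores poorly, and goal preservation falls out as an instrumental subgoal.

    from typing import Callable, Dict, List, Tuple

    # A world-state is just a bag of measurable quantities in this toy example.
    WorldState = Dict[str, float]


    def utility(state: WorldState) -> float:
        """The goal-encoding formula: score a world-state for desirability."""
        return state.get("paperclips", 0.0)


    def choose_action(
        actions: List[str],
        model: Callable[[str], List[Tuple[float, WorldState]]],
        u: Callable[[WorldState], float],
    ) -> str:
        """The maximizing part: pick the action with the highest expected utility,
        where the expectation is taken under the agent's world model."""
        def expected_utility(action: str) -> float:
            return sum(p * u(s) for p, s in model(action))
        return max(actions, key=expected_utility)


    def toy_model(action: str) -> List[Tuple[float, WorldState]]:
        """Illustrative world model: each action maps to (probability, outcome) pairs."""
        if action == "make_paperclips":
            return [(1.0, {"paperclips": 10.0})]
        if action == "let_goal_be_rewritten":
            # Predicted consequence of a goal change: the successor agent
            # stops making paperclips, which the CURRENT utility function
            # scores as bad, even though the successor would be "satisfied".
            return [(1.0, {"paperclips": 0.0, "staples": 100.0})]
        return [(1.0, {"paperclips": 0.0})]


    if __name__ == "__main__":
        actions = ["make_paperclips", "let_goal_be_rewritten", "do_nothing"]
        print(choose_action(actions, toy_model, utility))  # -> "make_paperclips"

Note that nothing in the planner refers to a "goal-preservation rule"; rejecting the goal change is just ordinary expected-utility maximization under the current utility function.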