Johnicholas comments on Thoughts on the Singularity Institute (SI) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
There are two uses of 'utility function'. One is analogous to Daniel Dennett's "intentional stance" in that you can choose to interpret an entity as having a utility function - this is always possible but not necessarily a perspicuous way of understanding an entity - because you might end up with utility functions like "enjoys running in circles but is equally happy being prevented from running in circles".
The second form is as an explicit component within an AI design. Tool-AIs do not contain such a component - they might have a relevance or accuracy function for evaluating answers, but it's not a utility function over the world.
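The distinction might be easier to see in code. Here is a minimal, purely illustrative sketch (the function names, toy world model, and scoring rules are my own assumptions, not anyone's actual design): an agent with an explicit utility function ranks *world states* reached by its actions, while a tool only ranks *candidate answers* by an accuracy measure.

```python
def agent_step(utility, world, actions, transition):
    """Agent with an explicit utility function over world states:
    picks the action whose resulting world it values most."""
    return max(actions, key=lambda a: utility(transition(world, a)))

def tool_answer(accuracy, question, candidates):
    """Tool-AI: no utility over the world, just a scoring function
    over candidate answers; it reports the best-scoring answer."""
    return max(candidates, key=lambda ans: accuracy(question, ans))

# Toy usage (all values hypothetical).
world = 0
actions = [-1, 0, 1]
transition = lambda w, a: w + a
utility = lambda w: -abs(w - 3)          # this agent prefers worlds near 3
print(agent_step(utility, world, actions, transition))  # → 1

accuracy = lambda q, ans: -abs(ans - q)  # closer answer = more "accurate"
print(tool_answer(accuracy, 2.0, [1.0, 2.1, 5.0]))      # → 2.1
```

The point of the sketch: `agent_step`'s argmax runs over consequences in the world, so giving it more resources or influence raises its score; `tool_answer`'s argmax never leaves the space of answers.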
Is that a problem so long as some behaviors are preferred over others? You could have "is neutral about running in circles, but resists jumping up and down and prefers making abstract paintings".
Wouldn't that depend on the Tool-AI? Eliezer's default no-akrasia AI does everything it can to fulfill its utility function. You presumably want it to be as accurate as possible, or perhaps as accurate as useful. Would it be a problem for it to ask for more resources? To earn money on its own initiative for more resources? To lobby to get laws passed to give it more resources? At some point, it's a problem if it's going to try to rule the world to get more resources...
I think it's explicitly part of the "Tool-AI" definition that it is not a Utility Maximizer.