Johnicholas comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky | 11 May 2012 04:31AM | 256 points

Comment author: Johnicholas 12 May 2012 01:27:25PM 5 points

There are two uses of 'utility function'. One is analogous to Daniel Dennett's "intentional stance": you can choose to interpret an entity as having a utility function. This is always possible, but it is not necessarily a perspicuous way of understanding an entity, because you might end up with utility functions like "enjoys running in circles but is equally happy being prevented from running in circles".
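As a toy illustration of why such an imputed function is degenerate (the names and values below are my own hypothetical example, not anything from Dennett): it is consistent with the observed behavior, but it constrains nothing, because it is indifferent between the very outcomes it is supposed to explain.

```python
# Hypothetical toy example: a utility function imputed to an entity
# under the intentional stance. It assigns equal value to running in
# circles and to being prevented from running in circles, so it is
# consistent with the behavior while predicting nothing about it.
def imputed_utility(outcome: str) -> float:
    values = {
        "running_in_circles": 1.0,
        "prevented_from_running_in_circles": 1.0,  # "equally happy"
    }
    return values.get(outcome, 0.0)
```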

The second form is as an explicit component within an AI design. Tool-AIs do not contain such a component; they might have a relevance or accuracy function for evaluating answers, but it's not a utility function over the world.
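A minimal sketch of that structural difference, under a deliberately simplified agent/tool split (all class and function names here are hypothetical illustrations, not drawn from any actual design):

```python
# Hypothetical sketch of the distinction; not any real system's design.

class AgentAI:
    """Contains an explicit utility function over world states and
    chooses actions to steer the world toward high-utility states."""

    def __init__(self, utility, actions, predict):
        self.utility = utility    # world state -> number
        self.actions = actions    # available actions
        self.predict = predict    # (state, action) -> predicted next state

    def act(self, state):
        # Pick the action whose predicted world state scores highest.
        return max(self.actions,
                   key=lambda a: self.utility(self.predict(state, a)))


class ToolAI:
    """Contains only a scoring function over candidate answers.
    It ranks answers and reports the best one."""

    def __init__(self, accuracy):
        self.accuracy = accuracy  # (question, answer) -> relevance/accuracy

    def answer(self, question, candidates):
        return max(candidates,
                   key=lambda ans: self.accuracy(question, ans))
```

The point of the sketch is where the maximization happens: AgentAI maximizes over predicted world states, while ToolAI maximizes only over a fixed list of candidate answers, so its score never refers to states of the outside world.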

Comment author: NancyLebovitz 12 May 2012 04:11:27PM 3 points

> because you might end up with utility functions like "enjoys running in circles but is equally happy being prevented from running in circles".

Is that a problem so long as some behaviors are preferred over others? You could have "is neutral about running in circles, but resists jumping up and down and prefers making abstract paintings".

> Tool-AIs do not contain such a component; they might have a relevance or accuracy function for evaluating answers, but it's not a utility function over the world.

Wouldn't that depend on the Tool-AI? Eliezer's default no-akrasia AI does everything it can to fulfill its utility function. You presumably want it to be as accurate as possible, or perhaps only as accurate as is useful. Would it be a problem for it to ask for more resources? To earn money on its own initiative for more resources? To lobby to get laws passed to give it more resources? At some point, it's a problem if it's going to try to rule the world to get more resources...

Comment author: CuSithBell 12 May 2012 04:39:31PM 6 points

> Tool-AIs do not contain such a component; they might have a relevance or accuracy function for evaluating answers, but it's not a utility function over the world.

> Wouldn't that depend on the Tool-AI?

I think this is explicitly part of the "Tool-AI" definition: it is not a Utility Maximizer.