
denimalpaca comments on Could utility functions be for narrow AI only, and downright antithetical to AGI? - Less Wrong Discussion

Post author: chaosmage 16 March 2017 06:24PM




Comment author: denimalpaca 17 March 2017 05:51:57PM 0 points

I think you're getting stuck on the idea of one utility function. I like to think humans have many, many utility functions. Some we outgrow, some we "restart" from time to time. For the former, think of a baby learning to walk. There is a utility function, or something very much like it, that gets the baby from sitting to crawling to walking. Once the baby learns how to walk, though, the utility function is no longer useful; the goal has been met. Now this action moves from being modeled by a utility function to a known action that can be used as input to other utility functions.
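(A minimal Python sketch of the "many utility functions, some of which retire" idea above. Nothing here is from the original comment; the class, thresholds, and state variables are all hypothetical, chosen only to show one UF retiring once its goal is met and a later UF building on the learned action.)

```python
class UtilityFunction:
    def __init__(self, name, utility, goal_threshold):
        self.name = name
        self.utility = utility            # function: state -> float
        self.goal_threshold = goal_threshold
        self.retired = False

    def evaluate(self, state):
        score = self.utility(state)
        if score >= self.goal_threshold:
            self.retired = True           # goal met; this UF stops driving behavior
        return score


# "Learning to walk": utility tracks walking skill and retires once mastered.
walk = UtilityFunction("walk", lambda s: s["walking_skill"], goal_threshold=1.0)

# A later UF treats walking as a known action it can build on.
explore = UtilityFunction(
    "explore",
    lambda s: s["distance_covered"] if s["walking_skill"] >= 1.0 else 0.0,
    goal_threshold=100.0,
)

state = {"walking_skill": 0.2, "distance_covered": 0.0}
for step in range(12):
    state["walking_skill"] = min(1.0, state["walking_skill"] + 0.1)
    if walk.retired:
        state["distance_covered"] += 5.0  # walking is now just an input to exploring
    active = [uf for uf in (walk, explore) if not uf.retired]
    print(step, {uf.name: uf.evaluate(state) for uf in active})
```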

As best as I can tell, human general intelligence comes from many small intelligences acting in a cohesive way. The brain is structured like this, as a bunch of different sections that each do very specific things. Machine learning models are moving in this direction too; DeepMind's Go neural net playing versions of itself to get better is a good example.
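(The self-play mention above refers to AlphaGo-style training. The sketch below is not DeepMind's actual method; it is only a toy "play a frozen copy of yourself and keep the better version" loop, with a made-up play_game and improve standing in for a real game and a real training step.)

```python
import copy
import random

def play_game(policy_a, policy_b):
    """Toy stand-in for a real game: the stronger policy wins more often."""
    p_a = policy_a["strength"] / (policy_a["strength"] + policy_b["strength"])
    return "a" if random.random() < p_a else "b"

def improve(policy):
    """Crude stand-in for a training step on self-play data."""
    new = copy.deepcopy(policy)
    new["strength"] = max(0.01, new["strength"] + random.uniform(-0.05, 0.15))
    return new

champion = {"strength": 1.0}
for generation in range(20):
    challenger = improve(champion)
    wins = sum(play_game(challenger, champion) == "a" for _ in range(200))
    if wins > 100:            # the new version beat its frozen predecessor
        champion = challenger
    print(generation, round(champion["strength"], 3))
```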

Comment author: TheAncientGeek 20 March 2017 10:13:34PM 0 points

You could model humans as having varying UFs, or having multiple UFs... or you could give up on the whole idea.

Comment author: denimalpaca 21 March 2017 06:46:20PM 0 points

Why would I give up the whole idea? I think you're correct in that you could model a human with multiple, varying UFs. Is there another way you know of to guide an intelligence toward a goal?

Comment author: TheAncientGeek 22 March 2017 02:57:47PM 2 points

The basic problem is the endemic confusion between the map (the UF as a way of modelling an entity) and the territory (the UF as an architectural feature that makes certain things happen).

The fact that there are multiple ways of modelling humans as UF-driven, and the fact that they are all a bit contrived, should be a hint that there may be no territory corresponding to the map.

Comment author: denimalpaca 22 March 2017 06:00:08PM 0 points

Is there an article that presents multiple models of UF-driven humans and demonstrates that the contrivance you point to really implies there is no territory corresponding to the map? Right now your statement doesn't have enough detail to convince me that UF-driven humans are a bad model.

And you didn't answer my question: is there another way, besides UFs, to guide an agent toward a goal? It seems to me that the idea of moving toward a goal implies a utility function, be it hunger or human-programmed.
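(A small illustration of that last claim, not from the thread: even a rule as simple as "step toward the goal" behaves as if it maximizes a utility. The utility below, negative distance to the goal, is implied by the rule rather than programmed in; the names and numbers are hypothetical.)

```python
def implied_utility(position, goal):
    # The utility the "step toward the goal" rule implicitly maximizes.
    return -abs(position - goal)

def step_toward(position, goal, step_size=1.0):
    # No explicit utility here, just a rule that moves toward the goal.
    if position < goal:
        return position + min(step_size, goal - position)
    if position > goal:
        return position - min(step_size, position - goal)
    return position

position, goal = 0.0, 7.5
while position != goal:
    new_position = step_toward(position, goal)
    # Each step strictly increases the implied utility until the goal is reached.
    assert implied_utility(new_position, goal) > implied_utility(position, goal)
    position = new_position
    print(position, implied_utility(position, goal))
```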