David_Gerard comments on Delayed Gratification vs. a Time-Dependent Utility Function - Less Wrong

2 Post author: momothefiddler 06 May 2012 12:32AM


Comment author: David_Gerard 06 May 2012 07:47:26AM 1 point [-]

Humans strike me as being much more like state machines than things with utility functions (cf. your noting your utility function changing when you actually act on it). How do you write a function for the output of a state machine?

Comment author: Random832 11 May 2012 02:33:32AM 1 point [-]

How do you write a function for the output of a state machine?

Monads.

Comment author: David_Gerard 11 May 2012 07:22:31AM -1 points [-]

"What's your utility function?"
"This Haskell program."

Does the use of the word "function" in "utility function" normatively include arbitrary Turing-complete things?

Comment author: Random832 14 May 2012 05:22:16PM *  0 points [-]

I don't even know any Haskell - I just have a vague idea that a monad is a function that accepts a "state" as part of its input, and returns the same kind of "state" as part of its output. But even so, the punchline was too good to resist making.
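That vague idea is roughly the State monad in particular. A minimal self-contained sketch of it (all names and the toy `utility` computation below are invented for illustration, not anything from the thread):

```haskell
-- A stateful function is s -> (a, s): it takes a state in and hands
-- a possibly-changed state back out alongside its result. The State
-- monad is just machinery for chaining such functions together.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= f = State $ \s ->
    let (a, s') = g s
    in runState (f a) s'

-- A toy "utility" whose answer depends on, and updates, a hidden
-- state (here just an Int that counts evaluations).
utility :: Int -> State Int Int
utility x = State $ \s -> (x * s, s + 1)

main :: IO ()
main = print (runState (utility 10 >>= utility) 2)
-- evaluates utility 10 in state 2 giving (20, 3),
-- then utility 20 in state 3 giving (60, 4)
```

The punchline survives: the "function" you hand back is one that threads the agent's internal state through every evaluation.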

Comment author: Vladimir_Nesov 14 May 2012 05:42:10PM *  1 point [-]

And axiom schemata of ZFC are more like scratches on paper than infinite sets. Humans are something that could be interpreted as associated with a utility function over possible states of something, but this utility function is an abstract structure, not something made out of atoms or even (a priori) computable. It can be reasoned about, but if it's too complicated, it won't be possible to make accurate inferences about it. Descriptive utility functions are normally simple summaries of behavior that don't fit very well, and you can impose arbitrary requirements on how these are defined.

Comment author: David_Gerard 14 May 2012 06:42:56PM -1 points [-]

At the moment I'm picturing a state machine where each state is a utility function (of a fairly conventional type, a bunch of variables go in and you get a "utility" value out) but if you hit a particular range of values the state, and hence function, changes. Not that I'm sure how to make this hypothesis rigorous enough even to falsify ...
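One way to make the picture concrete enough to poke at, as a sketch only: every name, threshold, and example utility below is invented purely for illustration.

```haskell
-- Each machine state ("mood") carries its own conventional utility
-- function; a utility value landing in a trigger range flips the
-- machine to a different mood, and hence a different function.
data Mood = Content | Restless deriving (Eq, Show)

-- Conventional utility functions: a bunch of variables go in,
-- a "utility" value comes out.
utilityOf :: Mood -> [Double] -> Double
utilityOf Content  xs = sum xs
utilityOf Restless xs = negate (sum xs)

-- One step of the state machine: evaluate the current mood's
-- utility function, then transition if the value hits a trigger
-- range (the thresholds here are arbitrary).
step :: Mood -> [Double] -> (Double, Mood)
step m xs =
  let u = utilityOf m xs
      m' | u > 10    = Restless
         | u < -10   = Content
         | otherwise = m
  in (u, m')
```

For instance, `step Content [6, 7]` yields utility 13, which crosses the threshold and flips the mood to `Restless`, so the very same inputs would be scored differently on the next step.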