David_Gerard comments on Delayed Gratification vs. a Time-Dependent Utility Function - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Humans strike me as being much more like state machines than things with utility functions (cf. you noting your utility function changing when you actually act on it). How do you write a function for the output of a state machine?
Monads.
"What's your utility function?"
"This Haskell program."
Does the use of the word "function" in "utility function" normatively include arbitrary Turing-complete things?
I don't even know any Haskell - I just have a vague idea that a monad is a function that accepts a "state" as part of its input, and returns the same kind of "state" as part of its output. But even so, the punchline was too good to resist making.
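For what it's worth, that vague idea is roughly right. Here's a minimal hand-rolled sketch of the state-threading kernel the State monad packages up, with no actual monad machinery; the "mood" state and the arithmetic are invented purely for illustration:

```haskell
-- A state-threading "utility function": it takes the current internal
-- state along with the outcome being evaluated, and returns both a
-- utility value and an updated state. (This s -> (a, s) shape is
-- exactly what Haskell's State monad wraps.)
type Mood = Int

utility :: Mood -> Int -> (Int, Mood)
utility mood outcome = (outcome + mood, mood + outcome)

-- Threading the state through a sequence of evaluations by hand,
-- so each evaluation changes how the next one is scored:
evalAll :: Mood -> [Int] -> [Int]
evalAll _ [] = []
evalAll mood (o:os) =
  let (u, mood') = utility mood o
  in  u : evalAll mood' os
```

So `evalAll 0 [1, 2, 3]` yields `[1, 3, 6]`: the same outcome gets a different utility depending on the history, which is the point of the state-machine objection above.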
And axiom schemata of ZFC are more like scratches on paper than infinite sets. Humans are something that could be interpreted as associated with a utility function over possible states of something, but this utility function is an abstract structure, not something made out of atoms or even (a priori) computable. It can be reasoned about, but if it's too complicated, it won't be possible to make accurate inferences about it. Descriptive utility functions are normally simple summaries of behavior that don't fit very well, and you can impose arbitrary requirements on how these are defined.
At the moment I'm picturing a state machine where each state is a utility function (of a fairly conventional type, a bunch of variables go in and you get a "utility" value out) but if you hit a particular range of values the state, and hence function, changes. Not that I'm sure how to make this hypothesis rigorous enough even to falsify ...
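Something like this, maybe? A toy two-state version in Haskell, where the modes, the per-mode utility functions, and the [0, 10] trigger range are all made up for illustration:

```haskell
-- A hypothetical agent with two internal modes, each carrying its own
-- conventional utility function over an input variable.
data Mode = Content | Restless deriving (Eq, Show)

utilityFor :: Mode -> Double -> Double
utilityFor Content  x = x          -- one set of weights while content
utilityFor Restless x = 2 * x - 5  -- different weights when restless

-- One step: compute utility under the current mode; if the value falls
-- outside an arbitrary trigger range [0, 10], the mode (and hence the
-- utility function) flips.
step :: Mode -> Double -> (Double, Mode)
step mode x =
  let u     = utilityFor mode x
      mode' = if u < 0 || u > 10 then flipMode mode else mode
  in  (u, mode')
  where
    flipMode Content  = Restless
    flipMode Restless = Content
```

For example, `step Content 12` returns `(12.0, Restless)`, while `step Content 3` returns `(3.0, Content)`. It doesn't make the hypothesis falsifiable, but it at least pins down what would need to be measured: the per-state functions and the transition thresholds.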