timtyler comments on Applying utility functions to humans considered harmful - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
There's no point in discussing "utility maximisers" rather than "expected utility maximisers"?
I don't really agree: "utility maximiser" is a simple generalisation of "expected utility maximiser". Since there are very many ways of predicting the future, this seems like a useful abstraction to me.
...anyway, if you were wrapping a model of a human, the actions would clearly be based on predictions of future events. If you mean you want the prediction process to be abstracted out in the wrapper, there is obviously no easy way to do that.
You could claim that a human, while a "utility maximiser", is not clearly an "expected utility maximiser". My wrapper doesn't disprove such a claim. I generally think the "expected utility maximiser" description is highly appropriate for a human as well, but there is no such neat demonstration of it.
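The wrapper construction being discussed can be sketched in a few lines. This is a hypothetical illustration, not the original wrapper: the function names and the toy "human" policy are my own assumptions. The point it shows is the trivial one under debate: any agent at all can be presented as a utility maximiser by constructing a utility function that assigns 1 to whatever the agent would do and 0 to everything else.

```python
def make_utility_maximiser(agent_policy):
    """Wrap an arbitrary policy as an argmax over a constructed utility.

    Illustrative sketch: the 'utility function' is built from the policy
    itself, so the wrapped agent maximises it by construction.
    """
    def utility(observation, action):
        # Utility is 1 for the action the agent would actually take, else 0.
        return 1.0 if action == agent_policy(observation) else 0.0

    def wrapped(observation, available_actions):
        # Choose the action that maximises the constructed utility.
        return max(available_actions, key=lambda a: utility(observation, a))

    return wrapped

# Toy 'human-like' policy, purely for illustration.
human = lambda observation: "drink tea"
maximiser = make_utility_maximiser(human)
print(maximiser("morning", ["drink tea", "drink coffee"]))  # drink tea
```

Note that nothing here involves expectations over predicted futures, which is exactly why the wrapper demonstrates "utility maximiser" but not "expected utility maximiser".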