Tim_Tyler comments on Value is Fragile - Less Wrong
You can model any agent as an expected utility maximizer - with a few caveats about things such as uncomputability and infinitely complex functions.
You really can reverse-engineer their utility functions too - by treating them as Input-Transform-Output black boxes - and asking which utility function an expected utility maximizer would need in order to produce the observed transformation.
A utility function is like a program in a Turing-complete language. If the behaviour can be computed at all, it can be computed by a utility function.
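The construction behind this claim can be sketched in a few lines of Python (a minimal illustration, not from the comment itself: it uses the trivial trick of assigning utility 1 to whatever action the black box was observed to take, and 0 to everything else, so that maximizing this utility reproduces the behaviour exactly):

```python
# Sketch (assumptions mine): any deterministic input -> output mapping can be
# reproduced by an agent maximizing a suitably chosen utility function.

def reverse_engineer_utility(observed_behavior):
    """Given a dict mapping inputs to the black box's observed outputs,
    return a utility function u(inp, action) whose maximizer reproduces
    that behaviour exactly."""
    def utility(inp, action):
        # Score 1 for the action the box was seen to take, 0 otherwise.
        return 1.0 if observed_behavior.get(inp) == action else 0.0
    return utility

def eu_maximizer(utility, inp, actions):
    # A (degenerate) expected utility maximizer: deterministic world,
    # so it just picks the action with the highest utility.
    return max(actions, key=lambda a: utility(inp, a))

# Example: a black box that always echoes its input.
behavior = {"a": "a", "b": "b"}
u = reverse_engineer_utility(behavior)
assert all(eu_maximizer(u, i, ["a", "b"]) == o for i, o in behavior.items())
```

Of course, such a reverse-engineered utility function is trivial and uninformative - it just restates the behaviour - which is exactly why the claim is about what *can* be modelled, not about whether the model is useful.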