Tim_Tyler comments on Value is Fragile - Less Wrong

41 Post author: Eliezer_Yudkowsky 29 January 2009 08:46AM

You are viewing a comment permalink. View the original post to see all comments and the full post content.



Comment author: Tim_Tyler 30 January 2009 10:35:06PM -1 points

But there is no principled way to derive a utility function from something that is not an expected utility maximizer!

You can model any agent as an expected utility maximizer - with a few caveats about things such as uncomputability and infinitely complex functions.

You really can reverse-engineer their utility functions too - by treating them as input-transform-output black boxes and asking what expected utility maximizer would produce the observed transformation.
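The construction behind this claim can be sketched concretely. A minimal illustration (hypothetical function names, my own assumption about the intended construction): given any deterministic input-output black box, define a utility function that assigns utility 1 to the output the box actually produces for each input and 0 to every alternative; the box then maximizes that utility by definition, and maximizing it recovers the box's behaviour.

```python
def rationalize(policy):
    """Given a black-box policy (input -> chosen output), return a
    utility function under which that policy is a utility maximizer:
    utility 1 for the observed choice, 0 for everything else."""
    def utility(inp, out):
        return 1.0 if out == policy(inp) else 0.0
    return utility

def argmax_policy(utility, options):
    """Recover a policy from a utility function by picking the
    utility-maximizing option for each input."""
    def policy(inp):
        return max(options, key=lambda out: utility(inp, out))
    return policy

# Example: an arbitrary agent that reverses its input string.
agent = lambda s: s[::-1]
u = rationalize(agent)
recovered = argmax_policy(u, ["abc", "cba", "bac"])
print(recovered("abc"))  # prints "cba", matching agent("abc")
```

This trivial rationalization is why the modelling always goes through; the caveats in the comment (uncomputability, infinitely complex functions) concern whether the resulting utility function is compactly representable, not whether it exists.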

A utility function is like a program in a Turing-complete language. If the behaviour can be computed at all, it can be computed by a utility function.