Larks comments on Utility Maximization and Complex Values - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
"Rational expected-utility-maximizing agents get to care about whatever the hell they want." - a good heuristic to bear in mind. There really are an awful lot of orderings on possible worlds, and if value is complex, your utility function* probably isn't linear.
*usual disclaimers apply about not actually having one.