pjeby comments on The Domain of Your Utility Function - Less Wrong

Post author: Peter_de_Blanc 23 June 2009 04:58AM


Comment author: pjeby 24 June 2009 10:21:11PM 1 point [-]

Utility maximization can model any goal-oriented creature, within reason. Familiar, or alien, it makes not the slightest bit of difference to the theory.

Of course it can, just like you can model any computation with a Turing machine, or on top of the game of Life. And modeling humans (or most any living entity) as a utility maximizer is on a par with writing a spreadsheet program to run on a Turing machine: an interesting, perhaps even fun or educational exercise, but mostly futile.

I mean, sure, you could say that utility equals "minimum global error of all control systems", but it's rather ludicrous to expect this calculation to predict their actual behavior, since most of their "interests" operate independently. Why go to all the trouble to write a complex utility function when an error function is so much simpler and closer to the territory?
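To make the contrast concrete, here is a minimal sketch (all names and set points are hypothetical, chosen only for illustration): two independent proportional controllers, each reducing its own error signal, with no global utility consulted anywhere in the loop.

```python
def control_step(value, set_point, gain=0.5):
    """One proportional-control update: move the variable toward its set point."""
    error = set_point - value
    return value + gain * error

# Two "interests" operating independently, e.g. body temperature and
# blood sugar (illustrative values only).
temp, sugar = 35.0, 120.0
TEMP_SET, SUGAR_SET = 37.0, 90.0

for _ in range(20):
    temp = control_step(temp, TEMP_SET)
    sugar = control_step(sugar, SUGAR_SET)

# One *could* define a utility as minus the total error of all controllers...
utility = -(abs(TEMP_SET - temp) + abs(SUGAR_SET - sugar))
# ...but it plays no causal role above: each controller's local error
# signal already determines the behavior.
```

The point of the sketch is that the "utility" line is a derived summary, not a mechanism; deleting it changes nothing about what the controllers do.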

Comment author: timtyler 25 June 2009 04:47:46PM 0 points [-]

I think you are getting my position. Just as a universal computer can model any other type of machine, so a utilitarian agent can model any other type of agent. These two concepts are closely analogous.

Comment author: pjeby 25 June 2009 06:11:30PM 0 points [-]

But your choice of platform is not without efficiency and complexity costs, since maximizers inherently "blow up" more than satisficers.