taw comments on Expected futility for humans - Less Wrong

11 [deleted] 09 June 2009 12:04PM


Comment author: taw 09 June 2009 03:36:43PM 1 point [-]

I don't agree with your arguments. First, nobody is proposing an infinitely accurate utility function, just that a rough utility function is a good approximation of human behaviour, both descriptively and prescriptively.

As for your particular examples:

  • I don't see how being stupid about the reality of relationships adds much value. You should be aware of the chances of infidelity etc. If, knowing these chances, you decide not to monitor your lover, that's a purely consequentialist decision.
  • Moderate deontology can be emulated in consequentialism by simply assigning values to following and breaking rules. I don't think anybody's values are truly absolute.
  • "Delusions" can be handled by taking an outside view and adding extra terms to the function, for example using all the research on what influences our happiness.

None of them is terribly convincing.

And arguing from consequences - the entire field of economics, in pretty much all its forms, is based on the assumption that utility maximization is a good approximation of human behaviour. If utility functions aren't even that much, then all of economics is worthless almost automatically. It doesn't seem to be entirely worthless, so utility functions seem to have some meaning.

Comment deleted 09 June 2009 05:49:17PM [-]
Comment author: taw 10 June 2009 08:21:31AM 0 points [-]

Aggregation wouldn't really work unless the utility function were a pretty decent approximation and its errors reasonably random.

Comment deleted 10 June 2009 08:58:24AM [-]
Comment author: orthonormal 10 June 2009 08:37:33PM 0 points [-]

Good point, especially when it comes to markets. You can have a lot of people acting in predictably irrational ways, and a few people who see an inefficiency and make large sums of money off of it, and the net result is a quite rational market.

Comment author: taw 10 June 2009 10:02:19AM 0 points [-]

An average of a large number of functions that look nothing like U has little reason to look much like U. The fact that something like U turns up repeatedly needs an explanation.

It's true that usually only a small portion of human behaviour is modeled at a time, but utility maximization is composable, so you can take every domain where utility maximization works and compose them into one big utility-maximization model - mathematically this should work (given some standard assumptions about the types of error in the small domain models, assumptions which might be false).
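The composition step above can be sketched as follows. This is a minimal illustration under a strong assumption the comment only hints at: that the overall utility is additively separable across domains (and that per-domain errors are independent). The domain names and coefficients are hypothetical.

```python
# Compose per-domain utility models into one overall model,
# assuming (hypothetically) additive separability across domains.

def compose_utilities(domain_models):
    """Combine per-domain utility functions into one overall function."""
    def total_utility(outcome):
        # `outcome` maps each domain name to that domain's state.
        return sum(u(outcome[d]) for d, u in domain_models.items())
    return total_utility

# Hypothetical domain models: marginal utility of money and of leisure.
models = {
    "money": lambda dollars: 0.001 * dollars,
    "leisure": lambda hours: 2.0 * hours,
}

u = compose_utilities(models)
print(u({"money": 1000, "leisure": 3}))  # 1.0 + 6.0 = 7.0
```

If the separability assumption fails (e.g. money and leisure interact), the composed model inherits exactly the kind of systematic error the parenthetical in the comment worries about.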

Comment deleted 10 June 2009 10:29:37AM [-]
Comment author: taw 10 June 2009 11:04:52AM 0 points [-]

What I was trying to do was figure out a rough approximation of my utility function descriptively, to see if any of my actions are extremely irrational - like wasting too much time/money on something I care about very little, or not spending any time/money on something I care about a lot.

Comment deleted 10 June 2009 11:24:31AM *  [-]
Comment author: taw 10 June 2009 11:54:22AM 0 points [-]

The approximation is likely to be a list of "I value event X relative to the default state at Y utilons", following the economic tradition of focusing on the marginal. Skipping events from this list doesn't affect comparisons between the events that remain on it.
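The claim that dropping entries leaves the remaining comparisons intact can be checked directly. A toy illustration, with made-up events and utilon values: since each value is already relative to the same default state, a pairwise comparison depends only on the two entries involved.

```python
# Marginal-utilon list: each value is relative to the default state,
# so pairwise comparisons are independent of the rest of the list.

values = {
    "extra hour of sleep": 5,   # illustrative numbers
    "nice dinner out": 3,
    "new gadget": 2,
}

def prefer(a, b, utilons):
    """True if event a is valued above event b."""
    return utilons[a] - utilons[b] > 0

full = prefer("extra hour of sleep", "nice dinner out", values)

# Remove an unrelated entry; the comparison is unaffected.
trimmed = {k: v for k, v in values.items() if k != "new gadget"}
partial = prefer("extra hour of sleep", "nice dinner out", trimmed)

print(full, partial)  # True True
```

This is why an incomplete list is still usable for the stated purpose (spotting gross mismatches between valuation and spending), as long as the events you do compare are both on it.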

Comment deleted 10 June 2009 12:02:08PM *  [-]