
Qiaochu_Yuan comments on Against utility functions - Less Wrong Discussion

40 Post author: Qiaochu_Yuan 19 June 2014 05:56AM




Comment author: David_Gerard 21 June 2014 09:31:24PM 1 point

The idea is that the universe offers you Dutch-book situations and you make and take bets on uncertain outcomes implicitly.
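To make the Dutch-book idea concrete, here is a minimal illustrative sketch (my own example, not from the comment): an agent whose credences in an event and its negation sum to more than 1 will regard a pair of bets as individually fair, yet taking both guarantees a loss whichever way the world turns out. The event name and prices are assumptions chosen for illustration.

```python
# Minimal Dutch-book sketch: incoherent credences let a bookie
# sell two bets, each "fair" by the agent's lights, that jointly
# guarantee the agent a loss.

def ticket_price(credence, stake=1.0):
    """Price at which an agent with this credence regards a ticket
    paying `stake` if the event occurs as a fair bet."""
    return credence * stake

# Incoherent credences: P(rain) + P(not rain) = 1.2 > 1.
p_rain = 0.6
p_no_rain = 0.6

# The agent willingly buys both tickets at prices it deems fair.
cost = ticket_price(p_rain) + ticket_price(p_no_rain)

# Exactly one of the two tickets pays out, whatever happens.
payout = 1.0

# Guaranteed loss of 0.2, regardless of the outcome.
loss = cost - payout
print(loss)
```

The loss equals the amount by which the credences exceed 1, so a coherent (Bayesian) agent, whose credences in an event and its negation sum to exactly 1, cannot be booked this way.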

That said, I concur with your basic point: universal overarching utility functions (not just small ones for a given situation, but a single large one for you as a human) are something humans don't, and I think can't, have. Realising how mathematically convenient it would be if they did doesn't mean they can, and trying to turn oneself into an expected utility maximiser is unlikely to work.

(And, I suspect, trying will merely leave you vulnerable to everyday human-level exploits; remember that the actual threat model we evolved under is being beaten by other humans, and as long as we're dealing with humans, we need to deal with humans.)

Comment author: Qiaochu_Yuan 22 June 2014 06:18:54PM 3 points

The idea is that the universe offers you Dutch-book situations

But does it in fact do that? To the extent that you believe humans are bad Bayesians, you must believe either that the environment in which humans evolved wasn't constantly Dutch-booking them, or that, if it was, humans evolved some defence against this other than becoming perfect Bayesians.

Comment author: David_Gerard 23 June 2014 07:19:15AM 0 points

I do suspect that our thousand shards of desire being contradictory, and never resolving, is itself selected for, in that we are thereby money-pumped into propagating our genes.