...it's probably an adaptation executer.
We often assume agents are utility maximizers. We even call this "rationality". On the other hand, in our recent experiment nobody managed to figure out even the approximate shape of their utility function, and we know about a large number of ways in which agents deviate from utility maximization. What's going on?
One explanation is fairly obvious. Nature contains plenty of selection processes - evolution and markets most obviously, but also many others, like competition between Internet forums to attract users, or between politicians trying to get elected. In such selection processes a certain property - fitness - behaves a lot like a utility function. As a good approximation, traits that give agents higher expected fitness survive and proliferate. As a result, agents that survive such selection processes react to inputs quite reliably as if they were optimizing some utility function - the fitness of the underlying selection process.
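To make the mechanism concrete, here's a minimal simulation sketch in Python (the fitness landscape, parameter values, and all names are made up for illustration; nothing here comes from the post itself). The agents never compute fitness - each just carries a fixed, heritable trait - yet after a few rounds of fitness-proportional reproduction the surviving population clusters around the fitness peak, as if it had been maximizing all along:

```python
import random

def fitness(trait):
    # Hypothetical fitness landscape, peaked at trait = 0.7.
    # Traits stay in [0, 1], so fitness is always positive here.
    return 1.0 - (trait - 0.7) ** 2

def run_selection(pop_size=1000, generations=100, mutation=0.02):
    # Start with random traits; no agent "knows" the fitness function.
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-proportional reproduction: higher-fitness traits
        # leave more copies. This is the entire "optimization" step.
        weights = [fitness(t) for t in population]
        population = random.choices(population, weights=weights, k=pop_size)
        # Small random mutation keeps variation in the population.
        population = [min(1.0, max(0.0, t + random.gauss(0, mutation)))
                      for t in population]
    return population

if __name__ == "__main__":
    survivors = run_selection()
    mean_trait = sum(survivors) / len(survivors)
    # The mean trait ends up near the fitness peak at 0.7: the survivors
    # look like fitness maximizers without ever computing fitness.
    print(f"mean trait after selection: {mean_trait:.3f}")
```

The same loop, with "trait" reinterpreted as a business strategy or a campaign platform and "fitness" as revenue or votes, covers the other selection processes mentioned above.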
If that's the whole story, we can conclude a few things:
- Utility maximization related to some selection process - evolutionary fitness, money, chance of getting elected - is very common and quite reliable.
- We will not find utility maximization unconnected to a selection process - so we are not maximizers when it comes to happiness, global well-being, ecological responsibility, and so on.
- If a selection process isn't strong enough, we won't see much utility maximization. So we can predict that oligopolies are pretty bad at maximizing their market success, as the selection pressure on them is very low.
- If a selection process hasn't been running long enough, or its rules changed recently, we won't see much utility maximization. So people are really horrible at maximizing the number of their offspring in our modern environment of birth control and limitless resources. And we can predict that this might change given enough time.
- If the input is atypical, even fairly reliable utility maximizers are likely to behave badly.
- Agents are not utility maximizers across selection processes. So while politicians can be assumed to maximize their chance of getting elected quite well, they are probably pretty bad at maximizing their income, or their number of children.
- However, selection processes often correlate - historically someone who made more money could afford to have more children, richer politicians can use their wealth to increase their chance of getting elected, companies with more political power can survive in the marketplace better, Internet forums need both money and users to survive, so they must cross-optimize for both, etc. This can give the illusion of one huge cross-domain utility function, but selection across domains is often not that strong.
- We are naturally bad at achieving our goals and being happy in the modern environment. To have any chance, we must consciously research what works and what doesn't.
Re: On the other hand, in our recent experiment nobody managed to figure out even the approximate shape of their utility function
Er, that thread was pretty knackered - and I have posted about my utility function many times before - see: http://alife.co.uk/essays/nietzscheanism/
One of the distinctions that was not made clearly enough in that thread was between the different types of utility function. For example, I find it useful to talk about:
- the function I feel ethically obliged to maximize, a.k.a. my philosophy.
- the function(s) my behavior actually appears to maximize (approximately) at various levels of analysis.
- the function I would maximize if I were more instrumentally rational, and/or less concerned about other people's utility.