...it's probably an adaptation executer.

We often assume agents are utility maximizers. We even call this "rationality". On the other hand, in our recent experiment nobody managed to figure out even the approximate shape of their utility function, and we know of a large number of ways agents deviate from utility maximization. How can that be?

One explanation is fairly obvious. Nature contains plenty of selection processes - evolution and markets most obviously, but plenty of others, like competition between Internet forums trying to attract users, and between politicians trying to get elected. In such selection processes a certain property - fitness - behaves a lot like a utility function. To a good approximation, traits that give agents higher expected fitness survive and proliferate. And as a result, agents that survive such selection processes react to inputs quite reliably as if they were optimizing some utility function - the fitness of the underlying selection process.
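To see the dynamic concretely, here is a minimal simulation sketch (all parameters hypothetical): no agent ever computes fitness, yet after repeated rounds of selection the survivors respond to inputs as if they were maximizing it.

```python
import random

# Toy selection process (all parameters hypothetical). Each "agent" is a
# linear response rule x -> slope * x; fitness rewards rules whose output
# tracks the input, so fitness peaks at slope = 1.
def fitness(slope, inputs):
    return -sum((slope * x - x) ** 2 for x in inputs)

inputs = [random.uniform(-1, 1) for _ in range(20)]
population = [random.uniform(-5, 5) for _ in range(100)]  # initial slopes

for generation in range(100):
    ranked = sorted(population, key=lambda s: fitness(s, inputs), reverse=True)
    survivors = ranked[:20]                              # selection
    population = [s + random.gauss(0, 0.1)               # proliferation
                  for s in survivors for _ in range(5)]  # with mutation

# Mean slope is now ~1.0: survivors act "as if" optimizing fitness,
# even though none of them ever evaluated the fitness function.
print(sum(population) / len(population))
```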

If that's the whole story, we can conclude a few things:

  • Utility maximization related to some selection process - evolutionary fitness, money, chance of getting elected - is very common and quite reliable.
  • We will not find utility maximization outside a selection process - so we are not maximizers when it comes to happiness, global well-being, ecological responsibility, and so on.
  • If a selection process isn't strong enough, we won't see much utility maximization. So we can predict that oligopolies are pretty bad at maximizing their market success - as the selective pressure on them is very low.
  • If a selection process hasn't been running long enough, or its rules changed recently, we won't see much utility maximization. So people are really horrible at maximizing the number of their offspring in our modern environment with birth control and limitless resources. And we can predict that this might change given enough time.
  • If the input is atypical, even fairly reliable utility maximizers are likely to behave badly.
  • Agents are not utility maximizers across selection processes. So while politicians can be assumed to maximize their chance of getting elected very well, they must be pretty bad at maximizing their income, or the number of their children.
  • However, selection processes often correlate - historically, someone who made more money could afford to have more children; richer politicians can use their wealth to increase their chance of getting elected; companies with more political power survive better in the marketplace; Internet forums need both money and users to survive, so they must cross-optimize for both; and so on. This can give the illusion of one huge cross-domain utility function, but selection across domains is often not that strong.
  • We are naturally bad at achieving our goals and being happy in the modern environment. To have any chance, we must consciously research what works and what doesn't.
24 comments:
  • Utility maximization related to some selection process - evolutionary fitness, money, chance of getting elected - is very common and quite reliable.
  • We will not find utility maximization outside a selection process - so we are not maximizers when it comes to happiness, global well-being, ecological responsibility, and so on.

This strikes me as a key insight.

(Not phrased as precisely as it could be, but still a key insight.)

Question: Can't all possible behavior be described as maximizing some function subject to some constraints?

Yep, which is why it's important that the function be specified beforehand. You can always pick some function that matches the result precisely.

The key is to either pre-establish the function, or possess logical criteria that determine whether a postulated function is worth matching.
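A minimal sketch of why post-hoc functions are empty (names and data hypothetical): from any record of behavior you can build a "utility function" that the behavior maximizes by definition, so it predicts nothing.

```python
# Any observed behavior trivially "maximizes" a function constructed from
# it after the fact; names and data here are hypothetical.
def make_posthoc_utility(observed):
    # observed: dict mapping each situation to the action actually taken
    return lambda situation, action: 1.0 if observed[situation] == action else 0.0

observed = {"lunch": "pizza", "vote": "abstain"}
u = make_posthoc_utility(observed)

options = {"lunch": ["pizza", "salad"], "vote": ["yes", "no", "abstain"]}
for situation, actions in options.items():
    best = max(actions, key=lambda a: u(situation, a))
    assert best == observed[situation]  # holds by construction - zero predictive content
```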

nitpick:

oligopolies are pretty bad at maximizing their market success

I think they are pretty good at it, but maybe you don't mean what I mean.

In the simple model, there are three things that might be maximized.

  1. Individual producers' profits,
  2. the sum of individual producers' profits, a.k.a. producer surplus, or
  3. the sum of producer surplus and consumer surplus.

I think it's useful to model individual producers (1) as profit maximizers across a wide variety of markets, including oligopolies.

  2. Producer surplus is generally not maximized, but will actually be larger with an oligopoly than in a competitive market.
  3. The sum of producer surplus and consumer surplus can be understood to be maximized in a competitive market.

Perhaps none of this is in conflict with anything you said; after all, you explicitly pointed out that the market creates/identifies maximizers - I just couldn't tell what you meant in the sentence I quoted. Disclaimer: more complex models can be more accurate; my point is only that these simple models of maximization are useful.
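For concreteness, here is a minimal numeric sketch of points 2 and 3, assuming linear demand and constant marginal cost (all parameters hypothetical):

```python
# Linear demand P = a - b*Q with constant marginal cost c; the numbers
# are hypothetical and chosen only for illustration.
a, b, c = 100.0, 1.0, 20.0

# Perfect competition: price is driven down to marginal cost.
q_comp = (a - c) / b                  # quantity: 80
ps_comp = 0.0                         # producer surplus: zero at P = c
cs_comp = 0.5 * (a - c) * q_comp      # consumer surplus: 3200

# Monopoly (or perfectly colluding oligopoly): restrict output for profit.
q_mono = (a - c) / (2 * b)            # quantity: 40
p_mono = a - b * q_mono               # price: 60
ps_mono = (p_mono - c) * q_mono       # producer surplus: 1600
cs_mono = 0.5 * (a - p_mono) * q_mono # consumer surplus: 800

print(ps_mono > ps_comp)                      # True: producers gain from oligopoly
print(cs_comp + ps_comp > cs_mono + ps_mono)  # True: total surplus peaks under competition
```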

[-]taw

I meant I'm predicting that modeling monopolies/oligopolies as entities that try to optimize their profit is going to be unsuccessful.

I predict they will be very successful at creating barriers to entry (oligopolies that do that better stay in the game longer), and that they will be bad at responding to market changes in the way traditional economics claims they would (differences in profit matter very little to their survivability; profits that are too high might even encourage the entry of competitors, which would threaten their status).

I meant I'm predicting that modeling monopolies/oligopolies as entities that try to optimize their profit is going to be unsuccessful.

I agree that fine-grained predictions of a firm or an industry's behavior will not be possible with such a model. But whether the model is good or not depends on what you're comparing it to and what you're using it for. So what would you like to compare it to? What is your preferred model?

To be more specific, let's take Microsoft. I think that for many purposes, it is useful to think of Microsoft as attempting to maximize long-term profits.

Re: We will not find utility maximization outside a selection process - so we are not maximizers when it comes to happiness, global well-being, ecological responsibility, and so on.

Is exhaustive search a "selection process"? What about random search?

These are certainly optimisation strategies - and if they are "selection processes" then are there any search strategies that are not selection processes? If not, then "selection process" has become a pretty meaningless term.

It's like saying that you can't find an optimum unless you perform a search for it - which is hardly much of an insight.

Search, or at least its results, is what selection works on. You could even think of evolution as a dual process, with mutation as a search over possible genetic combinations, followed by selection for survival and reproduction.

I strongly recommend Jonathan Baron's "Thinking and Deciding"; he conceptualizes all thinking, decision making, creativity as the dual process of searching and selection. It's a very interesting book. (I'm reading an older edition and am not yet finished, so I don't know how well he makes the case in total, or how he may have modified his ideas for later editions. But what I have read so far is fascinating.)

So... to return to my unanswered questions:

Is exhaustive search a "selection process"? What about random search?

If yes, is there any search strategy that is not a "selection process"? (If there is, what is it?) Otherwise, "selection process" is just a rather useless synonym for "search", and the cited thesis just says you can't find an optimum unless you actually look for it.

If no, that defeats the cited thesis - that optimisation only results from selection processes - since exhaustive search optimises functions fine.

[-]Cyan

It's a prediction -- an empirical claim, not a definition.

The post talks about "selection processes" without saying what that term actually means. If you think for a moment about what that term means, the claim seems likely to either be wrong, or trivially true.

[-]Cyan

I took "selection processes" to mean "natural selection".

That is probably not what it means. There are various definitions of selection - e.g. see Hull 1988 for one example:

Selection: 'a process in which the differential extinction and proliferation of interactors causes the differential perpetuation of the replicators that produced them'.

Having even a one-in-a-billion chance of inventing agriculture or flight, so that the success can get selected, is already tremendous optimization power. The presence of selection doesn't mean that selection is the process that does the optimizing, and the rarity of success in the absence of selection doesn't mean that optimization isn't there.

Optimization may just not be apparent until a new selection pressure finds its sole survivors. If optimization wasn't there, selection would just eliminate everyone (and if it's not fatal, you just won't notice a new niche).

[-]Cyan

I don't think it makes sense to call the mere possibility of something "optimization power". In what sense is a possibility a "success" in the absence of a criterion for judging it so? Nor do I think it makes sense to assert that selection, a sequential process whose action increases (on average) a particular function, is not "do[ing] the optimizing". This is semantics, but fairly important semantics, I think.

You can't force cats to invent general relativity. Without a human mind it's impossible, while with a human mind it's merely rare.

[-]Cyan

You can't force cats to invent general relativity.

Given enough resources, time, and cats, I'm pretty sure I could.

ETA: That was not merely a joke, but it was too glib; I should make the point explicit. It's that with enough time to get enough variation and appropriate selection, many things are possible. A concept of optimization power which is only about possibility but takes no note of the mechanics of descent with modification is not useful. ETA2: Outside of selection processes, I think a concept of optimization power needs to take note of the rate of change in order to be useful.

Re: On the other hand, in our recent experiment nobody managed to figure out even the approximate shape of their utility function

Er, that thread was pretty knackered - and I have posted about my utility function many times before - see: http://alife.co.uk/essays/nietzscheanism/

One of the distinctions that was not clear enough in that thread was between the different types of utility functions. For example, I find it useful to talk about:

  1. the function I feel ethically obliged to maximize, a.k.a. my philosophy.

  2. the function(s) my behavior actually appears to maximize (approximately) at various levels of analysis.

  3. the function I would maximize if I was more instrumentally rational, and/or less concerned about other people's utility.

This is a very important point. I don't see how we can think sensibly about human utility without distinguishing these.

Another question that occurs to me: if (2) differs from (1) and (3), when do we call it akrasia and when do we call it self-deception?

Ben calls those "implicit" and "explicit" goals here:

http://cosmistmanifesto.blogspot.com/2009/01/goals-explicit-and-implicit.html

Not great terminology - can we do better?

Another big one is: the goal(s) we want others to believe we have. Often "save the whales" - or some other piece of selfless signalling.

Have you donated any sperm recently?

The answer to that question is not publicly available.

However, I do have a section about sperm donation in the FAQ:

http://alife.co.uk/essays/nietzscheanism/faq/#sperm

Re: So people are really horrible at maximizing the number of their offspring in our modern environment with birth control and limitless resources.

Which people? There are about 6,783,421,727 people on the planet - there has evidently been lots of highly-successful baby-making recently by many members of the population.

Similarly, resources are not "limitless". With limitless resources, winning the lottery would make no difference. Most humans have always been resource-limited.