Edit: This is old material. It may be out of date.

Most posters here seem to agree[1] that:

  • Intelligence (at least human intelligence) is an optimization process.
  • Evolution is an optimization process.
  • Other optimization processes may exist.

Taking these as given in this thread, let me ask: are markets an optimization process that should be thought of as distinct from evolution and intelligence? My intuitive response was no. But thinking about it made me notice I was confused, which led me to believe there is probably something interesting for me to learn by thinking a bit more about this.

An argument against this is that companies basically engage in a survival-of-the-fittest contest, or that markets are just an organization of the optimizing power of human intelligence. But (please assume the smart version of the previous arguments, since I wanted to save space and time by relying on your inference and your zombie-argument-creation skills) isn't it so that one optimization process might use another optimization process at some level of granularity while still not being disputed as a genuinely different optimization process?

Perhaps the condition is that the process must be able to work without the "use" of another process. A human may be predisposed to use his intelligence to help improve his own reproductive fitness but there is nothing preventing evolution in the absence of intelligence.

An idealized free market is one of selfish rational agents competing (with a few extra conditions I'm skipping). I'm moderately confident this could work reasonably well in the absence of "general" (if such a thing exists) or perhaps human "intelligence", but I'm not familiar enough with simulations of markets to be certain.

Evolution has never worked with agents like those in the theoretical approximation of real-world markets. It seems to me some of the strategies the agents would adopt would start to break down the rules that make the market possible.

Do the results markets produce warrant them being included in a new family[2] of optimization processes besides evolution and intelligence?


Notes:

1. I lean towards adding a fourth point of "consensus", but don't feel comfortable doing so:

  • the space of all optimization processes is probably quite a bit larger than just these two.

2. I think the differences among the various kinds of Evolution (Darwinian, Lamarckian, etc.) and Intelligence that seem possible, or that we see in the real world, might make them better thought of as two families of optimization processes rather than two homogeneous blocks.


An idealized free market is one of selfish rational agents competing (with a few extra conditions I'm skipping). I'm moderately confident this could work reasonably well in the absence of "general" (if such a thing exists) or perhaps human "intelligence", but I'm not familiar enough with simulations of markets to be certain.

Eric Baum's papers, among others, show this kind of thing applied to AI. There doesn't seem to have been much followup.

Comparative Ecology: A Computational Perspective compares this idea to the human economy and biological evolution and says the idealized computer version ought to be, well, more ideal as an optimization process.

Do the results markets produce warrant them being included in a new family of optimization processes besides evolution and intelligence?

I would say yes. For instance, markets generate prices in a way that is distinct from the way human minds measure "value".

I would also say that individual organizations can have optimization processes based on internal dynamics/outside data that are distinct from evolution or human minds.

From looking at the comments, I see several different sets of criteria being used for what counts as an optimization process. I think we should taboo it. (Admittedly, some people have already done so.)

An argument against this is that companies basically engage in a survival-of-the-fittest contest

Counterpoint from the archives: "No Evolutions for Corporations or Nanodevices"

Are markets therefore a special case of evolution constrained to a certain class of agents? Or do the results they produce warrant them being included in a new family of optimization processes besides evolution and intelligence[?]

I would say No and Yes, respectively: it is not useful to think of markets as a special case of evolution. It's true that an English-language description of both processes is likely to involve the word competition, but the underlying details are very different. Evolutionary theory concerns itself with things that replicate and the consequences thereof, whereas economics concerns itself with agents that trade and the consequences of that.

Perhaps the condition is that the process must be able to work without the "use" of another process.

I don't agree with this condition. Microeconomics does require agentlike components: market participants are assumed to have preferences that they try to satisfy subject to some sort of budget constraint; if you don't have something that at least approximates that structure, then you don't really have anything we would want to call a market.

And yet despite being made out of smaller optimizers, it does seem fair to say that markets are a kind of optimization process in the sense that the system as a whole produces highly nonrandom outcomes. We have theorems that say if you assume that utility-maximizing agents with complete, transitive, continuous, monotonic, convex utility functions over some finite set of commodities trade under perfect information, then they reach a Pareto-optimal result where price is proportional to marginal utility, &c. Of course the assumptions made by such models are ludicrously unrealistic when it comes to actual markets made out of humans, but they illustrate the point that market forces are doing something interesting that isn't a simple property of any of the market's component agents; you could say it is emergent (in a nonmysterious sense).
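The point that trading agents produce a nonrandom system-level outcome can be made concrete with a toy sketch. Everything here is invented for illustration (identical Cobb-Douglas utilities, arbitrary endowments, arbitrary trade sizes); it is not one of the theorems, just a demonstration that purely local, mutually beneficial trades push a two-agent exchange toward a Pareto-improving allocation:

```python
import random

def u(x, y):
    return (x * y) ** 0.5   # Cobb-Douglas utility (an illustrative choice)

random.seed(0)
a = [10.0, 2.0]   # agent A's endowment of goods (x, y)
b = [2.0, 10.0]   # agent B's endowment

for _ in range(20000):
    dx = random.uniform(-0.1, 0.1)   # x flowing from A to B (may be negative)
    dy = random.uniform(-0.1, 0.1)   # y flowing from B to A
    na = [a[0] - dx, a[1] + dy]
    nb = [b[0] + dx, b[1] - dy]
    # accept the proposed trade only if it is feasible and BOTH agents gain
    if min(na + nb) >= 0 and u(*na) > u(*a) and u(*nb) > u(*b):
        a, b = na, nb
```

By construction no accepted trade hurts either agent, so both finish at least as well off as under their endowments, and the allocation drifts toward the contract curve, where no further mutually beneficial trade exists; the "optimization" is a property of the exchange process, not of any single agent's deliberation.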


Counterpoint from the archives: "No Evolutions for Corporations or Nanodevices"

Thanks for bringing up the link; I wanted to mention it, but I didn't find it on Overcoming Bias (for some reason I thought the piece was written by Robin Hanson).

I would say No and Yes, respectively: it is not useful to think of markets as a special case of evolution. It's true that an English-language description of both processes is likely to involve the word competition, but the underlying details are very different. Evolutionary theory concerns itself with things that replicate and the consequences thereof, whereas economics concerns itself with agents that trade and the consequences of that.

It is, I assume, also not useful to think of markets in terms of intelligence, at least no more than one could speak of intelligence in the process of evolution.

Markets are processes that have been designed to optimize the distribution of resources. I can see asking whether they work, but asking whether a process designed to optimize something can be called an optimization process seems like a strange question.

This doesn't seem to be an especially large question. What do markets maximise, and by what criteria could those things be considered special?

Origin of Wealth is a really interesting book on this (economics, evolution, and local/global maxima)

The answer seems to depend on where you put the edges of optimization processes. The market is a mechanism that solves the problems of production and distribution, and it learns from the past. I think "mechanism to solve problem + learning" is enough to qualify as an optimization process, but if you're using it in a different technical sense you can sensibly disagree.

The whole world is an optimisation process.

See the principle of maximum entropy and the principle of least action for details about that.

There are surely lots of optimising processes within it - e.g. see this list.

Konkvistador says:

Evolution is a[n] optimization process.

Evolution is too slow. Moreover, evolution embraces the greedy algorithm:

evolution has no foresight, and only takes the next greedy local step.

Evolution works but that doesn't mean it is optimal. It is, I believe, inefficient.
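The no-foresight point can be illustrated with a toy hill climber (the fitness landscape and step size are invented for the example; this is not a model of biology). A greedy "take the best neighbouring step" search happily stops at a local peak even when a far fitter one exists:

```python
def fitness(x):
    # Two peaks: a local one near x=2 (height 4) and a global one near x=8 (height 10).
    return max(4 - (x - 2) ** 2, 10 - (x - 8) ** 2, 0)

def greedy_climb(x, step=0.5, iters=100):
    for _ in range(iters):
        # look only one step left/right: no foresight beyond the immediate neighbours
        best = max([x - step, x, x + step], key=fitness)
        if best == x:      # no neighbouring step improves fitness: stuck
            break
        x = best
    return x

peak = greedy_climb(0.0)   # starts in the basin of the lower peak
# The climber halts at the local peak near x=2, never reaching x=8.
```

This is the sense in which greedy search "works" without being optimal: each step improves, but the endpoint depends entirely on the starting basin.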

Timtyler says:

The whole world is an optimisation process.

Huh? Does that include the human mind? The horrible geography and weather in some places of the world where very few species can survive? Natural disasters?

This post (anthropomorphic optimism) may interest you.

The whole world is an optimisation process.

Huh? Does that include the human mind? The horrible geography and weather in some places of the world where very few species can survive? Natural disasters?

Yes, all of those. Water flowing downhill is an optimisation process. We understand the microscopic mechanisms in some cases - for example, the ones spelled out by Dewar (see refs below) - and it has long been understood that natural selection applies to many non-biological systems.

This post (anthropomorphic optimism) may interest you.

You are suggesting that my views on this topic are anthropomorphic?!? Uh, they are the facts of the matter.

  • Dewar, R. C., 2003, "Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states," J. Phys. A: Math.Gen. 36: 631-41.

  • Dewar, R. C., 2005, "Maximum entropy production and the fluctuation theorem," J. Phys. A: Math.Gen. 38: L371-L381.

Water flowing downhill is an optimisation process.

Do you mind telling me what that optimises? In other words, what is the objective function? Water flows downhill because of gravity; it need not optimise anything.

Of course, certain intrinsic properties may make some non-living things survive better than others (long half-lives, water resistance, etc.). But you don't need to give them any objective as though they have a mind. When you say 'optimisation', you ascribe an objective to something within a set of constraints, and by doing this you imply that some objectives are more 'desirable' than others.

I understand that it is just the human mind that makes judgments about 'desirability'. Yes, I'm suggesting that your views are rather anthropomorphic.

1) Systems do collapse (political systems collapse due to wars, lack of social capital, etc.; financial systems collapse due to mismanagement or failure of the invisible hand; the earth may collapse due to anthropogenic climate change; stars do explode). And this means optimisation, if any, fails. If you want to argue that systems collapse in order to optimise larger systems, please come up with some system-design explanations. I believe that a good optimisation process in a well-designed system is one-directional, at least in the short run. You don't destroy a building only to recreate it soon after, unless you have a bad design or miscalculated the requirements. But nature is sometimes stupid enough to destroy a forest in a flash and recreate something very similar several years later.

2) An optimal solution should be preventive rather than corrective. If the objective function of the whole world is ecological stability, then maybe humans shouldn't be intelligent enough to think up and invent things that harm the environment. And maybe there shouldn't be things like bushfires in forests that take a century to regrow, or oil spills that kill plankton. Those events hurt the environment more than they benefit it. What do they optimise? Please let me know.

3) The fluctuation theorem, the Gaia hypothesis, etc. depict self-regulating systems. (Natural selection does not. Some species adapt better than others, and this may be destructive in the long run.) And self-regulating systems are not necessarily self-optimising unless the objective function is definable, defined, and maximised when the equilibrium state is reached. And if there are multiple possible equilibria in the system, self-regulating systems may get stuck at a non-optimal equilibrium. I'm not talking about thermodynamic equilibrium here; I'm talking about systems in general (young democracies seem to be good examples).

4) I don't see a link between evolution and systematic optimisation. Evolution is, locally, a greedy algorithm. In computer science, greedy algorithms don't normally give the best results; indeed, they can give the worst possible result. Moreover, organisms adapt for themselves, not for the system. They optimise their own survival probability (though the process is rather slow), and this could push the ecology from balance to imbalance, which could eventually harm the adapted species themselves.

5) I'm not sure whether Newcomb's problem sort of contradicts natural selection when applied to computer systems. In that environment, an AI that chooses options randomly would fare better than an intelligent AI that understands strategic dominance in game theory.

Water flowing downhill is an optimisation process.

Do you mind telling me what that optimises? In other words, what is the objective function?

In a word, entropy.

Water flows downhill because of gravity; it need not optimise anything.

Water flowing downhill does optimise a function, though. The laws of physics are microscopically reversible - and so are exactly as compatible with water flowing uphill as down. Water flows downhill because of statistical mechanics.

When you say 'optimisation', you ascribe an objective to something within a set of constraints, and by doing this you imply that some objectives are more 'desirable' than others. I understand that it is just the human mind that makes judgments about 'desirability'. Yes, I'm suggesting that your views are rather anthropomorphic.

You are not using the word 'optimization' in its mathematical sense - whereas I am.

Water flowing downhill is an optimisation process.

Do you mind telling me what that optimises? In other words, what is the objective function?

I've never seen an academic article saying that the world is maximising entropy (in the thermodynamic sense). I understand that the second law of thermodynamics hints that entropy in a fairly closed system should increase over time.

When a process more or less consistently increases (or decreases) the value of a variable, it doesn't necessarily optimise it! When you see a nation's positive GDP growth from year to year, you can't say the nation is optimising its GDP. It is tempting, but it is still not a sufficient condition for calling it an optimisation process.

When you say 'optimisation', you ascribe an objective to something within a set of constraints, and by doing this you imply that some objectives are more 'desirable' than others. I understand that it is just the human mind that makes judgments about 'desirability'. Yes, I'm suggesting that your views are rather anthropomorphic.

You are not using the word 'optimization' in its mathematical sense - whereas I am.

In an optimisation problem, there is an objective function and, often, a set of constraints. You are trying to find the best solution from all possible solutions. The objective function itself reveals preferences ('best' solution --- isn't that subjective?), and this is sometimes inherent, sometimes explicit.

I use the word 'optimisation' in its mathematical sense. And I know the difference between definitions and axioms. Objective functions are definitions, not axioms. You can't take them as facts! In an optimisation problem, you start with an objective function and a set of constraints, and then you work out the optimal solution. That is the real optimisation process. You, on the other hand, observe a phenomenon, and then explain it by giving it an objective function as a theory... although the phenomenon isn't efficient in giving the optimal outcome.

Suppose one day you observe the global economy and see that global production, in real terms, is increasing. Can you conclude that the world's economy is an optimisation process of output? No! That is just a candidate story, not a fact.

You are suggesting that my views on this topic are anthropomorphic?!? Uh, they are the facts of the matter.

Definitely not facts.

The Gaia hypothesis is the way some biologists see how the world works. "Optimising Gaia" is a story, the strongest of the Gaia hypotheses. It is as though the Earth has a mind and tries to adjust herself to be biologically favourable (the objective function here is ecological). Regardless, the truth remains: all versions of the Gaia hypothesis are maps, not territories.

You, on the other hand, observe a phenomenon, and then explain it by giving it an objective function as a theory... although the phenomenon isn't efficient in giving the optimal outcome.

The phenomenon isn't always "efficient" at producing entropy - because of constraints imposed by physical law. Also, in general, optimisation processes are not guaranteed to find the "optimal outcome" - due to local maxima. I am not making the idea of entropy maximisation up - there's a big literature about it dating back to 1922. Check my references.

I've never seen an academic article saying that the world is maximising entropy (in the thermodynamic sense).

Right. Well, I already gave some references about that further up the thread.

There are also a large number of other such articles, as well as more introductory material; for more references, perhaps try the ones on: http://originoflife.net/bright_light/

In an optimisation problem, there is an objective function and, often, a set of constraints. You are trying to find the best solution from all possible solutions. ...

While I generally agree with you in this debate, and disagree with Tim Tyler's claims that spontaneous dissipation of free energy exemplifies Nature's optimization of entropy production, I have to agree with ata. There is an important distinction between an optimization problem and an optimization process. And the distinction is definitely not that the process generates the solution to the problem.

You, on the other hand, observe a phenomenon, and then explain it by giving it an objective function as a theory... although the phenomenon isn't efficient in giving the optimal outcome.

Yep, that is what is happening, alright. But this isn't quite as disreputable as you make it sound. Take, for example, biological evolution under natural selection - the canonical example of an 'optimization process' as the phrase is used here. R.A. Fisher proved that (under the admittedly unrealistic assumption of an unchanging environment) the average 'fitness' of the organisms in a population subject to natural selection can only increase, so long as the mutation rate is moderate. So what is 'fitness'? Well, it is an 'objective function' which we generate from the phenomenon - the fitness of an individual organism is simply a count of surviving offspring and the fitness of a 'type' is the average fitness of the individuals of that type.

So, this 'fitness' can only increase. But there is no guarantee that the process generating the increase is efficient, nor that some 'optimal' level of 'fitness' will ever be reached. Nonetheless, the local usage designates natural selection as an 'optimization' process. We are aware that we are flirting with teleological language, here, but it is only a flirtation. We know what we are doing. We are not in danger of being seduced.
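The Fisher-style claim described above can be checked numerically with a toy replicator model: fixed fitnesses (the "unchanging environment"), no mutation, and pure selection. The numbers are arbitrary and chosen only for illustration:

```python
fitness = [1.0, 1.5, 3.0]   # fixed fitness of each type (unchanging environment)
freq = [0.6, 0.3, 0.1]      # initial type frequencies

means = []
for _ in range(50):
    mean = sum(f * w for f, w in zip(freq, fitness))
    means.append(mean)
    # one round of selection: each type's share is reweighted by its fitness
    freq = [f * w / mean for f, w in zip(freq, fitness)]

# means is non-decreasing, approaching the fittest type's value (3.0)
```

The mean fitness rises monotonically toward the fittest type's value, but nothing in the process guarantees efficiency or that any externally defined "optimum" is reached; the objective function is read off from the phenomenon, exactly as described.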

So, this 'fitness' can only increase.

Note that, conventionally, fitnesses can decline - much as a hill climber can be climbing a hill on a mountain that is rapidly sinking into the sea.

Note that, conventionally, fitnesses can decline

Yes, I did notice that. That is why I wrote spelling out the assumptions:

R.A. Fisher proved that (under the admittedly unrealistic assumption of an unchanging environment) the average 'fitness' of the organisms in a population subject to natural selection can only increase, so long as the mutation rate is moderate.

Note that, conventionally, fitnesses can decline

Yes, I did notice that.

Ah! Fisher's fictional fitnesses! My bad; I missed that context - apologies.

You, on the other hand, observe a phenomenon, and then explain it by giving it an objective function as a theory... although the phenomenon isn't efficient in giving the optimal outcome.

Yep, that is what is happening, alright. [...]

What the..?

That is definitely not what is happening - as I would have expected you to be aware of by now.

Tim, if you wish to disagree, it might be polite to state the reasons for your disagreement.

I meant my "Yep" to apply to shadow's denunciation of the practice of extracting the objective function from observation of the phenomenon - particularly as it applies to the two optimization processes of greatest interest to LW: natural selection and human rationality.

In constructing the objective functions that we use to explain rational behavior, we use a concept of "revealed preference". That is, we observe the behavior - the choices that a rational agent makes - in order to explain the behavior. In truth, from shadow's viewpoint, we are not explaining behavior at all - we are merely explaining the consistency of behavior over time.

Similarly, when analyzing natural selection, we need to observe the deaths and reproductions of organisms in order to construct our 'fitness' function - the very thing that we claim that the process optimizes. We are rescued from the well-known charge of 'tautology' only by the fact that we are explaining/predicting the fitness of the current generation of organisms, based on the observation of the fitness of prior generations. Not really a tautology, but also not really an explanation of as much as might be naively thought.

So, in my opinion, shadow's critique is quite correct when applied to the important optimization processes of natural selection and rational behavior/cognition. But the critique is not crippling.

But now, let us look at the kinds of 'optimization processes' that you were describing. Least action, 2nd law, the various MAXENT ideas of Lotka, Kay and Schneider, and Dewar together with the minimum entropy production theorem of Prigogine. As you know, we have been in disagreement (for almost a decade now) about whether these things even exist, and whether they qualify as optimization when they do exist (least action, 2nd law, Prigogine). We don't need to revive that debate. But you may be correct if you are claiming that shadow's 'fitting the theory to the observations' critique does not apply at all to your examples of 'optimization processes'. So, I apologize if it appeared that I was tarring them with the same shadow-brush which I applied to NS and rationality.

Least action, 2nd law, the various MAXENT ideas of Lotka, Kay and Schneider, and Dewar together with the minimum entropy production theorem of Prigogine. As you know, we have been in disagreement (for almost a decade now) about whether these things even exist, and whether they qualify as optimization when they do exist (least action, 2nd law, Prigogine). We don't need to revive that debate.

OK. From this - and some other things on this thread, it does sound as though we still have a disagreement in this area. This probably isn't the spot to go over that.

However, maybe something can be said now. For example, did you agree with my statement that water flowing downhill was essentially an optimisation process? If not, maybe I should say something now.

did you agree with my statement that water flowing downhill was essentially an optimisation process? If not, maybe I should say something now.

I did not agree, but I don't think you should say something now. I don't think it is useful to call the natural progression to a state of minimum free energy 'an optimization process'.

Admittedly, it does share some features with rational decision making and natural selection - notably the existence of an 'objective function' and a promise of monotone progress toward the 'objective' without the promise of an optimal final result within a finite time.

But it lacks a property that I will call 'retargetability'. By adjusting the environment we can redefine fitness - causing NS to send a population in a completely different evolutionary direction. We are still 'optimizing' fitness, and doing so using the same mechanisms, but the meaning of fitness has changed.

Similarly, by training a rational agent to have different tastes, we can redefine utility - causing rational decision making to choose a completely different set of actions. We are still 'optimizing' utility, and doing so using the same mechanisms, but the meaning of utility has changed.

I find it more difficult to imagine "retargeting" the meaning of 'downhill' for flowing water. And if you postulate some artificial environment (iron balls rolling on a table with magnets placed underneath) in which mechanics plus dissipation leads to some tunable result... well, then I might agree to call that process an optimization process.

You can do gradient descent (optimisation) on arbitrary 1D / 2D functions with it - and adding more dimensions is not that conceptually challenging.

I am not sure what optimisation problem can't easily have cold water poured on it ;-)
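For what it's worth, the "water flows downhill" picture corresponds to plain gradient descent on a height function. A minimal sketch (the surface and step size are invented for illustration):

```python
def height(x, y):
    return (x - 1) ** 2 + (y + 2) ** 2   # a bowl with its bottom at (1, -2)

def grad(x, y):
    # partial derivatives of height with respect to x and y
    return 2 * (x - 1), 2 * (y + 2)

x, y = 5.0, 5.0
for _ in range(200):
    gx, gy = grad(x, y)
    x -= 0.1 * gx    # each step moves a little way "downhill"
    y -= 0.1 * gy
# (x, y) ends up near the bottom of the bowl at (1, -2)
```

Changing the height function changes where the "water" ends up, which is the sense in which pouring cold water on an optimisation problem solves it.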

Also, "retargetability" sounds as though it is your own specification.

I don't see much about being "retargetable" here. So, it seems as though this is not a standard concern. If you wish to continue to claim that "retargetability" is to do with optimisation, I think you should provide a supporting reference.

FWIW, optimisation implies quite a bit more than just monotonic increase. You get a monotonic increase from 2LoT - which is a different idea, with less to do with the concept of optimisation. The idea of "maximising entropy" constrains expectations a lot more than the second law alone does.

Tim, if you wish to disagree, it might be polite to state the reasons for your disagreement.

My jaw dropped - since I was unable to find a sympathetic reading of your comment. You seemed to be expressing approval of material which I disapproved of.

However, I think I have now managed to find a plausible sympathetic reading - and it turns out that we don't really have a disagreement.

In an optimisation problem, there is an objective function and, often, a set of constraints. You are trying to find the best solution from all possible solutions. The objective function itself reveals preferences ('best' solution --- isn't that subjective?), and this is sometimes inherent, sometimes explicit.

I use the word 'optimisation' in its mathematical sense. And I know the difference between definitions and axioms. Objective functions are definitions, not axioms. You can't take them as facts! In an optimisation problem, you start with an objective function and a set of constraints, and then you work out the optimal solution. That is the real optimisation process. You, on the other hand, observe a phenomenon, and then explain it by giving it an objective function as a theory... although the phenomenon isn't efficient in giving the optimal outcome.

I'm pretty sure you're still not using the word "optimization" in the sense of the phrase "optimization process" as used on Less Wrong. An optimization process doesn't have to be a process that maximizes an explicitly-defined utility function; the function can be implicit in its structure or behaviour.

It's not really the same as the sense of "optimization" described in the aforelinked Wikipedia article, which isn't the subject of this discussion post. The terminology of "optimization processes" is used to analyze dynamics acting within a system.