"However many ways there may be of being alive, it is certain that there are vastly more ways of being dead."
-- Richard Dawkins
In the coming days, I expect to be asked: "Ah, but what do you mean by 'intelligence'?" By way of untangling some of my dependency network for future posts, I here summarize some of my notions of "optimization".
Consider a car; say, a Toyota Corolla. The Corolla is made up of some number of atoms; say, on the rough order of 10^29. If you consider all possible ways to arrange 10^29 atoms, only an infinitesimally tiny fraction of possible configurations would qualify as a car; if you picked one random configuration per Planck interval, many ages of the universe would pass before you hit on a wheeled wagon, let alone an internal combustion engine.
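To put rough numbers on that claim, here is a back-of-the-envelope sketch. The physical constants are rounded, and the fraction of configurations that count as a "wheeled wagon" is a deliberately generous guess of mine, not a real estimate:

```python
# Back-of-the-envelope sketch of the "ages of the universe" claim.
# Constants are rounded; the target fraction is a generous guess.

AGE_OF_UNIVERSE_S = 4.35e17   # ~13.8 billion years, in seconds
PLANCK_TIME_S = 5.39e-44      # one Planck interval, in seconds

samples_per_universe_age = AGE_OF_UNIVERSE_S / PLANCK_TIME_S
print(f"{samples_per_universe_age:.1e} configurations sampled per age of the universe")  # ~8.1e60

# Suppose, very generously, that one configuration in 10^100 counts as a
# wheeled wagon.  Expected number of universe-ages before chance hits one:
target_fraction = 1e-100
universe_ages_needed = 1 / (target_fraction * samples_per_universe_age)
print(f"{universe_ages_needed:.1e} ages of the universe needed on average")  # ~1.2e39
```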
Even restricting our attention to running vehicles, there is an astronomically huge design space of possible vehicles that could be composed of the same atoms as the Corolla, and most of them, from the perspective of a human user, won't work quite as well. We could take the parts in the Corolla's air conditioner, and mix them up in thousands of possible configurations; nearly all these configurations would result in a vehicle lower in our preference ordering, still recognizable as a car but lacking a working air conditioner.
So there are many more configurations corresponding to nonvehicles, or vehicles lower in our preference ranking, than vehicles ranked greater than or equal to the Corolla.
Similarly with the problem of planning, which also involves hitting tiny targets in a huge search space. Consider the number of possible legal chess moves versus the number of winning moves.
Which suggests one theoretical way to measure optimization - to quantify the power of a mind or mindlike process:
Put a measure on the state space - if it's discrete, you can just count. Then collect all the states which are equal to or greater than the observed outcome, in that optimization process's implicit or explicit preference ordering. Sum or integrate over the total size of all such states. Divide by the total volume of the state space. This gives you the power of the optimization process measured in terms of the improbabilities that it can produce - that is, improbability of a random selection producing an equally good result, relative to a measure and a preference ordering.
If you prefer, you can take the reciprocal of this improbability (1/1000 becomes 1000) and then take the logarithm base 2. This gives you the power of the optimization process in bits. An optimizer that exerts 20 bits of power can hit a target that's one in a million.
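To make the bookkeeping concrete, here is a minimal sketch of that calculation over a small discrete state space. The state space, scoring function, and observed outcome are toy choices for illustration, not anything specified above:

```python
import math

def optimization_power_bits(states, preference, outcome):
    """Bits of optimization: -log2 of the fraction of states that rank
    at least as high as the observed outcome under `preference`."""
    at_least_as_good = sum(1 for s in states if preference(s) >= preference(outcome))
    return -math.log2(at_least_as_good / len(states))

# Toy example: a million equally weighted states scored by their index,
# and an optimizer that lands on the single best one.
states = range(1_000_000)
score = lambda s: s
print(optimization_power_bits(states, score, outcome=999_999))  # ~19.93 bits
```

With a million equally weighted states and an optimizer that hits the single best one, this comes out just under 20 bits - the one-in-a-million target mentioned above.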
When I think you're a powerful intelligence, and I think I know something about your preferences, then I'll predict that you'll steer reality into regions that are higher in your preference ordering. The more intelligent I believe you are, the more probability I'll concentrate into outcomes that I believe are higher in your preference ordering.
There are a number of subtleties here, some less obvious than others. I'll return to this whole topic in a later sequence. Meanwhile:
* A tiny fraction of the design space does describe vehicles that we would recognize as faster, more fuel-efficient, safer than the Corolla, so the Corolla is not optimal. The Corolla is, however, optimized, because the human designer had to hit an infinitesimal target in design space just to create a working car, let alone a car of Corolla-equivalent quality. This is not to be taken as praise of the Corolla, as such; you could say the same of the Hillman Minx. You can't build so much as a wooden wagon by sawing boards into random shapes and nailing them together according to coinflips.
* When I talk to a popular audience on this topic, someone usually says: "But isn't this what the creationists argue? That if you took a bunch of atoms and put them in a box and shook them up, it would be astonishingly improbable for a fully functioning rabbit to fall out?" But the logical flaw in the creationists' argument is not the claim that randomly reconfiguring molecules would be absurdly unlikely to assemble a rabbit by pure chance. The flaw is in overlooking that there is a process, natural selection, which, through the non-chance retention of chance mutations, selectively accumulates complexity, until a few billion years later it produces a rabbit.
* I once heard a senior mainstream AI type suggest that we might try to quantify the intelligence of an AI system in terms of its RAM, processing power, and sensory input bandwidth. This at once reminded me of a quote from Dijkstra: "If we wish to count lines of code, we should not regard them as 'lines produced' but as 'lines spent': the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger." If you want to measure the intelligence of a system, I would suggest measuring its optimization power as before, but then dividing by the resources used (see the sketch after this list). Or you might measure the degree of prior cognitive optimization required to achieve the same result using equal or fewer resources. Intelligence, in other words, is efficient optimization. This is why I say that evolution is stupid by human standards, even though we can't yet build a butterfly: human engineers use vastly less time and far fewer material resources than a global ecosystem of millions of species proceeding through biological evolution, and so we're catching up fast.
* The notion of a "powerful optimization process" is necessary and sufficient for a discussion of an Artificial Intelligence that could harm or benefit humanity on a global scale. If you say that an AI is mechanical and therefore "not really intelligent", and it outputs an action sequence that hacks into the Internet, constructs molecular nanotechnology, and wipes the solar system clean of human(e) intelligence, you are still dead. Conversely, an AI that has only a very weak ability to steer the future into regions high in its preference ordering will not be able to do much to benefit or harm humanity.
* How do you know a mind's preference ordering? If this can't be taken for granted, then you use some of your evidence to infer the mind's preference ordering, then use the inferred preferences to infer the mind's power, and then use those two beliefs to testably predict future outcomes. Or you can use the Minimum Message Length formulation of Occam's Razor: if you send me a message telling me what a mind wants and how powerful it is, then this should enable you to compress your description of future events and observations, so that the total message is shorter (see the sketch after this list). Otherwise there is no predictive benefit to viewing a system as an optimization process.
* In general, it is useful to think of a process as "optimizing" when it is easier to predict by thinking about its goals, than by trying to predict its exact internal state and exact actions. If you're playing chess against Deep Blue, you will find it much easier to predict that Deep Blue will win (that is, the final board position will occupy the class of states previously labeled "wins for Deep Blue") than to predict the exact final board position or Deep Blue's exact sequence of moves. Normally, it is not possible to predict, say, the final state of a billiards table after a shot, without extrapolating all the events along the way.
* Although the human cognitive architecture uses the same label "good" to reflect judgments about terminal values and instrumental values, this doesn't mean that all sufficiently powerful optimization processes share the same preference ordering. Some possible minds will be steering the future into regions that are not good.
* If you came across alien machinery in space, then you might be able to infer the presence of optimization (and hence presumably powerful optimization processes standing behind it as a cause) without inferring the aliens' final goals, by way of noticing the fulfillment of convergent instrumental values. You can look at cables through which large electrical currents are running, and be astonished to realize that the cables are flexible high-temperature high-amperage superconductors; an amazingly good solution to the subproblem of transporting electricity that is generated in a central location and used distantly. You can assess this, even if you have no idea what the electricity is being used for.
* If you want to take probabilistic outcomes into account in judging a mind's wisdom, then you have to know or infer a utility function for the mind, not just a preference ranking for the optimization process. Then you can ask how many possible plans would have equal or greater expected utility (see the sketch after this list). This assumes that you have some probability distribution, which you believe to be true; but if the other mind is smarter than you, it may have a better probability distribution, in which case you will underestimate its optimization power. The chief sign of this would be if the mind consistently achieves higher average utility than the average expected utility you assign to its plans.
* When an optimization process seems to have an inconsistent preference ranking - for example, it's quite possible in evolutionary biology for allele A to beat out allele B, which beats allele C, which beats allele A - then you can't interpret the system as performing optimization as it churns through its cycles (see the cycle-check sketch after this list). Intelligence is efficient optimization; churning through preference cycles is stupid, unless the interim states of churning have high terminal utility.
* For domains outside the small and formal, it is not possible to exactly measure optimization, just as it is not possible to do exact Bayesian updates or to perfectly maximize expected utility. Nonetheless, optimization can be a useful concept, just like the concept of Bayesian probability or expected utility - it describes the ideal you're trying to approximate with other measures.
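On the "intelligence is efficient optimization" bullet above, here is a minimal sketch of dividing optimization power by resources spent. The bit counts and resource figures are entirely hypothetical; the only point is the shape of the comparison:

```python
def optimization_efficiency(power_bits, resource_units):
    """Crude efficiency measure: bits of optimization achieved per unit of
    resource spent, for whatever resource you choose to count
    (time, mass, energy, compute)."""
    return power_bits / resource_units

# Hypothetical figures purely for illustration: two optimizers hit targets
# of comparable improbability, but one burns vastly more resources doing so.
engineers = optimization_efficiency(power_bits=300, resource_units=1e4)
evolution = optimization_efficiency(power_bits=300, resource_units=1e17)
print(engineers / evolution)  # ~1e13: far more bits per resource unit
```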
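On the Minimum Message Length bullet above, here is a toy sketch of the compression test. The encoding scheme and every number in it are invented for illustration; the point is only that stating "a mind with such-and-such preferences and power" plus the compressed observations can total fewer bits than describing the observations directly:

```python
import math

N_STATES = 2**20       # size of a discrete, uniform-measure state space
N_OBSERVATIONS = 50    # how many outcomes of the process we observed
POWER_BITS = 18        # hypothesis: outcomes land in the top 2**-18 fraction

# Null model (no optimizer): each observation costs log2(N_STATES) bits.
null_length = N_OBSERVATIONS * math.log2(N_STATES)

# Optimizer model: first pay a crude, generous cost to state the model
# (the preference threshold and the power), then locate each observation
# within the preferred region, which is smaller by a factor of 2**POWER_BITS.
model_cost = 2 * math.log2(N_STATES)
data_cost = N_OBSERVATIONS * (math.log2(N_STATES) - POWER_BITS)
optimizer_length = model_cost + data_cost

print(null_length, optimizer_length)  # 1000.0 vs 140.0: the optimizer model compresses
```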
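On the expected-utility bullet above, here is a minimal sketch of counting the plans whose expected utility is at least that of the plan the mind actually chose. The plan space, utility function, and probability model are all made up for the example:

```python
import math
from itertools import product

# Toy setup: a plan is a pair of choices, outcomes are stochastic, and we
# score plans by expected utility under *our* probability model (which may
# be worse than the mind's own model).
plans = list(product(range(10), range(10)))   # 100 possible plans

def expected_utility(plan):
    a, b = plan
    # Assumed model: utility a + b with probability 0.9, else a + b - 5.
    return 0.9 * (a + b) + 0.1 * (a + b - 5)

chosen = (9, 8)                               # the plan the mind actually picked
threshold = expected_utility(chosen)
as_good = sum(1 for p in plans if expected_utility(p) >= threshold)
print(-math.log2(as_good / len(plans)))       # ~5.06 bits of optimization
```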
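On the inconsistent-preference bullet above, here is a small sketch that checks whether an observed "beats" relation contains a cycle, in which case it cannot be read as a consistent preference ranking at all. The dictionary encoding of the relation is my own toy convention:

```python
def has_preference_cycle(beats):
    """Return True if the 'beats' relation (a dict mapping each option to the
    options it beats) contains a cycle, i.e. cannot be interpreted as a
    consistent, acyclic preference ordering."""
    def reachable(start, target, seen=()):
        for nxt in beats.get(start, ()):
            if nxt == target or (nxt not in seen and reachable(nxt, target, seen + (nxt,))):
                return True
        return False
    return any(reachable(x, x) for x in beats)

# Allele A beats B, B beats C, C beats A: no consistent ranking exists.
print(has_preference_cycle({"A": ["B"], "B": ["C"], "C": ["A"]}))  # True
print(has_preference_cycle({"A": ["B", "C"], "B": ["C"]}))         # False
```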
I am suspicious of attempts to define intelligence for the following reason. Too often, they lead the definer down a narrow and ultimately fruitless path. If you define intelligence as the ability to perform some function XYZ, then you can sit down and start trying to hack together a system that does XYZ. Almost invariably this will result in a system that achieves some superficial imitation of XYZ and very little else.
Rather than attempting to define intelligence and move in a determined path toward that goal, we should look around for novel insights and explore their implications.
Imagine if Newton had followed the approach of "define physics and then move toward it". He might have decided that physics is the ability to build large structures (certainly an understanding of physics is helpful, if not required, for this). He might then have spent all his time investigating the material properties of various kinds of stone - useful, perhaps, but missing the big picture. Instead he looked in the most unlikely places for something interesting that had very little immediate practical application. That should be our mindset in pursuing AI: the scientist's approach rather than the engineer's.