Previously in series: Aiming at the Target
Yesterday I spoke of how "When I think you're a powerful intelligence, and I think I know something about your preferences, then I'll predict that you'll steer reality into regions that are higher in your preference ordering."
You can quantify this, at least in theory, supposing you have (A) the agent or optimization process's preference ordering, and (B) a measure of the space of outcomes - which, for discrete outcomes in a finite space of possibilities, could just consist of counting them. Then you can quantify how small a target is being hit, within how large a greater region.
Then we count the total number of states with equal or greater rank in the preference ordering to the outcome achieved, or integrate over the measure of states with equal or greater rank. Dividing this by the total size of the space gives you the relative smallness of the target - did you hit an outcome that was one in a million? One in a trillion?
Actually, most optimization processes produce "surprises" that are exponentially more improbable than this - you'd need to try far more than a trillion random reorderings of the letters in a book, to produce a play of quality equalling or exceeding Shakespeare. So we take the log base two of the reciprocal of the improbability, and that gives us optimization power in bits.
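As a minimal sketch (my code, not from the post), here is how that quantity could be computed for a finite space of discrete outcomes with a uniform measure; the function and variable names are purely illustrative:

```python
import math

def optimization_power_bits(outcome, all_outcomes, preference_rank):
    """Bits of optimization: log2 of (size of the outcome space) divided by
    (number of outcomes ranked equal to or better than the achieved outcome)."""
    as_good_or_better = sum(
        1 for o in all_outcomes if preference_rank(o) >= preference_rank(outcome)
    )
    return math.log2(len(all_outcomes) / as_good_or_better)

# Toy example: a million outcomes, preference is simply "bigger is better".
outcomes = range(1_000_000)
print(optimization_power_bits(999_999, outcomes, lambda o: o))  # ~19.9 bits: one in a million
```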
This figure - roughly, the improbability of an "equally preferred" outcome being produced by a random selection from the space (or measure on the space) - forms the foundation of my Bayesian view of intelligence, or to be precise, optimization power. It has many subtleties:
(1) The wise will recognize that we are calculating the entropy of something. We could take the figure of the relative improbability of "equally good or better" outcomes, and call this the negentropy of the system relative to a preference ordering. Unlike thermodynamic entropy, the entropy of a system relative to a preference ordering can easily decrease (that is, the negentropy can increase, that is, things can get better over time relative to a preference ordering).
Suppose e.g. that a single switch will determine whether the world is saved or destroyed, and you don't know whether the switch is set to 1 or 0. You can carry out an operation that coerces the switch to 1; in accordance with the second law of thermodynamics, this requires you to dump one bit of entropy somewhere, e.g. by radiating a single photon of waste heat into the void. But you don't care about that photon - it's not alive, it's not sentient, it doesn't hurt - whereas you care a very great deal about the switch.
For some odd reason, I had the above insight while watching X TV. (Those of you who've seen it know why this is funny.)
Taking physical entropy out of propositional variables that you care about - coercing them from unoptimized states into optimized states - and dumping the entropy into residual variables that you don't care about, means that relative to your preference ordering, the total "entropy" of the universe goes down. This is pretty much what life is all about.
We care more about the variables we plan to alter, than we care about the waste heat emitted by our brains. If this were not the case - if our preferences didn't neatly compartmentalize the universe into cared-for propositional variables and everything else - then the second law of thermodynamics would prohibit us from ever doing optimization. Just like there are no-free-lunch theorems showing that cognition is impossible in a maxentropy universe, optimization will prove futile if you have maxentropy preferences. Having maximally disordered preferences over an ordered universe is pretty much the same dilemma as the reverse.
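A toy numerical version of the switch-and-photon example above (my code, not from the post; the distributions are made up) shows the bookkeeping: total entropy is conserved, but the entropy over the variables we care about drops.

```python
import math

def entropy_bits(p):
    """Shannon entropy in bits of a distribution given as a list of probabilities."""
    return -sum(q * math.log2(q) for q in p if 0 < q < 1)

# Before optimization: the switch we care about is 50/50; the waste-heat
# degree of freedom is in a known (zero-entropy) state.
switch_before, photon_before = [0.5, 0.5], [1.0]

# After optimization: the switch is coerced to the preferred setting, and the
# one bit of entropy is dumped into the photon we don't care about.
switch_after, photon_after = [1.0], [0.5, 0.5]

print(entropy_bits(switch_before), entropy_bits(switch_before) + entropy_bits(photon_before))
# switch: 1.0 bit, total: 1.0 bit
print(entropy_bits(switch_after), entropy_bits(switch_after) + entropy_bits(photon_after))
# switch: 0 bits, total: still 1.0 bit - the entropy moved into a variable we don't care about
```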
(2) The quantity we're measuring tells us how improbable this event is, in the absence of optimization, relative to some prior measure that describes the unoptimized probabilities. To look at it another way, the quantity is how surprised you would be by the event, conditional on the hypothesis that there were no optimization processes around. This plugs directly into Bayesian updating: it says that highly optimized events are strong evidence for optimization processes that produce them.
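A minimal Bayesian sketch of that point (my code; the prior and the 40-bit surprise are made-up numbers, purely for illustration):

```python
# Hypotheses: "no optimizer around" vs. "an optimizer with a known target".
prior_optimizer = 0.01
prior_no_optimizer = 0.99

# Probability of seeing an outcome at least this good under each hypothesis:
p_outcome_given_no_optimizer = 2 ** -40   # a 40-bit "surprise" under the random baseline
p_outcome_given_optimizer = 0.5           # the optimizer usually hits its target

posterior_optimizer = (prior_optimizer * p_outcome_given_optimizer) / (
    prior_optimizer * p_outcome_given_optimizer
    + prior_no_optimizer * p_outcome_given_no_optimizer
)
print(posterior_optimizer)  # ~1.0: a highly optimized outcome is strong evidence for an optimizer
```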
Ah, but how do you know a mind's preference ordering? Suppose you flip a coin 30 times and it comes up with some random-looking string - how do you know this wasn't because a mind wanted it to produce that string?
This, in turn, is reminiscent of the Minimum Message Length formulation of Occam's Razor: if you send me a message telling me what a mind wants and how powerful it is, then this should enable you to compress your description of future events and observations, so that the total message is shorter. Otherwise there is no predictive benefit to viewing a system as an optimization process. This criterion tells us when to take the intentional stance.
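As a rough schematic of that criterion (my notation, not from the post), with $L(\cdot)$ denoting message length in bits, take the intentional stance only when

$$L(\text{optimizer's preferences and power}) + L(\text{observations} \mid \text{optimizer hypothesis}) \;<\; L(\text{observations} \mid \text{no-optimizer baseline}).$$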
(3) Actually, you need to fit another criterion to take the intentional stance - there can't be a better description that averts the need to talk about optimization. This is an epistemic criterion more than a physical one - a sufficiently powerful mind might have no need to take the intentional stance toward a human, because it could just model the regularity of our brains like moving parts in a machine.
(4) If you have a coin that always comes up heads, there's no need to say "The coin always wants to come up heads" because you can just say "the coin always comes up heads". Optimization will beat alternative mechanical explanations when our ability to perturb a system defeats our ability to predict its interim steps in detail, but not our ability to predict a narrow final outcome. (Again, note that this is an epistemic criterion.)
(5) Suppose you believe a mind exists, but you don't know its preferences? Then you use some of your evidence to infer the mind's preference ordering, and then use the inferred preferences to infer the mind's power, then use those two beliefs to testably predict future outcomes. The total gain in predictive accuracy should exceed the complexity-cost of supposing that "there's a mind of unknown preferences around", the initial hypothesis.
Similarly, if you're not sure whether there's an optimizer around, some of your evidence-fuel burns to support the hypothesis that there's an optimizer around, some of your evidence is expended to infer its target, and some of your evidence is expended to infer its power. The rest of the evidence should be well explained, or better yet predicted in advance, by this inferred data: this is your revenue on the transaction, which should exceed the costs just incurred, making an epistemic profit.
(6) If you presume that you know (from a superior epistemic vantage point) the probabilistic consequences of an action or plan, or if you measure the consequences repeatedly, and if you know or infer a utility function rather than just a preference ordering, then you might be able to talk about the degree of optimization of an action or plan rather than just the negentropy of a final outcome. We talk about the degree to which a plan has "improbably" high expected utility, relative to a measure over the space of all possible plans.
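A minimal sketch of that plan-level measure (my code, not from the post), assuming a uniform measure over a finite set of candidate plans whose expected utilities we can evaluate; the names and numbers are illustrative:

```python
import math
import random

def plan_optimization_bits(chosen_plan_eu, all_plan_eus):
    """Bits of optimization of a plan: log2 of the reciprocal of the fraction of plans
    (under a uniform measure over the plan space) with expected utility at least as
    high as the chosen plan's."""
    as_good_or_better = sum(1 for eu in all_plan_eus if eu >= chosen_plan_eu)
    return math.log2(len(all_plan_eus) / as_good_or_better)

# Toy example: a million candidate plans with made-up expected utilities.
random.seed(0)
all_plan_eus = [random.gauss(0, 1) for _ in range(1_000_000)]
chosen = max(all_plan_eus)  # the planner picks the best plan it found
print(plan_optimization_bits(chosen, all_plan_eus))  # ~20 bits: a one-in-a-million plan
```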
(7) A similar presumption that we can measure the instrumental value of a device, relative to a terminal utility function, lets us talk about a Toyota Corolla as an "optimized" physical object, even though we attach little terminal value to it per se.
(8) If you're a human yourself and you take the measure of a problem, then there may be "obvious" solutions that don't count for much in your view, even though the solution might be very hard for a chimpanzee to find, or a snail. Roughly, because your own mind is efficient enough to calculate the solution without an apparent expenditure of internal effort, a solution that good will seem to have high probability, and so an equally good solution will not seem very improbable.
By presuming a base level of intelligence, we measure the improbability of a solution that "would take us some effort", rather than the improbability of the same solution emerging from a random noise generator. This is one reason why many people say things like "There has been no progress in AI; machines still aren't intelligent at all." There are legitimate abilities that modern algorithms entirely lack, but mostly what they're seeing is that AI is "dumber than a village idiot" - it doesn't even do as well as the "obvious" solutions that get most of the human's intuitive measure, let alone surprisingly better than that; it seems anti-intelligent, stupid.
To measure the impressiveness of a solution to a human, you've got to do a few things that are a bit more complicated than just measuring optimization power. For example, if a human sees an obvious computer program to compute many solutions, they will measure the total impressiveness of all the solutions as being no more than the impressiveness of writing the computer program - but from the internal perspective of the computer program, it might seem to be making a metaphorical effort on each additional occasion. From the perspective of Deep Blue's programmers, Deep Blue is a one-time optimization cost; from Deep Blue's perspective it has to optimize each chess game individually.
To measure human impressiveness you have to talk quite a bit about humans - how humans compact the search space, the meta-level on which humans approach a problem. People who try to take human impressiveness as their primitive measure will run into difficulties, because in fact the measure is not very primitive.
(9) For the vast majority of real-world problems we will not be able to calculate exact optimization powers, any more than we can do actual Bayesian updating over all hypotheses, or actual expected utility maximization in our planning. But, just like Bayesian updating or expected utility maximization, the notion of optimization power does give us a gold standard against which to measure - a simple mathematical idea of what we are trying to do whenever we essay more complicated and efficient algorithms.
(10) "Intelligence" is efficient cross-domain optimization.
When I started writing this comment I was confused. Then I got myself somewhat less confused, I think. I am going to say a bunch of things to explain my confusion and how I tried to get less confused, and then I will ask a couple of questions. This comment got really long, and I may decide that it should be a post instead.
Take a system X with 8 possible states. Imagine X is like a simplified Rubik's-cube-type puzzle. (Thinking about mechanical Rubik's cube solvers is how I originally got confused, but using actual Rubik's cubes to explain would make the math harder.) Suppose I want to measure the optimization power of two different optimizers that operate on X and share the following preference ordering:
$$x_1 \sim x_2 \sim x_3 \sim x_4 \sim x_5 \sim x_6 < x_7 < x_8$$
When I let optimizer1 operate on X, it always leaves $X = x_8$. So the first time I give X to optimizer1, I get:
$$OP = \log_2(8/1) = 3$$
If I give X to optimizer1 a second time I get:
$$OP(X_1) = \log_2(8/1) = 3$$
$$OP(X_2) = \log_2(8/1) = 3$$
$$OP = \log_2(64/1) = OP(X_1) + OP(X_2) = 6$$
This seems a bit weird to me. If we are imagining a mechanical robot with a camera that solves a Rubik's cube like puzzle, it seems weird to say that the solver gets stronger if I let it operate on the puzzle twice. I guess this would make sense for a measure of optimization pressure exerted instead of a measure of the power of the system, but that doesn't seem to be what the post was going for exactly. I guess we could fix this by dividing by the number of times we give optimizer1 X, and then we would get 3 no matter how many times we let optimizer1 operate on X. This would avoid the weird result that a mechanical puzzle solver gets more powerful the more times we let it operate on the puzzle.
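To make the bookkeeping concrete, here is a minimal sketch in Python of the toy system above (my code, not from the post or comment; names are illustrative), with a uniform measure over the 8 states:

```python
import math

N_STATES = 8
# States x1..x6 are equally dispreferred; x7 is better; x8 is best.
rank = {f"x{i}": 0 for i in range(1, 7)}
rank.update({"x7": 1, "x8": 2})

def op_bits(outcome):
    """Optimization power of a single observed outcome, in bits, under a uniform measure."""
    as_good_or_better = sum(1 for s in rank if rank[s] >= rank[outcome])
    return math.log2(N_STATES / as_good_or_better)

# optimizer1 always leaves x8; give it the puzzle twice:
runs = ["x8", "x8"]
per_run = [op_bits(o) for o in runs]
print(per_run, sum(per_run))      # [3.0, 3.0] 6.0 - the bits sum across runs
print(sum(per_run) / len(runs))   # 3.0 - the "divide by number of runs" fix
```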
Say that when I let optimizer2 operate on X, it leaves $X = x_7$ with probability $p$ and $X = x_8$ with probability $1 - p$, but I do not know $p$. If I let optimizer2 operate on X one time, and I observe $X = x_7$, I get:
$$OP = \log_2(8/2) = 2$$
If I let optimizer2 operate on X three times, and I observe $X_1 = x_7$, $X_2 = x_7$, $X_3 = x_8$, then I get:
$$OP(X_1) = \log_2(8/2) = 2$$
$$OP(X_2) = \log_2(8/2) = 2$$
$$OP(X_3) = \log_2(8/1) = 3$$
$$OP = \log_2(512/4) = OP(X_1) + OP(X_2) + OP(X_3) = 7$$
Now we could use the same trick we used before and divide by the number of instances on which optimizer2 was allowed to exert optimization pressure, and this would give us 7/3. The thing is, though, that we do not know $p$, and it seems like $p$ is relevant to how strong optimizer2 is. We can estimate $1 - p$ to be 2/5 using Laplace's rule, but it might be that the long-run frequency of times that optimizer2 leaves $X = x_8$ is actually .9999 and we just got unlucky. (I'm not a frequentist; long-run frequency just seemed like the closest concept. Feel free to replace "long-run frequency" with the probability a Solomonoff bot using the correct language assigns in the limit, or anything else reasonable.) If that long-run frequency is in fact that large, then it seems like we are underestimating the power of optimizer2 just because we got a bad sample of its performance. The higher $1 - p$ is, the more we are underestimating optimizer2 when we measure its power from these observations.
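A quick numerical illustration of this worry (my code; the "true" frequency of .9999 is just the made-up value from above):

```python
# Observed: x7, x7, x8 -> measured pressure 2 + 2 + 3 = 7 bits, i.e. 7/3 bits per run.
n_runs, n_x8 = 3, 1
laplace_x8 = (n_x8 + 1) / (n_runs + 2)   # 2/5: Laplace's-rule estimate of the x8 frequency
measured_per_run = 7 / n_runs            # ~2.33 bits per run from this sample

# But if the true long-run frequency of x8 were 0.9999, the per-run expectation
# would be close to 3 bits, and this sample badly understates the optimizer's strength:
true_x8 = 0.9999
expected_per_run = (1 - true_x8) * 2 + true_x8 * 3
print(laplace_x8, measured_per_run, expected_per_run)  # 0.4, ~2.33, ~3.0
```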
So it seems, then, like there is another thing we need to know besides the preference ordering of the optimizer, the measure over the target system in the absence of optimization, and the observed state of the target system, in order to perfectly measure the optimization power of an optimizer: in this case, it seems like we need to know $p$. This is a pretty easy fix; we can just take the expectation of the optimization power as originally defined with respect to the probability of observing each state when the optimizer is present, but it does seem more complicated, and it is different.
With $o$ being the observed outcome, $U$ being the utility function of the optimization process, and $P$ being the distribution over outcomes in the absence of optimization, I took the definition in the original post to be:
$$\log_2\left(\frac{1}{\sum_{i \,:\, U(A_i) \ge U(o)} P(A_i)}\right)$$
The definition I am proposing instead is:
$$\mathbb{E}_{P(o \mid \text{optimizer})}\left[\log_2\left(\frac{1}{\sum_{i \,:\, U(A_i) \ge U(o)} P(A_i \mid \lnot\text{optimizer})}\right)\right]$$
That is, you take the expectation of the original measure with respect to the distribution over outcomes you expect to observe in the presence of optimization. We could then call the original measure "optimization pressure exerted", and the second measure optimization power. For systems that are only allowed to optimize once, like humans, these values are very similar; for systems that might exert their full optimization power on several occasions depending on circumstance, like Rubik's cube solvers, these values will be different insofar as the system is allowed to optimize several times. We can think of the first measure as measuring the actual amount of optimization pressure that was exerted on the target system on a particular instance, and we can think of the second measure as the expected amount of optimization pressure that the optimizer exerts on the target system.
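Here is a minimal sketch of this proposed measure for the toy system (my code; it assumes we are simply handed optimizer2's true outcome distribution, and the value p = 0.6 is just the Laplace-style point estimate from above, used for illustration):

```python
import math

N_STATES = 8
rank = {f"x{i}": 0 for i in range(1, 7)}
rank.update({"x7": 1, "x8": 2})

def pressure_bits(outcome):
    """Optimization pressure exerted on one instance: the original per-outcome measure."""
    as_good_or_better = sum(1 for s in rank if rank[s] >= rank[outcome])
    return math.log2(N_STATES / as_good_or_better)

def expected_op_bits(outcome_dist):
    """Proposed optimization power: expected pressure under the distribution over
    outcomes when the optimizer is present."""
    return sum(prob * pressure_bits(outcome) for outcome, prob in outcome_dist.items())

# optimizer2 leaves x7 with probability p and x8 with probability 1 - p:
p = 0.6
print(expected_op_bits({"x7": p, "x8": 1 - p}))  # 0.6*2 + 0.4*3 = 2.4 bits per run
```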
To hammer the point home, there is the amount of optimization pressure that I in fact exerted on the universe this time around. Say it was a trillion bits. Then there is the expected amount of optimization pressure that I exert on the universe in a given life. Maybe I just got lucky (or unlucky) on this go around. It could be that if you reran the universe from the point at which I was born several times while varying some things that seem irrelevant, I would on average only increase the negentropy of variables I care about by a million bits. If that were the case, then using the amount of optimization pressure that I exerted on this go around as an estimate of my optimization power in general would be a huge underestimate.
Ok, so what's up here? This seems like an easy thing to notice, and I'm sure Eliezer noticed it.
Eliezer talks about how, from the perspective of Deep Blue, it is exerting optimization pressure every time it plays a game, but from the perspective of the programmers, creating Deep Blue was a one-time optimization cost. Is that a different way to cash out the same thing? It still seems weird to me to say that the more times Deep Blue plays chess, the higher its optimization power is. It does not seem weird to me to say that the more times a human plays chess, the higher their optimization power is: each chess game is a subsystem of that human's target system, e.g., the environment over time. Whereas it does seem weird to me to say that if you uploaded my brain and let it operate on the same universe 100 times, the optimization power of my uploaded brain would be 100 times greater than if you only did this once.
This is a consequence of one of the nice properties of Eliezer's measure: OP sums for independent systems. It makes sense that if I think an optimizer is optimizing two independent systems, then when I measure its OP with respect to the first system and add it to its OP with respect to the second, I should get the same answer I would get if I treated the two systems jointly as one system. The Rubik's cube the first time I give it to a mechanical Rubik's cube solver, and the second time I give it to a mechanical Rubik's cube solver, are in fact two such independent systems. So are the first time you simulate the universe after my birth and the second time. It makes sense to me that my optimization power for independent parts of the universe on a particular go-around should sum to my optimization power with respect to the two systems taken jointly as one, but it doesn't make sense to me that you should just add the optimization pressure I exert on each go to get my total optimization power. Does the measure I propose here actually sum nicely with respect to independent systems? It seems like it might, but I'm not sure.
Is this just the same as Eliezer's proposal for measuring optimization power for mixed outcomes? Seems pretty different, but maybe it isn't. Maybe this is another way to extend optimization power to mixed outcomes? It does take into account that the agent might not take an action that guarantees an outcome with certainty.
Is there some way that I am confused, or missing something in the original post, that I am not aware of?