Previously in series: Measuring Optimization Power
Is Deep Blue "intelligent"? It was powerful enough at optimizing chess boards to defeat Kasparov, perhaps the most skilled chess player humanity has ever fielded.
A bee builds hives, and a beaver builds dams; but a bee doesn't build dams and a beaver doesn't build hives. A human, watching, thinks, "Oh, I see how to do it" and goes on to build a dam using a honeycomb structure for extra strength.
Deep Blue, like the bee and the beaver, never ventured outside the narrow domain that it itself was optimized over.
There are no-free-lunch theorems showing that you can't have a truly general intelligence that optimizes in all possible universes (the vast majority of which are maximum-entropy heat baths). And even practically speaking, human beings are better at throwing spears than, say, writing computer programs.
But humans are much more cross-domain than bees, beavers, or Deep Blue. We might even conceivably be able to comprehend the halting behavior of every Turing machine up to 10 states, though I doubt it goes much higher than that.
Every mind operates in some domain, but the domain that humans operate in isn't "the savanna" but something more like "not too complicated processes in low-entropy lawful universes". We learn whole new domains by observation, in the same way that a beaver might learn to chew a different kind of wood. If I could write out your prior, I could describe more exactly the universes in which you operate.
Is evolution intelligent? It operates across domains - not quite as well as humans do, but with the same general ability to do consequentialist optimization on causal sequences that wend through widely different domains. It built the bee. It built the beaver.
Whatever begins with genes, and impacts inclusive genetic fitness, through any chain of cause and effect in any domain, is subject to evolutionary optimization. That much is true.
But evolution only achieves this by running millions of actual experiments in which the causal chains are actually played out. This is incredibly inefficient. Cynthia Kenyon said, "One grad student can do things in an hour that evolution could not do in a billion years." This is not because the grad student does quadrillions of detailed thought experiments in their imagination, but because the grad student abstracts over the search space.
By human standards, evolution is unbelievably stupid. It is the degenerate case of design with intelligence equal to zero, as befitting the accidentally occurring optimization process that got the whole thing started in the first place.
(As for saying that "evolution built humans, therefore it is efficient": this is, first, a sophomoric objection; and second, it confuses levels. Deep Blue's programmers were not superhuman chessplayers. The importance of distinguishing levels can be seen by noting that humans are efficiently optimizing human goals, which are not the same as evolution's goal of inclusive genetic fitness. Evolution, in producing humans, may have entirely doomed DNA.)
I once heard a senior mainstream AI type suggest that we might try to quantify the intelligence of an AI system in terms of its RAM, processing power, and sensory input bandwidth. This at once reminded me of a quote from Dijkstra: "If we wish to count lines of code, we should not regard them as 'lines produced' but as 'lines spent': the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger." If you want to measure the intelligence of a system, I would suggest measuring its optimization power as before, but then dividing by the resources used. Or you might measure the degree of prior cognitive optimization required to achieve the same result using equal or fewer resources. Intelligence, in other words, is efficient optimization.
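To make the "dividing by the resources used" idea a little more concrete, here is a minimal Python sketch. It assumes the optimization-power measure from the previous post - the negative log2 of the fraction of possible outcomes ranked at least as high, under the agent's preference ordering, as the outcome actually achieved - and a single scalar resource measure. The function names and the scalar `resources_used` are my own illustrative stand-ins, not anything formally defined here; which resource measure you pick (compute time, RAM, sensory bandwidth) matters a great deal.

```python
import math

def optimization_power_bits(outcome_rank, total_outcomes):
    """Optimization power, per the previous post: negative log2 of the
    fraction of possible outcomes that rank at least as high as the
    outcome actually achieved. `outcome_rank` counts how many outcomes
    are at least that good under the agent's preference ordering."""
    return -math.log2(outcome_rank / total_outcomes)

def intelligence_score(outcome_rank, total_outcomes, resources_used):
    """One hypothetical way to operationalize 'efficient optimization':
    bits of optimization power achieved per unit of resources spent."""
    return optimization_power_bits(outcome_rank, total_outcomes) / resources_used

# Two optimizers hit equally good outcomes (top 1 in a million),
# but one burns a thousand times more resources to get there.
print(intelligence_score(1, 10**6, resources_used=1.0))     # ~19.9 bits per unit
print(intelligence_score(1, 10**6, resources_used=1000.0))  # ~0.02 bits per unit
```

The toy numbers are only there to show that the two agents exert identical optimization power, so the entire difference in the score comes from the denominator - which is the point of calling intelligence efficient optimization rather than raw optimization.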
So if we say "efficient cross-domain optimization" - is that necessary and sufficient to convey the wisest meaning of "intelligence", after making a proper effort to factor out anthropomorphism in ranking solutions?
I do hereby propose: "Yes."
Years ago when I was on a panel with Jaron Lanier, he had offered some elaborate argument that no machine could be intelligent, because it was just a machine and to call it "intelligent" was therefore bad poetry, or something along those lines. Fed up, I finally snapped: "Do you mean to say that if I write a computer program and that computer program rewrites itself and rewrites itself and builds its own nanotechnology and zips off to Alpha Centauri and builds its own Dyson Sphere, that computer program is not intelligent?"
This, I think, is a core meaning of "intelligence" that it is wise to keep in mind.
I mean, maybe not that exact test. And it wouldn't be wise to bow too directly to human notions of "impressiveness", because this is what causes people to conclude that a butterfly must have been intelligently designed (they don't see the vast, incredibly wasteful trail of trial and error), or that an expected paperclip maximizer is stupid.
But still, intelligences ought to be able to do cool stuff, in a reasonable amount of time using reasonable resources, even if we throw things at them that they haven't seen before, or change the rules of the game (domain) a little. It is my contention that this is what's captured by the notion of "efficient cross-domain optimization".
Occasionally I hear someone say something along the lines of, "No matter how smart you are, a tiger can still eat you." Sure, if you get stripped naked and thrown into a pit with no chance to prepare and no prior training, you may be in trouble. And by the same token, a human can be killed by a large rock dropping on their head. That doesn't mean a big rock is more powerful than a human.
A large asteroid, falling on Earth, would make an impressive bang. But if we spot the asteroid, we can try to deflect it through any number of methods. With enough lead time, a can of black paint will do as well as a nuclear weapon. And the asteroid itself won't oppose us on our own level - won't try to think of a counterplan. It won't send out interceptors to block the nuclear weapon. It won't try to paint the opposite side of itself with more black paint, to keep its current trajectory. And if we stop that asteroid, the asteroid belt won't send another planet-killer in its place.
We might have to do some work to steer the future out of the unpleasant region it will go to if we do nothing, but the asteroid itself isn't steering the future in any meaningful sense. It's as simple as water flowing downhill, and if we nudge the asteroid off the path, it won't nudge itself back.
The tiger isn't quite like this. If you try to run, it will follow you. If you dodge, it will follow you. If you try to hide, it will spot you. If you climb a tree, it will wait beneath.
But if you come back with an armored tank - or maybe just a hunk of poisoned meat - the tiger is out of luck. You threw something at it that wasn't in the domain it was designed to learn about. The tiger can't do cross-domain optimization, so all you need to do is give it a little cross-domain nudge and it will spin off its course like a painted asteroid.
Steering the future, not energy or mass, not food or bullets, is the raw currency of conflict and cooperation among agents. Kasparov competed against Deep Blue to steer the chessboard into a region where he won - knights and bishops were only his pawns. And if Kasparov had been allowed to use any means to win against Deep Blue, rather than being artificially restricted, it would have been a trivial matter to kick the computer off the table - a rather light optimization pressure by comparison with Deep Blue's examining hundreds of millions of moves per second, or by comparison with Kasparov's pattern-recognition of the board; but it would have crossed domains into a causal chain that Deep Blue couldn't model and couldn't optimize and couldn't resist. One bit of optimization pressure is enough to flip a switch that a narrower opponent can't switch back.
A superior general can win with fewer troops, and superior technology can win with a handful of troops. But even a suitcase nuke requires at least a few kilograms of matter. If two intelligences of the same level compete with different resources, the battle will usually go to the wealthier.
The same is true, on a deeper level, of efficient designs using different amounts of computing power. Human beings, five hundred years after the Scientific Revolution, are only just starting to match their wits against the billion-year heritage of biology. We are vastly faster; biology has a vastly longer lead time. After five hundred years and a billion years respectively, the two powers are starting to balance.
But as a measure of intelligence, I think it is better to speak of how well you can use your resources - if we want to talk about raw impact, then we can speak of optimization power directly.
So again I claim that this - computationally-frugal cross-domain future-steering - is the necessary and sufficient meaning that the wise should attach to the word, "intelligence".
IMO, there's already plenty of room to include any efficiency criteria in the function being optimized.
So: there's no need to say "efficient" - but what you do need to say, conventionally, is something about the ability to solve a range of different types of problems. Include that, and consistently inefficient solutions get penalized automatically on problems where efficiency is specified in the utility function.