Previously in series: Measuring Optimization Power

Is Deep Blue "intelligent"?  It was powerful enough at optimizing chess boards to defeat Kasparov, perhaps the most skilled chess player humanity has ever fielded.

A bee builds hives, and a beaver builds dams; but a bee doesn't build dams and a beaver doesn't build hives.  A human, watching, thinks, "Oh, I see how to do it" and goes on to build a dam using a honeycomb structure for extra strength.

Deep Blue, like the bee and the beaver, never ventured outside the narrow domain that it itself was optimized over.

There are no-free-lunch theorems showing that you can't have a truly general intelligence that optimizes in all possible universes (the vast majority of which are maximum-entropy heat baths).  And even practically speaking, human beings are better at throwing spears than, say, writing computer programs.

But humans are much more cross-domain than bees, beavers, or Deep Blue.  We might even conceivably be able to comprehend the halting behavior of every Turing machine up to 10 states, though I doubt it goes much higher than that.

Every mind operates in some domain, but the domain that humans operate in isn't "the savanna" but something more like "not too complicated processes in low-entropy lawful universes".  We learn whole new domains by observation, in the same way that a beaver might learn to chew a different kind of wood.  If I could write out your prior, I could describe more exactly the universes in which you operate.

Is evolution intelligent?  It operates across domains - not quite as well as humans do, but with the same general ability to do consequentialist optimization on causal sequences that wend through widely different domains.  It built the bee.  It built the beaver.

Whatever begins with genes, and impacts inclusive genetic fitness, through any chain of cause and effect in any domain, is subject to evolutionary optimization.  That much is true.

But evolution only achieves this by running millions of actual experiments in which the causal chains are actually played out.  This is incredibly inefficient.  Cynthia Kenyon said, "One grad student can do things in an hour that evolution could not do in a billion years."  This is not because the grad student does quadrillions of detailed thought experiments in their imagination, but because the grad student abstracts over the search space.

By human standards, evolution is unbelievably stupid.  It is the degenerate case of design with intelligence equal to zero, as befitting the accidentally occurring optimization process that got the whole thing started in the first place.

(As for saying that "evolution built humans, therefore it is efficient", this is, firstly, a sophomoric objection; second, it confuses levels.  Deep Blue's programmers were not superhuman chessplayers.  The importance of distinguishing levels can be seen from the point that humans are efficiently optimizing human goals, which are not the same as evolution's goal of inclusive genetic fitness.  Evolution, in producing humans, may have entirely doomed DNA.)

I once heard a senior mainstream AI type suggest that we might try to quantify the intelligence of an AI system in terms of its RAM, processing power, and sensory input bandwidth.  This at once reminded me of a quote from Dijkstra:  "If we wish to count lines of code, we should not regard them as 'lines produced' but as 'lines spent': the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger."  If you want to measure the intelligence of a system, I would suggest measuring its optimization power as before, but then dividing by the resources used.  Or you might measure the degree of prior cognitive optimization required to achieve the same result using equal or fewer resources.  Intelligence, in other words, is efficient optimization.
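A minimal sketch of that division, assuming optimization power is measured as in the previous post - the negative log base 2 of the fraction of possible outcomes at least as preferred as the one achieved - and treating "resources used" as a single abstract cost number (the function names and units here are illustrative, not canonical):

```python
import math

def optimization_power_bits(outcomes_at_least_as_good, total_outcomes):
    """Optimization power as in the previous post: the negative log2 of the
    fraction of possible outcomes at least as preferred as the one achieved."""
    return -math.log2(outcomes_at_least_as_good / total_outcomes)

def efficiency(outcomes_at_least_as_good, total_outcomes, resource_cost):
    """'Intelligence as efficient optimization': optimization power divided by
    an abstract, hand-chosen measure of the resources spent achieving it."""
    return optimization_power_bits(outcomes_at_least_as_good, total_outcomes) / resource_cost

# Two hypothetical optimizers hitting equally good outcomes (the single best
# outcome out of 2**40), but spending very different amounts of 'resources'
# (units here are arbitrary): the frugal one counts as more intelligent.
print(efficiency(1, 2**40, resource_cost=1_000_000.0))   # 4e-05 bits per unit resource
print(efficiency(1, 2**40, resource_cost=100.0))         # 0.4 bits per unit resource
```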

So if we say "efficient cross-domain optimization" - is that necessary and sufficient to convey the wisest meaning of "intelligence", after making a proper effort to factor out anthropomorphism in ranking solutions?

I do hereby propose:  "Yes."

Years ago when I was on a panel with Jaron Lanier, he had offered some elaborate argument that no machine could be intelligent, because it was just a machine and to call it "intelligent" was therefore bad poetry, or something along those lines.  Fed up, I finally snapped:  "Do you mean to say that if I write a computer program and that computer program rewrites itself and rewrites itself and builds its own nanotechnology and zips off to Alpha Centauri and builds its own Dyson Sphere, that computer program is not intelligent?"

This, I think, is a core meaning of "intelligence" that it is wise to keep in mind.

I mean, maybe not that exact test.  And it wouldn't be wise to bow too directly to human notions of "impressiveness", because this is what causes people to conclude that a butterfly must have been intelligently designed (they don't see the vast incredibly wasteful trail of trial and error), or that an expected paperclip maximizer is stupid.

But still, intelligences ought to be able to do cool stuff, in a reasonable amount of time using reasonable resources, even if we throw things at them that they haven't seen before, or change the rules of the game (domain) a little.  It is my contention that this is what's captured by the notion of "efficient cross-domain optimization".

Occasionally I hear someone say something along the lines of, "No matter how smart you are, a tiger can still eat you."  Sure, if you get stripped naked and thrown into a pit with no chance to prepare and no prior training, you may be in trouble.  And by the same token, a human can be killed by a large rock dropping on their head.  It doesn't mean a big rock is more powerful than a human.

A large asteroid, falling on Earth, would make an impressive bang.  But if we spot the asteroid, we can try to deflect it through any number of methods.  With enough lead time, a can of black paint will do as well as a nuclear weapon.  And the asteroid itself won't oppose us on our own level - won't try to think of a counterplan.  It won't send out interceptors to block the nuclear weapon.  It won't try to paint the opposite side of itself with more black paint, to keep its current trajectory.  And if we stop that asteroid, the asteroid belt won't send another planet-killer in its place.

We might have to do some work to steer the future out of the unpleasant region it will go to if we do nothing, but the asteroid itself isn't steering the future in any meaningful sense.  It's as simple as water flowing downhill, and if we nudge the asteroid off the path, it won't nudge itself back.

The tiger isn't quite like this.  If you try to run, it will follow you.  If you dodge, it will follow you.  If you try to hide, it will spot you.  If you climb a tree, it will wait beneath.

But if you come back with an armored tank - or maybe just a hunk of poisoned meat - the tiger is out of luck.  You threw something at it that wasn't in the domain it was designed to learn about.  The tiger can't do cross-domain optimization, so all you need to do is give it a little cross-domain nudge and it will spin off its course like a painted asteroid.

Steering the future, not energy or mass, not food or bullets, is the raw currency of conflict and cooperation among agents.  Kasparov competed against Deep Blue to steer the chessboard into a region where he won - knights and bishops were only his pawns.  And if Kasparov had been allowed to use any means to win against Deep Blue, rather than being artificially restricted, it would have been a trivial matter to kick the computer off the table - a rather light optimization pressure by comparison with Deep Blue's examining hundreds of millions of moves per second, or by comparison with Kasparov's pattern-recognition of the board; but it would have crossed domains into a causal chain that Deep Blue couldn't model and couldn't optimize and couldn't resist.  One bit of optimization pressure is enough to flip a switch that a narrower opponent can't switch back.

A superior general can win with fewer troops, and superior technology can win with a handful of troops.  But even a suitcase nuke requires at least a few kilograms of matter.  If two intelligences of the same level compete with different resources, the battle will usually go to the wealthier.

The same is true, on a deeper level, of efficient designs using different amounts of computing power.  Human beings, five hundred years after the Scientific Revolution, are only just starting to match their wits against the billion-year heritage of biology.  We're vastly faster; it has a vastly longer lead time.  After five hundred years and a billion years respectively, the two powers are starting to balance.

But as a measure of intelligence, I think it is better to speak of how well you can use your resources - if we want to talk about raw impact, then we can speak of optimization power directly.

So again I claim that this - computationally-frugal cross-domain future-steering - is the necessary and sufficient meaning that the wise should attach to the word, "intelligence".

Comments (38)

Interesting idea... though I still think you're wrong to step away from anthropomorphism, and 'necessary and sufficient' is a phrase that should probably be corralled into the domain of formal logic.

And I'm not sure this adds anything to Sternberg and Salter's definition: 'goal-directed adaptive behavior'.

I'm not an AGI researcher or developer (yet), but I think that the notion of a process steering the future into a constrained region is brilliant. It immediately feels much closer to implementation than any other definitions I've read before. Please continue posting on this topic. What I'm especially looking forward to is anything on compression / abstraction over the search space.

Eliezer

One of your best ever posts IMHO and right on the nail. Of course this might be because I already agree with this definition, but I left AI years ago, long before blogging, and never wrote something like this up, nor would I have said it so eloquently if I had.

Thom Blake: 'goal-directed adaptive behaviour' is quite different; specifically, it does not capture the notion of efficiency - of doing more with less.

Yep, definitely one of Eliezer's best posts.

Eliezer, out of curiosity, what was Lanier's response? Did he bite the bullet and say that wouldn't be an intelligence?

If you want to measure the intelligence of a system, I would suggest measuring its optimization power as before, but then dividing by the resources used. Or you might measure the degree of prior cognitive optimization required to achieve the same result using equal or fewer resources.

This really won't do. I think what you want is to think in terms of a production function, which describes a system's output on a particular task as a function of its various inputs and features. Then we can talk about partial derivatives: rates at which output increases as a function of changes in inputs or features. The hard thing here is how to abstract well, how to collapse diverse tasks into similar task aggregates, and how to collapse diverse inputs and features into related input and feature aggregates. In particular, there is the challenge of how to identify which of those inputs or features count as its "intelligence."
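A minimal sketch of the production-function framing described here, assuming a toy Cobb-Douglas form with made-up inputs and exponents (the functional form and numbers are illustrative assumptions, not anything Robin specified):

```python
# Toy production function: output = A * (compute ** a) * (data ** b),
# standing in for "a system's output on a task as a function of its inputs".
def output(compute, data, A=1.0, a=0.6, b=0.4):
    return A * (compute ** a) * (data ** b)

def marginal_product(f, compute, data, which, h=1e-6):
    """Numerical partial derivative: how fast output rises as one input changes."""
    if which == "compute":
        return (f(compute + h, data) - f(compute - h, data)) / (2 * h)
    return (f(compute, data + h) - f(compute, data - h)) / (2 * h)

# Marginal products of each input at a given operating point.
print(marginal_product(output, 100.0, 50.0, "compute"))
print(marginal_product(output, 100.0, 50.0, "data"))
```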

IQ is a feature of human brains, and many have studied how the output of humans varies with IQ and task, both when other inputs are in some standard "reasonable" range, and when other inputs vary substantially. Even this is pretty hard; it is not at all clear to me how to broaden these abstractions even further, to talk about the "intelligence" of arbitrary systems on very wide ranges of tasks with wide ranges of other inputs and features.

I love this post, but there are several things in it that I have to take issue with.

You give the example of the asteroid and the tiger as things with different levels of intelligence, making the point that in both cases human beings are able to very rapidly respond with things that are outside the domain under which these entities are able to optimize. In the case of the asteroid, it can't optimize at all. In the case of the tiger, the human is more intelligent and so is better able to optimize over a wider domain.

However, the name of this blog is "overcoming bias," and one of its themes is the areas under which human intelligence breaks down. Just like you can toss poisoned meat to a tiger, there are things you can do to humans with similar effect. You can, for example, play on our strong confirmation bias. You can play tricks on our mind that exploit our poor intuition for statistics. Or, as in the case of Kasparov vs. Deep Blue, you can push us into a domain where we are slower than our opponent. Poisoned meat might do it for a tiger, but are economic bubbles and the "greater fool" pyramid that builds within them any different than this for humans? Aren't opaque mortgage-backed securities like the poisoned meat you toss to the tiger?

Secondly, I often object to the tendency of CS/AI folks to claim that evolution is incredibly slow. Simple iterative mutation and selection is slow and blind, but most of what goes on in evolution is not simple iterative mutation and selection. Evolution has evolved many strategies for evolution-- this is called the evolution of evolvability in the literature. These represent strategies for more efficiently finding local maxima in the fitness landscape under which these evolutionary processes operate. Examples include transposons, sexual reproduction, stress-mediated regulation of mutation rates, the interaction of epigenetics with the Baldwin effect, and many other evolvability strategies. Evolution doesn't just learn... it learns how to learn as well. Is this intelligence? It's obviously not human-like intelligence, but I think it qualifies as a form of very alien intelligence.

Finally, the paragraph about humans coming to parity with nature after 500 years of modern science is silly. Parity implies that there is some sort of conflict or contest going on. We are a part of the natural system. When we build roads and power plants, make vaccines, convert forests to farmland, etc. we are not fighting nature. We are part of nature, so these things are simply nature modifying itself just as it always has. Nuclear power plants are as "natural" as beehives and beaver dams. The "natural" vs. "artificial" dichotomy is actually a hidden form of anthropomorphism. It assumes that we are somehow metaphysically special.

g is a feature of human brains; IQ is a rough ranking of human brains with respect to g. I have not yet read of any means of actually measuring g; has anyone here got any references?

Eliezer, I have been trying to reread your series of ethics and morality posts in order, but am having trouble following the links backwards; I keep "finding" posts I missed. Any chance you could go and link them in the order you think they should be read?

Billswift, re rereading the series, check out Andrew Hay's list and associated graphs.

http://www.google.com/search?hl=en&q=tigers+climb+trees

On a more serious note, you may be interested in Marcus Hutter's 2007 paper "The Loss Rank Principle for Model Selection". It's about modeling, not about action selection, but there's a loss function involved, so there's a pragmatist viewpoint here, too.

Adam_Ierymenko: Evolution has evolved many strategies for evolution-- this is called the evolution of evolvability in the literature. These represent strategies for more efficiently finding local maxima in the fitness landscape under which these evolutionary processes operate. Examples include transposons, sexual reproduction,

Yes, Eliezer_Yudkowsky has discussed this before and calls that optimization at the meta-level. Here is a representative post where he makes those distinctions.

Looking over the history of optimization on Earth up until now, the first step is to conceptually separate the meta level from the object level - separate the structure of optimization from that which is optimized. If you consider biology in the absence of hominids, then on the object level we have things like dinosaurs and butterflies and cats. On the meta level we have things like natural selection of asexual populations, and sexual recombination.

A quote from that post: "So animal brains - up until recently - were not major players in the planetary game of optimization; they were pieces but not players."

Again, no mention of sexual selection. Brains are players in the optimisation process. Animal brains get to perform selection directly. Females give males' tyres a good kicking before choosing them - a bit like unit testing. Sexual selection is not even the only mechanism - natural selection can do this too.

Adam, you will find above that I contrasted human design to biology, not to nature.

Trying to toss a human a poisoned credit-default swap is more like trying to outrun a tiger or punch it in the nose - it's not an Outside Context Problem where the human simply doesn't understand what you're doing; rather, you're opposing the human on the same level and it can fight back using its own abilities, if it thinks of doing so.

Robin: This really won't do. I think what you want is to think in terms of a production function, which describes a system's output on a particular task as a function of its various inputs and features. Then we can talk about partial derivatives: rates at which output increases as a function of changes in inputs or features. The hard thing here is how to abstract well, how to collapse diverse tasks into similar task aggregates, and how to collapse diverse inputs and features into related input and feature aggregates. In particular, there is the challenge of how to identify which of those inputs or features count as its "intelligence."

How do you measure output? As a raw quantity of material? As a narrow region of outcomes of equal or higher preference in an outcome space? Economists generally deal in quantities that are relatively fungible and liquid, but what about when the "output" is a hypothesis, a design for a new pharmaceutical, or an economic rescue plan? You can say "it's worth what people will pay for it" but this just palms off the valuation problem on hedge-funds or other financial actors, which need their own way of measuring the value somehow.

There's also a corresponding problem for complex inputs. As economists, you can to a large extent sit back and let the financial actors figure out how to value things, and you just measure the dollars. But the AI one tries to design is more in the position of actually being a hedge fund - the AI itself has to value resources and value outputs.

Economists tend to measure intermediate tasks that are taken for granted, but one of the key abilities of intelligence is to Jump Out Of The System and trace a different causal pathway to terminal values, eliminating intermediate tasks along the way. How do you measure fulfillment of terminal values if, for example, an AI or economy decides to eliminate money and replace it with something else? We haven't always had money. And if there's no assumption of money, how do you value inputs and outputs?

You run into problems with measuring the improbability of an outcome too, of course; I'm just saying that breaking up the system into subunits with an input-output diagram (which is what I think you're proposing?) is also subject to questions, especially since one of the key activities of creative intelligence is breaking obsolete production diagrams.

With all this talk about poisoned meat and CDSes, I was inspired to draw this comic.

It's interesting that Eliezer ties intelligence so closely to action ("steering the future"). I generally think of intelligence as being inside the mind, with behaviors & outcomes serving as excellent cues to an individual's intelligence (or unintelligence), but not as part of the definition of intelligence. Would Deep Blue no longer be intelligent at chess if it didn't have a human there to move the pieces on the board, or if it didn't signal the next move in a way that was readily intelligible to humans? Is the AI-in-a-box not intelligent until it escapes the box?

Does an intelligent system have to have its own preferences? Or is it enough if it can find the means to the goals (with high optimization power, across domains), wherever the goals come from? Suppose that a machine was set up so that a "user" could spend a bit of time with it, and the machine would figure out enough about the user's goals, and about the rest of the world, to inform the user about a course of action that would be near-optimal according to the user's goals. I'd say it's an intelligent machine, but it's not steering the future toward any particular target in outcome space. You could call it intelligence as problem-solving.

First paragraph

There is only action, or interaction to be precise. It doesn't matter whether we experience the intelligence or not, of course, just that it can be experienced.

Second paragraph

Sure, it could still be intelligent. It's just more intelligent if it's less dependent. The definition includes this since more cross-domain ⇒ less dependence.

Eliezer's comment describes the importance of Jumping Out Of The System, which I attribute to the "cross-domain" aspect of intelligence, but I don't see this defined anywhere in the formula given for intelligence, which so far only covers "efficient" and "optimizer".

First, a quick-and-dirty description of the process: Find an optimization process in domain A (whether or not it helps attain goals). Determine one or many mapping functions between domains A and B. Use a mapping to apply the optimization process to achieve a goal in domain B.

I think the heart of crossing domains is in the middle step - the construction of a mapping between domains. Plenty of these mappings will be incomplete, mere projections that lose countless dimensions, but they still occasionally allow for useful portings of optimization processes. This is the same skill as abstraction or generalization: turning data into simplified patterns, turning apples and oranges into numbers all the same. The measure of this power could then be the maximum distance from domain A to domain B that the agent can draw mappings across. Or maybe the maximum possible complexity of a mapping function (or is that the same thing)? Or the number of possible mappings between A and B? Or speed; it just would not do to run through every possible combination of projections between two domains. So here, then, is itself a domain that can be optimized in. Is the measure of being cross-domain just a measure of how efficiently one can optimize in the domain of "mapping between domains"?
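A minimal sketch of that three-step process, under toy assumptions (the domains, the hand-built mapping, and the hill-climbing optimizer are all illustrative choices; the hard part pointed at above - constructing the mapping itself - is done by hand here):

```python
import random

def hill_climb(score, start, steps=3000, step_size=5.0):
    """A generic optimizer that only knows domain A: lists of real numbers."""
    best = list(start)
    best_score = score(best)
    for _ in range(steps):
        candidate = [x + random.gauss(0, step_size) for x in best]
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best

# Domain B: colors as dicts, with a goal stated purely in domain-B terms.
TARGET = {"r": 200, "g": 30, "b": 120}

def domain_b_score(color):
    return -sum((color[k] - TARGET[k]) ** 2 for k in TARGET)

# Step 2: a mapping from domain A (real vectors) into domain B (colors).
def a_to_b(vector):
    return {key: min(255.0, max(0.0, value))
            for key, value in zip(("r", "g", "b"), vector)}

# Step 3: port the domain-A optimizer into domain B through the mapping.
solution = hill_climb(lambda v: domain_b_score(a_to_b(v)), start=[128.0, 128.0, 128.0])
print({k: round(v) for k, v in a_to_b(solution).items()})  # approaches the TARGET color
```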

Would Deep Blue no longer be intelligent at chess if it didn't have a human there to move the pieces on the board, or if it didn't signal the next move in a way that was readily intelligible to humans?

With no actuators at all, how would you distinguish the intelligence of Deep Blue from that of a heavy inert metal box?

Eliezer, even if you measure output as you propose in terms of a state space reduction factor, my main point was that simply "dividing by the resources used" makes little sense. Yes, a production function formulation may abstract from some relevant details, but it is far closer to reality than dividing by "resources." Yes, a market economy may help one to group and measure relevant inputs, and without that aid you'll have even more trouble grouping and measuring inputs.

With no actuators at all, how would you distinguish the intelligence of Deep Blue from that of a heavy inert metal box?

One tree near my home is an excellent chess player. If only it had some way to communicate...

I could teach any deciduous tree to play grandmaster chess if only I could communicate with it. (Well, not the really dumb ones.)

I have not yet read of any means of actually measuring g; has anyone here got any references?

There's no way to "actually measure g", because g has no operational definition beyond statistical analyses of IQ.

There have been some attempts to link calculated g with neural transmission speeds and how easily brains can cope with given problems, but there's been little success.

Re resources: if you look at how IQ tests work, they usually attempt to factor out resources as best they can - no mechanical aids allowed, fixed time, etc. The only "wealth" you are permitted is what they can't strip off - education, genes, etc.

I imagine computer tests could be a bit like that - e.g. no net connection allowed, but you don't get penalised for having a faster CPU. For example, if you look at computer go contests, people get to supply their own hardware - and the idea is to win. You do not get many bonus points for using an Apple II.

Jeff Hawkins, in his book On Intelligence, says something similar to Eliezer. He says intelligence IS prediction. But Eliezer says intelligence is steering the future, not just predicting it. Steering is a behavior of agency, and if you cannot peer into the source code but only see the behaviors of an agent, then intelligence would necessarily be a measure of steering the future according to preference functions. This is behaviorism, is it not? I thought behaviorism had been deprecated as a useful field of inquiry in the cognitive sciences?

I can see where Eliezer is going with all this. The most moral/ethical/friendly AGI cannot take orders from any human, let alone be modeled on human agency to a large degree itself, and we also definitely do not want this agency to be a result of the same horrendous process of natural selection red in tooth and claw that created us.

That cancels out an anthropomorphic AI, cancels out evolution through natural selection, and it cancels out an unchecked oracle/genie type wish-granting intelligent system (though I personally feel that a controlled (friendly?) version of the oracle AI is the best option because I am skeptical with regard to Eliezer or anyone else coming up with a formal theory of friendliness imparted on an autonomous agent). ((Can an oracle type AI create a friendly AI agent? Is that a better path towards friendliness?))

Adam's comment above is misplaced because I think Eliezer's recursively self-improving friendly intelligence optimization is a type of evolution, just not as blind as the natural selection that has played out through natural history on our Earth.

Nice post.

And no, I don't think optimisation processes capture best what we generally mean by intelligence - but they capture best what we should mean by intelligence.

In practice it may have some weaknesses - depending on what exactly the valid domains are, there may be a more informative concept of intelligence in our universe - but it's good enough to do work with, while most definitions of intelligence aren't.

Would it do good to use something like sentience quotient, the quantity of bits per second per kg of matter a system can process, to assess the efficiency of a system?

Of two systems having the same preferences, and the same sentience quotient, but whose optimization power isn't the same, one must then have a more efficient, smarter way of optimizing than the other?

As for cross-domain optimization, I don't see offhand how to mathematically characterize different domains - and it is possible to define arbitrary domains anyway, I think -

but if you have a nonrandom universe, or environment, and are adapted to it, then if, following your preferences, you want to use all the information available locally in your environment, in your past light cone, you can only predict the course of your actions faster than the universe implements them through physical law acting upon matter if you can non-destructively compress the information describing that environment. I guess this works in any universe that has not reached maximal entropy, and the less entropy in that universe, the faster your speed for predicting future events will be compared to the speed of the universe implementing future events.

If you can't do that, then you have to use destructive compression to simplify your information about the environment into something you can manageably use to compute the future state of the universe following your actions, faster than the universe itself would implement them. There's a tradeoff between speed, simplicity, and precision - error rate, in this case.
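A rough sketch of that tradeoff, using a toy "universe" (a diffusion-like update rule on an array; everything here is an illustrative stand-in): the destructively compressed, coarse-grained model predicts the coarse future far faster than simulating the full state, at the cost of some error.

```python
import time
import numpy as np

def step(state):
    """One tick of a toy 'universe': local averaging (a diffusion-like law)."""
    return 0.5 * state + 0.25 * np.roll(state, 1) + 0.25 * np.roll(state, -1)

def run(state, ticks):
    for _ in range(ticks):
        state = step(state)
    return state

rng = np.random.default_rng(0)
world = rng.random(100_000)                      # full-resolution environment
coarse = world.reshape(-1, 100).mean(axis=1)     # destructive compression: 1,000 cells

t0 = time.perf_counter(); exact = run(world, 200);  t1 = time.perf_counter()
t2 = time.perf_counter(); rough = run(coarse, 200); t3 = time.perf_counter()

# Compare the coarse prediction against the coarse-grained truth: faster, but with error.
error = np.abs(rough - exact.reshape(-1, 100).mean(axis=1)).mean()
print(f"full model: {t1 - t0:.3f}s   coarse model: {t3 - t2:.3f}s   mean error: {error:.4f}")
```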

Just my immediate thoughts.

Jeff Hawkins observes that brains are constantly predicting the future.

That's quite consistent with the idea of brains acting as expected utility maximisers.

The brain predicts the future in order to detect if its model of the world needs updating when new sensory data arrives. If the data matches the model - no problem. If the data and the model conflict then the model needs updating.

In the expected utility framework, the brain has to predict the future anyway in order to judge the expected consequences of its actions. All it does is keep the results around for long enough to see if things go as it expected.

I often object to the tendency of CS/AI folks to claim that evolution is incredibly slow.

It is only nucleic evolution which is relatively slow. Cultural evolution has enormously magnified the rate of evolutionary change on the planet. In a century, skyscrapers have appeared out of nowhere, machines have probed the other planets, and the Earth has started to glow at night. Today, we can see evolutionary change taking place in real time - within an individual's lifespan.

Kasparov competed against Deep Blue to steer the chessboard into a region where he won - knights and bishops were only his pawns

Were you trying to mix the literal and metaphorical here? Because I think that just his pawns were his pawns :)

I once heard a senior mainstream AI type suggest that we might try to quantify the intelligence of an AI system in terms of its RAM, processing power, and sensory input bandwidth.

Of course - this is correct. An AI system, like intelligence in general, is an algorithm and is thus governed by computational complexity theory and the physics of computation.

This at once reminded me of a quote from Dijkstra: "If we wish to count lines of code, we should not regard them as 'lines produced' but as 'lines spent': the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger."

Code length is one fundamental quantitative measure (as in Kolmogorov complexity), important in information theory, but it is not directly related to, nor to be confused with, primary physical computational quantities such as space and time. Any pattern can be compressed - trading off time for space (expending energy to conserve mass).
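A small illustration of that time-for-space tradeoff (zlib is an arbitrary choice of compressor): the compressed representation of a regular pattern is far shorter, but recovering the pattern costs extra computation.

```python
import time
import zlib

pattern = b"abcdefgh" * 1_000_000          # a highly regular 8 MB pattern
compressed = zlib.compress(pattern, level=9)

t0 = time.perf_counter()
restored = zlib.decompress(compressed)     # time spent to buy back the space
t1 = time.perf_counter()

print(len(pattern), len(compressed))       # the compressed form is far smaller
print(restored == pattern, f"{t1 - t0:.4f}s spent to decompress")
```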

If you want to measure the intelligence of a system, I would suggest measuring its optimization power as before, but then dividing by the resources used. Or you might measure the degree of prior cognitive optimization required to achieve the same result using equal or fewer resources. Intelligence, in other words, is efficient optimization.

The computational efficiency of one's intelligence algorithm is important, but efficiency is not the same as power, whether one is talking about heat engines or computational systems. Efficiency in computation is a measure of how much computation you get out for how much matter and energy you put in.

Intelligence, in typical English usage, does not connote an efficiency measure - it connotes a power measure. If you have a super-computer AI that can think at a 3rd grade level, and you have a tiny cell phone AI that uses 1,000 times fewer resources but thinks at a 2nd grade level, we'd still refer to the super-computer AI as being more intelligent, regardless of how efficient it is.

Intelligence is a computational power measure, but it is not a single scalar - it has temporal and spatial components (speed, depth, breadth, etc).

So if we say "efficient cross-domain optimization" - is that necessary to convey the wisest meaning of "intelligence", after making a proper effort to factor out anthropomorphism in ranking solutions?

I think "powerful generalized optimization" is closer to what you want, but one may also want to distinguish between static and dynamic intelligence (hard-coded vs adaptive). I'd also say that intelligence is a form of optimization, but optimization is a broader term. There are many computational optimization processes, most of which one would be hard pressed to call 'intelligent'.

[anonymous]

I once read an interesting book by Kurt Vonnegut called Player Piano that focused on an increasingly automated society (where one was not allowed to have a 'creative' job, for example, unless the punch card that assessed your general intelligence by way of standardized test happened to have the 'creative' slot punched; and where out of work (displaced by robots) auto mechanics pined over the good ole days when they could ply their trade for a living rather than seeing their vocation-which-they-also-had-creative-passion-for become the work of robots.)

In particular, a character (a discontented engineer who goes Luddite and becomes a farmer) has this to say in a climactic letter in the story:

“You perhaps disagree with the antique and vain notion of Man’s being a creation of God. ... But I find it a far more defensible belief than the one implicit in intemperate faith in lawless technological progress—namely, that man is on earth to create more durable and efficient images of himself, and, hence, to eliminate any justification at all for his own continued existence.”

I found this quite interesting and simply wonder what LWers think about this premise. I disagree with the idea that religious faith in God is somehow more defensible than persistent technological progress for the sake of making myself more durable and efficient (and presumably we can extend the quote to include 'rational'). However, I don't necessarily disagree outright that "intemperate faith in lawless technological progress—namely, that man is on earth to create more durable and efficient images of himself, and, hence, to eliminate any justification at all for his own continued existence" -- is not strongly defensible. That phrase at the end ... "eliminate any justification for his own continued existence" is quite resonant.

Might we not be able to define intelligence in this way? Intelligence is a property such that once I have it, then I am personally less on the hook for expending my own resources to sustain my own existence. In a slight sense, intelligence renders its owner less and less necessary for the actual work that goes into self preservation.

The sense in which I think this applies is as follows: suppose my utility function heavily values my perception of sustained existence. But at the same time, at least some component of my utility is derived from perceiving a "sense of meaning" out of life (a concrete definition of that could be debated endlessly, so please lets just go with a basic natural language understanding of that for right now). If I feel that increase in intelligence improves probability of survival but diminishes personal usefulness and "meaning", then it is at least possible that there is some finite largest amount of intelligence (quantified in terms of efficient optimization power if you so choose) such that beyond that intelligence horizon, my perceived quality of life actually decreases because my perceived personal meaning drops low enough to offset the marginal increased assurance of longer(/more comfortable) life.

I think this is a non-trivial theory about intelligence. This "intelligence horizon" may not be something that humans could even begin to encounter, as most of us seem perfectly capable of extracting "meaning" out of life despite technological progress. But this isn't the same as knowing a fundamental reason why all intelligences would always value a unit improvement in longevity over a unit improvement in "personal meaning".

For example, suppose that many billions of years into the future there have taken place several great battles between Bayesian superintelligences forced to hostility over scarcity of resource so severe that even their combined superintelligent efforts at collaboration could not solve the problems. One triumphant superintelligence remains and is confident that the probability it is the last remaining life within its light cone is near 1. Suppose it goes on to self colonize territory and resources until it is confident that the probability that there is additional resource to consume is near zero (here I mean that it believes with a high degree of confidence that it has located all available resource that it can physically access).

Now what does it do? What is the psychology of such a being? I don't even pretend to know a good answer to this sort of question, but I am sure other LWers will have good things to say. But you can clearly see that as this last Bayesian intelligence completes tasks, it directly loses purpose. Unless it does not weigh a sense of purpose into its utility function, this would be problematic. Maybe it would start to intentionally solve problems very slowly so that it required a very long time to finish. Maybe its slowness would increase as time goes on to ensure that it never actually does finish the task?

It'd be cool if, based on knowing something about your light cone, the problems needing to be solved during your lifetime, and the imbued method by which your utility function assigns weight to a sense of purpose, you could compute an optimum life span... I mean, suppose the universe were the interval [1,N] for some very large N, and that my utility function, under the scenario where I set myself to just optimize indefinitely and make myself more durable and efficient, starts looking like 1/x^2 after a long time, say from integer M to N where M is also very large. In order that we can even have an expected utility (which I think is a reasonable assumption), any utility function I choose ought to be integrable on [1,N]. So no matter what the integral works out to be, I could find utility functions that actually hit zero at a time before N (corresponding to death before N) but for which the total experienced utility is higher (assuming non-negative utility, or hitting -max{utility} for bounded but possibly negative utility; the debate is over if death -> -\infty utility, but that doesn't seem plausible given the occurrence of suicide).

[Note: this is not an attempt to pin down every rigorous detail about utility functions... just to illustrate the naive, simplistic view that even simple utility scenarios can lead to counter-intuitive decisions. The opportunity to have these sorts of situations would only increase as utility functions become more realistic and are based upon more realistic models of the universe. Imagine if you had access to your own utility function and a Taylor series approximation of the evolving quantum states that will "affect you". If you can compute any kind of meaningful coherent extrapolated volition, why couldn't you predict your reaction to the predicament of being in a situation like that of Sisyphus, and whether or not dying earlier would be better if the overall effect was that your total experienced utility increased? Can there possibly be experiences of finite duration that are so awesome in terms of utility that they outweigh futures in which you are dead? What if you were a chess player and your chess utility function was such that, if you could promote 6 pawns to queens, your enjoyment of that bizarre novelty occurring during gameplay was worth far more than winning the game - so that even if you saw that the opponent had a forced mate, you'd willingly walk right into it in order to get the sweet, sweet novelty of the 6 queens?]
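A toy numeric version of the integral argument above (the particular utility function and numbers are made up purely for illustration): if experienced utility eventually goes negative, total experienced utility is maximized at a finite lifespan.

```python
# Toy lifetime utility: high early on, decaying, and eventually negative
# (standing in for "purpose runs out while durability keeps you going").
def utility(t):
    return 10.0 / (1.0 + 0.1 * t) - 1.5

def total_utility(lifespan, dt=0.01):
    """Crude numerical integral of experienced utility from t=0 to t=lifespan."""
    steps = int(lifespan / dt)
    return sum(utility(i * dt) for i in range(steps)) * dt

# Total experienced utility peaks at a finite lifespan, roughly where
# utility(t) crosses zero (here around t = 56.7), not at the longest life.
for lifespan in (20, 56.7, 100, 200):
    print(lifespan, round(total_utility(lifespan), 2))
```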

A simple example would be that, when facing the choice between unimaginable torture from which you willingly accept that the probability of escape is near zero or suicide, there could be physically meaningful utility functions (perhaps even 'optimal' utility functions) that would rationally choose suicide. A Bayesian superintelligence might view a solitary light cone existence with no tasks to derive purpose from as equivalent to a form of this sort of torture and thus see that if it just "acts dumber but dies sooner but lives the remaining years with more enjoyable purpose" it will actually have more total utility. Maybe it will shoot itself in its superintelligent foot and erase all memory of the shooting and replace it with some different explanation for why there is a bullet hole. Then the future stupider AI will be in a predicament where it realizes it needs to make itself smarter (up to the former level of intelligence that it erased from its own memory) but is somehow handicapped in a way that it's just barely out of reach (but chasing the ever-out-of-reach carrot gives it meaning). Would it rather be Sisyphus or David Foster Wallace or the third option that I am most likely failing to see?

Addition

I thought of one idea for what the Bayesian intelligence might do after all goals had been achieved for its own self-preservation, as near as it could reckon. The idea is that it might run a simulation resulting in other lifeforms. Would it be fair to say that the probability of it choosing to do this skyrockets to close to 1 as the total number of other living beings goes toward zero? If so, then as per the link above, does this make us more compelled to believe we're currently in the simulation of a last Bayesian intelligence in a significantly mature metaverse (don't tell the Christians)? My gut says probably not, but I can't think of concrete compelling reasons for why this can be easily dismissed. I'll feel better if anyone can Swiss-cheesify my line of thinking.

This should be a post, on a personal blog if not on LW.

Intelligence, in other words, is efficient optimization.

IMO, there's already plenty of space to include any efficiency criteria in the function being optimised.

So: there's no need to say "efficient" - but what you do, conventionally, need to say is something about the ability to solve a range of different types of problem. Include that and then consistently inefficient solutions get penalized automatically on problems where efficiency is specified in the utility function.

On the Concerning AI podcast, we addressed the question "What is Intelligence?" and settled on your definition as the one that would work for us. Thank you!

Is this definition circular? Let me make explicit why that might be:

You define intelligent agents (or degree of intelligence) as "those physical systems that, using few resources, can optimize future states across many contexts (that is, when put in many different physical contexts, in which they need to face problems in different domains)" (consider also this more formal definition, which ignores the notion of resources).

But how are "resources" defined? There is a very natural notion of resources: energy, or negentropy. But I'm not sure this captures everything we care about here. There are many situations in which we care about the shape in which this energy is presented, since that is important for making use of it (imagine finding a diamond in the desert, when you are dying of thirst). There are some ways in which energy can be stored (for example, in the nucleus of atoms?) which are very hard to make use of (without loads of other energy in other forms already available to take advantage of it). So we might naturally say something like "energy is that which, across many contexts, can be used to optimize for future states (across many preference orderings)". But what do we mean here by "can be used"? Probably something of the form "certain kinds of physical systems can take advantage of them, and transform them into optimization of future states". And I don't see how to define those physical systems other than as "intelligent agents".

A possible objection to this is that ultimately, when dealing with arbitrarily high "intelligence" and/or "variety of sources of energy available", we will have no problem with some energy sources being locked away from us: we have so many methods and machinery already available that, for any given source, we can construct the necessary mechanisms to extract it. And so we don't need to resort to the definition of "resource" above, and can just use the natural and objective "energy" or "negentropy". But even if that's the case, I think for any finitely intelligent agent, with access to finitely many energy sources, there will be some energy source it's locked away from. And even if that weren't the case for all such finite quantities, I do think that, in the intelligence regimes we care about, many energy sources remain locked away.

You also seem to gesture at resources being equivalent to computation used (by the agent). Maybe we could understand this to mean something about the agent, and not its relationship to its surroundings (like the surrounding negentropy), such as "the agent is instantiated using at most such memory". But I'm not sure we can well-define this in any way other than "it fits in this small physical box". And then we get degenerations of the form "the most intelligent systems are black holes", or other expanding low-level physical processes.