My thanks to everyone who has provided feedback on these drafts so far. It's been helpful, and I've been incorporating your suggestions into the document. Now, I invite your feedback on these two snippets from the forthcoming Friendly AI FAQ. For references, see here.
_____
1.10. What is general intelligence?
There are many competing definitions and theories of intelligence (Davidson & Kemp 2011; Niu & Brass 2011; Legg & Hutter 2007), and the term has seen its share of emotionally-laden controversy (Halpern et al. 2011; Daley & Onwuegbuzie 2011).
Legg (2008) collects dozens of definitions of intelligence, and finds that they loosely converge on the following idea:

“Intelligence measures an agent's ability to achieve a wide range of goals in a wide range of environments.”

That will be our ‘working definition’ for intelligence in this FAQ.
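For the mathematically inclined: Legg and Hutter have also turned this informal idea into a formal “universal intelligence” measure. The following is a sketch of that measure (drawn from their published work, not from this FAQ's text):

```latex
% Legg & Hutter's universal intelligence measure (sketch).
%   \Upsilon(\pi) : intelligence of agent \pi
%   E            : the set of computable reward-bearing environments
%   K(\mu)       : Kolmogorov complexity of environment \mu
%   V_\mu^\pi    : expected total reward agent \pi earns in environment \mu
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

In words: an agent's intelligence is its expected performance summed across all computable environments, with simpler environments weighted more heavily — a formalization of “a wide range of goals in a wide range of environments.”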
There is a sense in which famous computers like Deep Blue and Watson are “intelligent.” They can out-perform human competitors at a narrow range of goals (winning chess games or answering Jeopardy! questions) in a narrow range of environments. But drop them in a novel environment — a shallow pond or a New York taxicab — and they are dumb and helpless. In this sense their “intelligence” is not general.
Human intelligence is general in that it allows us to achieve goals in a wide range of environments. We can solve new problems of survival, competition, and fun in a wide range of environments, including ones never before encountered. That is, after all, how humans came to dominate all the land and air on Earth, and what empowers us to explore more extreme environments — like the deep sea or outer space — when we choose to. Humans have invented languages, developed agriculture, domesticated other animals, created crafts and arts and architecture, written philosophy, explored the planet, discovered math and science, evolved new political and economic systems, built machines, developed medicine, and made plans for the distant future.
Some other animals also have a slower but more general intelligence than Deep Blue and Watson. Apes, dolphins, elephants, and a few species of bird have demonstrated some ability to solve novel problems in novel environments (Zentall 2011).
General intelligence in a machine is called artificial general intelligence (AGI). Nobody has developed AGI yet, though many approaches are being attempted. Goertzel & Pennachin (2007) provides an overview of approaches to AGI.
1.11. What is greater-than-human intelligence?
Humans gained dominance over Earth not because we had superior strength, speed, or durability, but because we had superior intelligence. It is our intelligence that makes us powerful. It is our intelligence that allows us to adapt to new environments. It is our intelligence that allows us to subdue animals or invent machines that surpass us in strength, speed, durability and other qualities.
Humans do not operate anywhere near the upper physical limits of general intelligence. If anything, we are close to the dumbest possible creature capable of developing a technological civilization. Our intelligence runs on a mess of evolved mammalian modules built of meat. Our neurons communicate far more slowly than electronic circuits do. And our thinking is hobbled by comprehensive and deep-seated cognitive biases (Gilovich et al. 2002).
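To make the neuron-versus-circuit comparison concrete, here is a back-of-the-envelope calculation. The figures (neuron firing rates, signal speeds, clock rates) are rough, order-of-magnitude textbook values I am supplying for illustration, not numbers from this FAQ:

```python
# Back-of-the-envelope comparison of biological vs. electronic signaling.
# All figures are rough, order-of-magnitude textbook values.

NEURON_MAX_FIRING_HZ = 200   # cortical neurons: at most a few hundred spikes/s
CPU_CLOCK_HZ = 3e9           # a commodity 3 GHz processor

AXON_SIGNAL_M_PER_S = 100    # fast myelinated axons: ~100 m/s
WIRE_SIGNAL_M_PER_S = 2e8    # electrical signals in wire: ~2/3 the speed of light

rate_ratio = CPU_CLOCK_HZ / NEURON_MAX_FIRING_HZ
speed_ratio = WIRE_SIGNAL_M_PER_S / AXON_SIGNAL_M_PER_S

print(f"Clock-rate advantage:   ~{rate_ratio:.0e}x")   # ~2e+07x
print(f"Signal-speed advantage: ~{speed_ratio:.0e}x")  # ~2e+06x
```

Even with generous numbers on the biological side, electronic hardware operates millions of times faster than neurons on both measures.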
It is easy to create machines that surpass our cognitive abilities in narrow domains (chess, etc.), and easy to imagine the creation of machines that eventually surpass our cognitive abilities in a general way. A greater-than-human machine intelligence would exhibit over us the kind of superiority we exhibit over our ancestors in the genus Homo, or chimpanzees, or dogs, or even snails.
Some have argued that a machine cannot reach human-level general intelligence; see, for example, Lucas (1961); Dreyfus (1972); Penrose (1994); Searle (1980); Block (1981). But Chalmers (2010) points out that these arguments are irrelevant to the question at hand: they concern whether a machine could be conscious or could “truly think,” not whether it could match human performance across a wide range of goals and environments.
Chalmers (2010) summarizes two arguments suggesting that machines can reach human-level general intelligence: the evolutionary argument (a blind process, natural selection, produced human intelligence, so a faster, directed process should be able to do the same) and the emulation argument (the brain is a physical system, so its operation could in principle be emulated on a computer).
He also advances an argument for the conclusion that upon reaching human-level general intelligence, machines can be improved to reach greater-than-human intelligence: the extensibility argument (see section 7.5).
We can also get a sense of how human cognition might be surpassed by examining its limits. Human working memory holds only a handful of items at a time, long-term recall is slow and unreliable, and neurons fire at most a few hundred times per second. With greater scale, a computer could far surpass human capacities for short-term memory, long-term memory, processing speed, and much more.
Thus, it seems that greater-than-human intelligence is possible for a long list of reasons.