Intelligence measures an agent’s ability to achieve goals in a wide range of environments...
Human intelligence is general in that it allows us to achieve goals in a wide range of environments. We can solve new problems of survival, competition, and fun in a wide range of environments
Too many uses of "wide range of environments."
Humans have invented languages...explored the planet...evolved new political and economic systems...
The origin of language is contentious, and the range of opinions includes some that have it occurring almost as naturally as a frog's hop. Better to leave it out.
Exploration is even less impressive. Rats are arguably more curious and have explored more places.
Political and economic systems, particularly non-failed ones, weren't planned and aren't even very well understood.
approaches are being attempted
"Tried" is much more common as a verb that steps away from the path metaphor. Other common verbs here, like "taken," fit it. "Attempted" just seems jarring to me.
Some other animals also have a slower but more general intelligence than Deep Blue and Watson.
Speed is only one very important difference between narrow AIs and weak NGIs, quality of the best solution found is another.
"Some other animals also have more general intelligence than Deep Blue and Watson, though their solutions to problems are much further from optimal and they reach them more slowly than specialist narrow AIs."
Instead, humans are nearly the dumbest possible creature capable of developing a technological civilization.
"We aren't able to integrate animals somewhat less intelligent than us, such as our chimpanzee relatives, into technological civilization. Considering the enormous room there is for improvement on human intelligence, an interesting perspective is to think of ourselves as among the dumbest possible creatures capable of developing a technological civilization."
Or take that line out.
But our intelligence is still running on a mess of evolved mammalian modules built of meat.
"Our intelligence is still running on a mess of evolved mammalian modules built of meat, not evolved simply to maximize intelligence but to use few resources and solve problems found in the early evolutionary environment. Most (?) of the brain modules we use for general intelligence didn't originally evolve to specialize at it, and are instead optimized for other tasks."
But Chalmers (2010) points out that their arguments are irrelevant:
A bit strong. "Some contend that...But Chalmers (2010) argues that their objections are irrelevant:" That's logically weaker, but maybe more manipulative in a dark arts sense, if it's not legitimate to frame the thesis "AGI is possible" as one to be assumed unless a compelling objection is made.
communicate much slower
"more slowly"
and thereby know everything about its own operation and how to improve itself.
The problem here is that it isn't grammatically clear that "everything" does not also apply to "how to improve itself."
Limited sensory data
Add bats.
Mention somewhere: http://en.wikipedia.org/wiki/Moravec%27s_paradox
My thanks to everyone who has provided feedback on these drafts so far. It's been helpful, and I've been incorporating your suggestions into the document. Now, I invite your feedback on these two snippets from the forthcoming Friendly AI FAQ. For references, see here.
_____
1.10. What is general intelligence?
There are many competing definitions and theories of intelligence (Davidson & Kemp 2011; Niu & Brass 2011; Legg & Hutter 2007), and the term has seen its share of emotionally laden controversy (Halpern et al. 2011; Daley & Onwuegbuzie 2011).
Legg (2008) collects dozens of definitions of intelligence, and finds that they loosely converge on the following idea:
That will be our ‘working definition’ for intelligence in this FAQ.
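Legg & Hutter (2007) also give this working definition a formal rendering, which is worth noting. Their "universal intelligence" measure scores an agent by its expected performance across all computable reward-bearing environments, weighting simpler environments more heavily:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here E is the set of computable environments, K(μ) is the Kolmogorov complexity of environment μ (so simpler environments count for more), and V π μ is the expected total reward agent π accumulates in μ. The "wide range of environments" in the working definition thus becomes a simplicity-weighted sum over every computable environment.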
There is a sense in which famous computers like Deep Blue and Watson are “intelligent.” They can outperform human competitors at a narrow range of goals (winning chess games or answering Jeopardy! questions), in a narrow range of environments. But drop them in a novel environment — a shallow pond or a New York taxicab — and they are dumb and helpless. In this sense their “intelligence” is not general. This is one face of Moravec's paradox: tasks that feel effortless to humans, like perception and sensorimotor control, have proven very hard for machines, while some tasks hard for humans, like championship chess, are comparatively easy for them.
Human intelligence is general in that it allows us to achieve goals in a wide range of environments. We can solve new problems of survival, competition, and fun, including ones never before encountered. That is, after all, how humans came to dominate all the land and air on Earth, and what empowers us to explore more extreme environments — like the deep sea or outer space — when we choose to. Humans have developed agriculture, domesticated other animals, created crafts and arts and architecture, written philosophy, discovered math and science, built machines, developed medicine, and made plans for the distant future.
Some other animals also have more general intelligence than Deep Blue and Watson, though their solutions to problems are much further from optimal and they reach them more slowly than specialist narrow AIs. Apes, dolphins, elephants, and a few species of bird have demonstrated some ability to solve novel problems in novel environments (Zentall 2011).
General intelligence in a machine is called artificial general intelligence (AGI). Nobody has developed AGI yet, though many approaches have been tried. Goertzel & Pennachin (2007) provide an overview of approaches to AGI.
1.11. What is greater-than-human intelligence?
Humans gained dominance over Earth not because we had superior strength, speed, or durability, but because we had superior intelligence. It is our intelligence that makes us powerful. It is our intelligence that allows us to adapt to new environments. It is our intelligence that allows us to subdue animals or invent machines that surpass us in strength, speed, durability and other qualities.
Humans do not operate anywhere near the upper physical limit of general intelligence. Considering the enormous room for improvement on human intelligence, we are arguably among the dumbest possible creatures capable of developing a technological civilization. Our intelligence still runs on a mess of evolved mammalian modules built of meat: modules that evolved not simply to maximize intelligence but to use few resources and to solve problems found in our early evolutionary environment. Our neurons communicate much more slowly than electric circuits. Our thinking is hobbled by comprehensive and deep-seated cognitive biases (Gilovich et al. 2002).
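The speed gap between neurons and circuits can be made concrete with a rough back-of-the-envelope calculation. The figures below (neuron firing rates, axon conduction speeds, CPU clock rates) are order-of-magnitude assumptions for illustration, not precise measurements:

```python
# Rough comparison of biological vs. electronic signaling speeds.
# All figures are order-of-magnitude assumptions, not precise data.

NEURON_MAX_FIRING_HZ = 200     # assumed peak firing rate of a neuron (~200 spikes/s)
CPU_CLOCK_HZ = 3e9             # assumed clock rate of a commodity CPU (~3 GHz)

AXON_SIGNAL_M_PER_S = 100      # assumed conduction speed of a fast myelinated axon
WIRE_SIGNAL_M_PER_S = 2e8      # assumed signal speed in copper (~2/3 the speed of light)

def speed_advantage(machine: float, biology: float) -> float:
    """How many times faster the machine figure is than the biological one."""
    return machine / biology

switching_ratio = speed_advantage(CPU_CLOCK_HZ, NEURON_MAX_FIRING_HZ)
propagation_ratio = speed_advantage(WIRE_SIGNAL_M_PER_S, AXON_SIGNAL_M_PER_S)

print(f"switching events:   ~{switching_ratio:.1e}x faster")
print(f"signal propagation: ~{propagation_ratio:.1e}x faster")
```

Under these assumptions a digital substrate switches roughly ten million times more often per second than a neuron fires, and carries signals roughly a million times faster — even before any algorithmic improvements are considered.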
It is easy to create machines that surpass our cognitive abilities in narrow domains (chess, etc.), and easy to imagine the creation of machines that eventually surpass our cognitive abilities in a general way. A greater-than-human machine intelligence would exhibit over us the kind of superiority we exhibit over our ancestors in the genus Homo, or chimpanzees, or dogs, or even snails.
Some have argued that a machine cannot reach human-level general intelligence; see, for example, Lucas (1961), Dreyfus (1972), Penrose (1994), Searle (1980), and Block (1981). But Chalmers (2010) argues that their objections are irrelevant:
Chalmers (2010) summarizes two arguments suggesting that machines can reach human-level general intelligence:
He also advances the extensibility argument: upon reaching human-level general intelligence, machines can be improved further to reach greater-than-human intelligence (see section 7.5).
We can also get a sense of how human cognition might be surpassed by examining its limits: limited sensory data (we lack, for example, the echolocation sense of bats), limited short-term and long-term memory, and limited processing speed. With greater scale, a computer could far surpass human capacities in all of these respects, and more.
Thus, it seems that greater-than-human intelligence is possible for a long list of reasons.