Here's another interesting set of quotes; are we even correct in assuming the most recent percent of DNA matters much? After all, chimps outperform humans in some areas, such as the "monkey ladder" working-memory task. From Stephen Budiansky's If a Lion Could Talk:
"Giving a blind person a written IQ test is obviously not a very mean meaningful evaluation of his mental abilities. Yet that is exactly what many cross-species intelligence tests have done. Monkeys, for example, were found not only to learn visual discrimination tasks but to improve over a series of such tasks -- they formed a learning set, a general concept of the problem that betokened a higher cognitive process than a simple association. Rats given the same tasks showed difficulty in mastering the problems and no ability to form a learning set. The obvious conclusion was that monkeys are smarter than rats, a conclusion that was comfortably accepted, as it fit well with our preexisting prejudices about the distribution of general intelligence in nature. But when the rat experiments were repeated, only this time the rats were given the task of discriminating different smells, they learned quickly and showed rapid improvement on subsequent problems, just as the monkeys did.
The problem of motivation is another major confounding variable. Sometimes we may think we are testing an animal's brain when we are only testing its stomach. For example, in a series of studies goldfish never learned to improve their performance when challenged with "reversal" tasks. These are experiments in which an animal is trained to pick one of two alternative stimuli (a black panel versus a white panel, say) in order to obtain a food reward; the correct answer is then switched and the subject has to relearn which one to pick. Rats quickly learned to switch their response when the previously rewarded answer no longer worked. Fish didn't. This certainly fit comfortably with everyone's sense that fish are dumber than rats. But when the experiment was repeated with a different food reward (a paste squirted into the tank right where the fish made its correct choice, as opposed to pellets dropped into the back of the tank), lo and behold the goldfish suddenly did start improving on reversal tasks. Other seemingly fundamental learning differences between fish and rodents likewise vanished when the experiments were redesigned to take into account differences in motivation.
Equalizing motivation is an almost insoluble problem for designers of experiments. Are three goldfish pellets the equivalent of one banana or fifteen bird seeds? How could we even know? We would somehow have to enter into the internal being of different animals to know for sure, and if we could do that we would not need to be devising roundabout experiments to probe their mental processes in the first place.
When we do control for all of the confounding variables that we possibly can, the striking thing about the "pure" cognitive differences that remain is how the similarities in performance between different animals given similar problems vastly outweigh the differences. To be sure, there seems to be little doubt that chimpanzees can learn new associations with a single reinforced trial, and that that is genuinely faster than other mammals or pigeons do it. Monkeys and apes also learn lists faster than pigeons do. Apes and monkeys seem to have a faster and more accurate grasp of numerosity judgments than birds do. The ability to manipulate spatial information appears to be greater in apes than in monkeys.
But again and again experiments have shown that many abilities thought the sole province of "higher" primates can be taught, with patience, to pigeons or other animals. Supposedly superior rhesus monkeys did better than the less advanced cebus monkeys in a visual learning-set problem using colored objects. Then it turned out that the cebus monkeys did better than the rhesus monkeys when gray objects were used. Rats were believed to have superior abilities to pigeons in remembering locations in a radial maze. But after relatively small changes in the procedure and the apparatus, pigeons did just as well.
If such experiments had shown, say, that monkeys can learn lists of forty-five items but pigeons can only learn two, we would probably be convinced that there are some absolute differences in mental machinery between the two species. But the absolute differences are far narrower. Pigeons appear to differ from baboons and people in the way they go about solving problems that involve matching up two images that have been rotated one from the other, but they still get the right answers. They essentially do just as well as monkeys in categorizing slides of birds or fish or other things. Euan Macphail's review of the literature led him to conclude that when it comes to the things that can be honestly called general intelligence, no convincing differences, either qualitative or quantitative, have yet been demonstrated between vertebrate species. While few cognitive researchers would go quite so far -- and indeed we will encounter a number of examples of differences in mental abilities between species that are hard to explain as anything but a fundamental difference in cognitive function -- it is striking how small those differences are, far smaller than "common sense" generally has it. Macphail has suggested that the "no-difference" stance should be taken as a "null hypothesis" in all studies of comparative intelligence; that is, it is an alternative that always has to be considered and ought to be assumed to be the case unless proven otherwise."
EDIT: I've added this and some other points to my Evolutionary drug heuristics article.
Information is power. But how much power? This question is vital when considering the speed and the limits of post-singularity development. To address it, consider two other domains in which information accumulates and is translated into an ability to solve problems: evolution, and science.
DNA Evolution
Genes code for proteins. Proteins are composed of modules called "domains"; a protein contains from 1 to dozens of domains. We classify genes into gene "families", which can be loosely defined as sets of genes that on average share >25% of their amino acid sequence and have a good alignment for >75% of their length. The number of genes and gene families known doubles every 28 months; but most "new" genes code for proteins that recombine previously-known domains in different orders.
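As a toy illustration of that working definition, here is a minimal sketch in Python; the aligned fragments, the assumption that an alignment is already in hand, and the helper name same_family are all inventions for the example (real classification pipelines use alignment tools such as BLAST):

```python
# Toy sketch of the gene-family criterion described above: two genes go in
# one family if their proteins share >25% amino-acid identity over a good
# alignment covering >75% of their length. Assumes the alignment has
# already been computed, with gaps marked '-'.

def same_family(aligned_a: str, aligned_b: str) -> bool:
    assert len(aligned_a) == len(aligned_b), "sequences must be pre-aligned"
    columns = len(aligned_a)
    pairs = list(zip(aligned_a, aligned_b))
    aligned = sum(1 for x, y in pairs if x != '-' and y != '-')   # gap-free columns
    identical = sum(1 for x, y in pairs if x == y and x != '-')   # matching residues
    coverage = aligned / columns
    identity = identical / aligned if aligned else 0.0
    return identity > 0.25 and coverage > 0.75

# Hypothetical aligned fragments, for illustration only:
print(same_family("MKTAYIAKQR-QISFVKSHFSRQ",
                  "MKTAYLAKE-RQLSFVKNHFSRQ"))   # True
```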
Almost all of the information content of a genome resides in the amino-acid sequence of its domains; the rest mostly indicates what order to use domains in individual genes, and how genes regulate other genes. About 64% of domains (and 84% of those found in eukaryotes) evolved before eukaryotes split from prokaryotes about 2 billion years ago. (Michael Levitt, "Nature of the protein universe", PNAS, July 7, 2009; S. Yooseph et al., "The Sorcerer II Global Ocean Sampling Expedition", PLoS Biology 5:e16.) (Prokaryotes are single-celled organisms lacking a nucleus, mitochondria, or gene introns. All multicellular organisms are eukaryotes.)
It's therefore accurate to say that most of the information generated by evolution was produced in the first one or two billion years; the development of more-complex organisms seems to have nearly stopped the evolution of new protein domains. (Multi-cellular organisms are much larger and live much longer; therefore there are many orders of magnitude fewer opportunities for selection in a given time period.) Similarly, most evolution within eukaryotes seems to have occurred during a period of about 50 million years leading up to the Cambrian explosion, half a billion years ago.
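To put rough numbers on "many orders of magnitude", here is a back-of-the-envelope sketch; the population sizes and generation times are order-of-magnitude guesses chosen for the example, not measured values:

```python
import math

# Selection opportunities per year ~ population size / generation time in
# years. All four inputs below are order-of-magnitude assumptions.
bacteria = 1e30 / (1 / (365 * 24))   # ~1e30 cells, ~1-hour generations
mammals  = 1e10 / 5.0                # ~1e10 individuals, ~5-year generations

print(f"prokaryote selection events/yr ~ 1e{math.log10(bacteria):.0f}")
print(f"mammal     selection events/yr ~ 1e{math.log10(mammals):.0f}")
print(f"ratio ~ 1e{math.log10(bacteria / mammals):.0f}  ('many orders of magnitude')")
```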
My first observation is that evolution has been slowing down in information-theoretic terms, while speeding up in terms of the intelligence produced. This means that adding information to the gene pool increases the effective intelligence that can be produced using that information by a more-than-linear amount.
In the first of several irresponsible assumptions I'm going to make, let's assume that the information evolved in time t is proportional to i = log(t), while the intelligence evolved is proportional to e^t = e^e^i. I haven't done the math to support those particular functions; but I'm confident that they fit the data better than linear functions would. (This assumption is key, and the data should be studied more closely before taking my analysis too seriously.)
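A minimal sketch of the two assumed curves; the time values are arbitrary units, and the functions are exactly the irresponsible assumptions just stated:

```python
import math

# Assumed shapes from the text: information i = log(t); intelligence = e^t.
# Since t = e^i, intelligence = e^(e^i): double-exponential in information.
for t in [1, 10, 100, 1000]:
    i = math.log(t)
    log10_intel = t / math.log(10)  # log10(e^t), reported to keep numbers printable
    print(f"t={t:5d}  information i={i:5.2f}  log10(intelligence)={log10_intel:7.1f}")
```

Information creeps up by a constant increment each time t grows tenfold, while intelligence gains hundreds of orders of magnitude: deceleration in information, acceleration in capability.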
My second observation is that evolution occurs in spurts. There's a lot of data to support this, including data from simulated evolution; see in particular the theory of punctuated equilibrium, and the data from various simulations of evolution in Artificial Life and Artificial Life II. But I want to single out the eukaryote-to-Cambrian-explosion spurt. The evolution of the first eukaryotic cell suddenly made a large subset of organism-space more accessible; and the speed of evolution, which normally decreases over time, instead increased for tens of millions of years.
Science!
The following discussion relies largely on de Solla Price's Little Science, Big Science (1963), Nicholas Rescher's Scientific Progress: A Philosophical Essay on the Economics of Research in Natural Science (1978), and the data I presented in my 2004 TransVision talk, "The myth of accelerating change".
The growth of "raw" scientific knowledge is exponential by most measures: Number of scientists, number of degrees granted, number of journals, number of journal articles, number of dollars spent. Most of these measures have a doubling time of 10-15 years. (GDP has a doubling time closer to 20 years, suggesting that the ultimate limits on knowledge may be economic.)
The growth of "important" scientific knowledge, measured by journal citations, discoveries considered worth mentioning in histories of science, and perceived social change, is much slower; if it is exponential, it appears IMHO to have had a doubling time of 50-100 years between 1600 and 1940. (It can be argued that this growth began slowing down at the onset of World War II, and more dramatically around 1970). Nicholas Rescher argues that important knowledge = log(raw information).
A simple argument supporting this is that "important" knowledge is the number of distinctions you can make in the world; and the number of distinctions you can draw based on a set of examples is proportional to the log of the size of your data set, assuming that the different distinctions are independent and equiprobable, and your data set is random -- each additional distinction doubles the number of joint categories your examples must populate. However, an opposing argument is that log(i) is simply the amount of non-redundant information present in a database with uncompressed information i. (This appears to be approximately the case for genetic sequences. IMHO it is unlikely that scientific knowledge is that redundant; but that's just a guess.) Therefore, important knowledge is somewhere between O(log(information)) and O(information), depending on whether information is closer to O(raw information) or O(log(raw information)).
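Here is a small simulation of the first argument, under exactly its assumptions (independent, equiprobable binary distinctions; random data); the function name and trial count are arbitrary choices for the sketch:

```python
import math
import random

def supported_distinctions(s: int, trials: int = 50) -> float:
    """Average of the largest d such that s random examples populate
    all 2^d joint categories of d independent fair binary distinctions."""
    total = 0
    for _ in range(trials):
        d = 0
        # Keep adding distinctions while every joint category is still
        # witnessed by at least one example; this requires 2^(d+1) <= s.
        while len({random.getrandbits(d + 1) for _ in range(s)}) == 2 ** (d + 1):
            d += 1
        total += d
    return total / trials

for s in [8, 64, 512, 4096]:
    print(f"s={s:5d}  supported distinctions ~ {supported_distinctions(s):4.1f}"
          f"  (log2(s) = {math.log2(s):.0f})")
```

The supported count tracks log2(s) (a little below it, since covering every category takes more than one example per category), which is the sub-linear growth the argument needs.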
Analysis
We see two completely opposite pictures: In evolution, the efficacy of information increases more-than-exponentially with the amount of information. In science, it increases somewhere between logarithmically and linearly.
My final irresponsible assumption will be that the production of ideas, concepts, theories, and inventions ("important knowledge") from raw information is analogous to the production of intelligence from gene-pool information. Therefore, evolution's efficacy at using the information present in the gene pool can give us a lower bound on the amount of useful knowledge that could be extracted from our raw scientific knowledge.
I argued above that the amount of intelligence produced from a given gene-information-pool i is approximately e^e^i, while the amount of useful knowledge we extract from raw information i is somewhere between O(i) and O(log(i)). The implication is that the fraction of discoveries that we have made, out of those that could be made from the information we already have, has an upper bound between O(1/e^e^i) and O(1/e^e^e^i).
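To see how lopsided that ratio already is at toy scales, here is the chain of assumptions evaluated at i = 20, chosen only so the numbers stay printable; any realistic i would be astronomically larger:

```python
import math

i = 20.0
# Discoveries possible from information i, per the evolution analogy: e^(e^i).
# log10(e^(e^i)) = e^i / ln(10), so we report the exponent instead.
log10_possible = math.exp(i) / math.log(10)
# Discoveries actually made, per the science estimate: between log(i) and i.
made_low, made_high = math.log(i), i

print(f"possible ~ 10^{log10_possible:.3g}")     # a ~200-million-digit number
print(f"made     ~ between {made_low:.1f} and {made_high:.1f}")
print(f"fraction made <~ 10^-{log10_possible:.2g}")
```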
One key question in asking what the shape of AI takeoff will be is therefore: Will AI's efficiency at drawing inferences from information be closer to that of humans, or that of evolution?
If the latter, then the number of important discoveries that an AI could make, using only the information we already have, may be between e^e^i and e^e^e^i times the number of important discoveries that we have made from it. i is a large number representing the total information available to humanity. e^e^i is a goddamn large number. e^e^e^i is an awful goddamn large number. Where before, we predicted FOOM, we would then predict FOOM^FOOM^FOOM^FOOM.
Furthermore, the development of the first AI will be, I think, analogous to the evolution of the first eukaryote, in terms of suddenly making available a large space of possible organisms. I therefore expect the pace of information generation to suddenly switch from falling to increasing, just as evolution's did then, even before taking into account recursive self-improvement. This means that the rate of information increase will be much greater than can be extrapolated from present trends. Supposing that the rate of acquisition of important knowledge changes from log(i) = t (where raw information i = e^t) to e^t gives us FOOM^FOOM^FOOM^FOOM^FOOM, or 4FOOM.
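A sketch of that regime change; the switchover time t0 and the units are arbitrary, and only the two shapes (t before, e^t after) come from the argument above:

```python
import math

t0 = 10.0  # hypothetical moment the first AI appears, arbitrary units
for t in [5.0, 9.0, 10.0, 11.0, 13.0, 15.0]:
    # Important knowledge ~ t before the switch, exponential after,
    # spliced so the curve is continuous at t0.
    k = t if t <= t0 else t0 + math.exp(t - t0) - 1.0
    print(f"t={t:4.1f}  important knowledge ~ {k:8.1f}")
```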
This doesn't necessarily mean a hard takeoff. "Hard takeoff" means, IMHO, FOOM in less than 6 months. Reaching the e^e^e^i level of efficiency would require vast computational resources, even given the right algorithms; an analysis might find that the universe doesn't have enough computronium to even represent, let alone reason over, that space. (In fact, this brings up the interesting possibility that the ultimate limits of knowledge will be storage capacity: Our AI descendants will eventually reach the point where they need to delete knowledge from their collective memory in order to have the space to learn something new.)
However, I think this does mean FOOM. It's just a question of when.
ADDED: Most commenters are losing sight of the overall argument. This is the argument: