Doesn't information have to be about something? Bits are not inherently powerful... proteins are about structure, but they do not inherently win you any evolutionary races. I'd contend that much more of the information about how to survive lies in which proteins are in the genome and when they are transcribed.
You seem to be mixing up the bits needed to replicate the genome with the bits of information gained about the outside world and how to survive in it.
Edit: To give you an example of the difference. Consider a standard computer program that does something useful in ...
The idea that the pace of discovery has slowed down is an extremely common and really obvious fallacy.
We only know that a discovery was important after it gets widely implemented, which happens decades after invention. Yet we count it as happening not at implementation time, but at invention time. So recent discoveries that will be implemented in the future are not counted at all, artificially lowering our counts of important discoveries.
Also if you use silly measures like railroad tracks per person, or max land mph, you will obviously not see much progress, as large pa...
Re: ""Hard takeoff" means, IMHO, FOOM in less than 6 months."
Nobody ever specifies when the clock is going to start ticking. We already have speech recognition, search oracles, stockmarket wizards, and industrial automation on a massive scale. Machine intelligence has been under construction for at least 60 years - and machines have been taking people's jobs for over 100 years.
If your clock isn't ticking by now, then what exactly are you waiting for?
Here's another interesting set of quotes; are we even correct in assuming the most recent percent of DNA matters much? After all, chimps outperform humans in some areas like monkey ladder. From "If a Lion Could Talk":
..."Giving a blind person a written IQ test is obviously not a very mean meaningful evaluation of his mental abilities. Yet that is exactly what many cross-species intelligence tests have done. Monkeys, for example, were found not only to learn visual discrimination tasks but to improve over a series of such tasks -- they formed
In the first of several irresponsible assumptions I'm going to make, let's assume that the information evolved in time t is proportional to i = log(t), while the intelligence evolved is proportional to e^t = e^e^i. I haven't done the math to support those particular functions; but I'm confident that they fit the data better than linear functions would.
This may be covered by the following assumption about 'spurts', but this doesn't seem to work for me.
If intelligence really could jump like that, shouldn't we expect to see that in humans already? For examp...
Have you read "The 10,000 Year Explosion"? Cochran & Harpending (and Hawks and some others in the paper its based on) argue that evolution has accelerated recently. The reason is that there is a larger population, so more new mutations to be selected. Also, because our environment is not a steady state our genes don't reach a steady state either (like horseshoe crabs or a number of other species). I've only read a bit past the first chapter, but it would seem relevant to your claim.
What's your justification for the claim that "almost all of the information content of an organism resides in the amino-acid sequence of its domains"?
For your claims about "speed of evolution" to make any sense, it must be the case that we could get rid of the information content which does not reside in these sequences with minimal losses in evolutionary fitness. My guess is that this is not the case, hence your measure of "information" is quite suspect.
Even the concepts involved in this "analysis" seem pretty meaningless, but when you start plugging them into "math" and "exponentials", it results in a meaninglessness singularity, that is, a point at which all sanity breaks down and woo ensues!
May be relevant, and seems to be consistent with your point: evolution has a speed limit and complexity bound.
I think this post presents a very interesting view of the information explosion. Even the task of self-improvement will undergo an evolution of sorts, and we have no better example to draw from than genetic evolution. We have observed an increasing efficiency in converting information into directed behavior (intelligence, as the article puts it), and it is yet to be seen what the limits of that efficiency may be.
Only one upvote? Really?
Long-lived organisms do reproduce and evolve more slowly - though note that evolution still acts on their germ-line cells during their lifetime.
However, to jump from there to "evolution has been slowing down in information-theoretic terms" seems like a bit of a wild leap. Bacteria haven't gone away - and they are evolving as fast as ever. How come their evolution is not being counted?
I can't say I care much for the Evolution/Science split in this post.
A more natural split would be between DNA evolution and cultural evolution.
Evolution is best seen as an umbrella term that covers any copying-with-variation-and-selection process - and science is one example of cultural evolution.
Re: "I therefore expect the pace of evolution to suddenly switch from falling, to increasing [...]"
Since the pace of evolution has clearly been increasing recently, this seems like a rather retroactive prediction.
Sadly, we here observe a retreat into the simple language of mathematics. I am not decrying mathematics, nor am I underestimating the great value of that language in extending knowledge of the physical world by bypassing the complexities and irrelevancies common to the natural languages.
It does, however, suffer from two major weaknesses:
Firstly, like all languages, it is capable of generating fictions - entities and scenarios which have no correspondence with the real world.
Secondly, it is, like all reasoning or computational processes, raw data sensitive....
Information is power. But how much power? This question is vital when considering the speed and the limits of post-singularity development. To address this question, consider 2 other domains in which information accumulates, and is translated into an ability to solve problems: Evolution, and science.
DNA Evolution
Genes code for proteins. Proteins are composed of modules called "domains"; a protein contains from 1 to dozens of domains. We classify genes into gene "families", which can be loosely defined as sets of genes that on average share >25% of their amino acid sequence and have a good alignment for >75% of their length. The number of genes and gene families known doubles every 28 months; but most "new" genes code for proteins that recombine previously-known domains in different orders.
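To make the family criterion concrete, here is a minimal sketch in Python (assuming a pairwise alignment is already in hand; the thresholds are the ones quoted above, while the function name and inputs are mine, purely for illustration):

    # Rough sketch of the gene-family criterion above: two genes belong to the
    # same family if their alignment shows >25% amino-acid identity and covers
    # >75% of their length.
    def same_family(identity_fraction, alignment_coverage):
        # Both arguments are floats in [0, 1], e.g. taken from a pairwise
        # alignment produced by any standard tool.
        return identity_fraction > 0.25 and alignment_coverage > 0.75

    print(same_family(0.30, 0.80))  # True: 30% identity over 80% of the length
    print(same_family(0.30, 0.50))  # False: alignment covers too little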
Almost all of the information content of a genome resides in the amino-acid sequence of its domains; the rest mostly indicates what order to use domains in individual genes, and how genes regulate other genes. About 64% of domains (and 84% of those found in eukaryotes) evolved before eukaryotes split from prokaryotes about 2 billion years ago. (Michael Levitt, PNAS July 7 2009, "Nature of the protein universe"; D. Yooseph et al. "The Sorcerer II global ocean sampling expedition", PLoS Bio 5:e16.) (Prokaryotes are single-celled organisms lacking a nucleus, mitochondria, or gene introns. All multicellular organisms are eukaryotes.)
It's therefore accurate to say that most of the information generated by evolution was produced in the first one or two billion years; the development of more-complex organisms seems to have nearly stopped evolution of protein domains. (Multi-cellular organisms are much larger and live much longer; therefore there are many orders of magnitude fewer opportunities for selection in a given time period.) Similarly, most evolution within eukaryotes seems to have occurred during a period of about 50 million years leading up to the Cambrian explosion, half a billion years ago.
My first observation is that evolution has been slowing down in information-theoretic terms, while speeding up in terms of the intelligence produced. This means that adding information to the gene pool increases the effective intelligence that can be produced using that information by a more-than-linear amount.
In the first of several irresponsible assumptions I'm going to make, let's assume that the information evolved in time t is proportional to i = log(t), while the intelligence evolved is proportional to e^t = e^e^i. I haven't done the math to support those particular functions; but I'm confident that they fit the data better than linear functions would. (This assumption is key, and the data should be studied more closely before taking my analysis too seriously.)
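To make the shape of that assumption visible, here is a small numerical sketch, just plugging the stated functions into Python; the time units are arbitrary and nothing here is fitted to real data:

    import math

    # Assumption from the post: after time t, information i = log(t),
    # while intelligence is proportional to e^t = e^e^i.
    for t in (2, 5, 10, 20):
        i = math.log(t)                       # information: grows very slowly
        intelligence = math.exp(math.exp(i))  # = e^t: grows explosively
        print(f"t={t:>2}  i={i:.2f}  intelligence ~ {intelligence:.3g}")

    # Going from t=2 to t=5 (information roughly doubles, 0.69 -> 1.61)
    # multiplies "intelligence" by about e^3 ~ 20; the next doubling of
    # information (t=5 -> t=25) multiplies it by about e^20 ~ 5e8.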
My second observation is that evolution occurs in spurts. There's a lot of data to support this, including data from simulated evolution; see in particular the theory of punctuated equilibrium, and the data from various simulations of evolution in Artificial Life and Artificial Life II. But I want to single out the eukaryote-to-Cambrian-explosion spurt. The evolution of the first eukaryotic cell suddenly made a large subset of organism-space more accessible; and the speed of evolution, which normally decreases over time, instead increased for tens of millions of years.
Science!
The following discussion relies largely on de Solla Price's Little Science, Big Science (1963), Nicholas Rescher's Scientific Progress: A Philosophical Essay on the Economics of Research in Natural Science (1978), and the data I presented in my 2004 TransVision talk, "The myth of accelerating change".
The growth of "raw" scientific knowledge is exponential by most measures: Number of scientists, number of degrees granted, number of journals, number of journal articles, number of dollars spent. Most of these measures have a doubling time of 10-15 years. (GDP has a doubling time closer to 20 years, suggesting that the ultimate limits on knowledge may be economic.)
The growth of "important" scientific knowledge, measured by journal citations, discoveries considered worth mentioning in histories of science, and perceived social change, is much slower; if it is exponential, it appears IMHO to have had a doubling time of 50-100 years between 1600 and 1940. (It can be argued that this growth began slowing down at the onset of World War II, and more dramatically around 1970). Nicholas Rescher argues that important knowledge = log(raw information).
A simple argument supporting this is that "important" knowledge is the number of distinctions you can make in the world; and the number of distinctions you can draw based on a set of examples is of course proportional to the log of the size of your data set, assuming that the different distinctions are independent and equiprobable, and your data set is random. However, an opposing argument is that log(i) is simply the amount of non-redundant information present in a database with uncompressed information i. (This appears to be approximately the case for genetic sequences. IMHO it is unlikely that scientific knowledge is that redundant; but that's just a guess.) Therefore, important knowledge is somewhere between O(log(information)) and O(information), depending on whether information is closer to O(raw information) or O(log(raw information)).
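A toy illustration of the "distinctions" argument, under the stated idealization that distinctions are independent, equiprobable binary cuts and the data set is random (the numbers are mine, for illustration only):

    import math

    # If each distinction is an independent, equiprobable binary cut, then k
    # distinctions partition the examples into 2**k classes; a random data set
    # of size n can therefore support only about log2(n) such distinctions.
    for n in (100, 10_000, 1_000_000):
        print(f"n = {n:>9}  ->  max independent distinctions ~ {math.log2(n):.1f}")

    # A 10,000-fold increase in raw data (100 -> 1,000,000 examples) buys only
    # about 13 additional distinctions on this account.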
Analysis
We see two completely-opposite pictures: In evolution, the efficaciousness of information increases more-than-exponentially with the amount of information. In science, it increases somewhere between logarithmically and linearly.
My final irresponsible assumption will be that the production of ideas, concepts, theories, and inventions ("important knowledge") from raw information is analogous to the production of intelligence from gene-pool information. Therefore, evolution's efficacy at using the information present in the gene pool can give us a lower bound on the amount of useful knowledge that could be extracted from our raw scientific knowledge.
I argued above that the amount of intelligence produced from a given gene-information-pool i is approximately e^e^i, while the amount of useful knowledge we extract from raw information i is somewhere between O(i) and O(log(i)). The implication is that the fraction of discoveries that we have made, out of those that could be made from the information we already have, has an upper bound between O(1/e^e^i) and O(1/e^e^e^i).
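To get a feel for how fast those bounds shrink, here is the same arithmetic for deliberately tiny values of i; this just restates the ratio above, not a new result (even at i=3 the inner exponent tower overflows ordinary floats, so the sketch reports log10 of each fraction):

    import math

    # Fraction of possible discoveries already made, per the bound above:
    # somewhere between ~1/e^e^i and ~1/e^e^e^i.
    for i in (1, 2, 3):
        log10_loose = -math.exp(i) / math.log(10)            # log10 of 1/e^e^i
        log10_tight = -math.exp(math.exp(i)) / math.log(10)  # log10 of 1/e^e^e^i
        print(f"i={i}:  1/e^e^i ~ 10^{log10_loose:.1f},  1/e^e^e^i ~ 10^{log10_tight:.3g}")

    # Already at i=3 the looser bound is ~10^-9 and the tighter one ~10^(-2.3e8);
    # the i in question (all information available to humanity) is vastly larger.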
One key question in asking what the shape of AI takeoff will be, is therefore: Will AI's efficiency at drawing inferences from information be closer to that of humans, or that of evolution?
If the latter, then the number of important discoveries that an AI could make, using only the information we already have, may be between e^e^i and e^e^e^i times the number of important discoveries that we have made from it. i is a large number representing the total information available to humanity. e^e^i is a goddamn large number. e^e^e^i is an awful goddamn large number. Where before, we predicted FOOM, we would then predict FOOM^FOOM^FOOM^FOOM.
Furthermore, the development of the first AI will be, I think, analogous to the evolution of the first eukaryote, in terms of suddenly making available a large space of possible organisms. I therefore expect the pace of information generation by evolution to suddenly switch from falling, to increasing, even before taking into account recursive self-improvement. This means that the rate of information increase will be much greater than can be extrapolated from present trends. Supposing that the rate of acquisition of important knowledge will change from log(i = e^t) to e^t gives us FOOM^FOOM^FOOM^FOOM^FOOM, or 4FOOM.
This doesn't necessarily mean a hard takeoff. "Hard takeoff" means, IMHO, FOOM in less than 6 months. Reaching the e^e^e^i level of efficiency would require vast computational resources, even given the right algorithms; an analysis might find that the universe doesn't have enough computronium to even represent, let alone reason over, that space. (In fact, this brings up the interesting possibility that the ultimate limits of knowledge will be storage capacity: Our AI descendants will eventually reach the point where they need to delete knowledge from their collective memory in order to have the space to learn something new.)
However, I think this does mean FOOM. It's just a question of when.
ADDED: Most commenters are losing sight of the overall argument. This is the argument: