One of the biases that is extremely prevalent in science, yet rarely talked about anywhere, is the bias towards models that are mathematically simple and easy to operate on. Nature doesn't care much for mathematical simplicity. In particular, I'd say that as a good first approximation, if you think something fits an exponential function, whether growth or decay, you're wrong. We got so used to exponential functions, and to how convenient they are to work with, that we completely forgot that nature doesn't work that way.
But what about nuclear decay, you might be asking now... That's as close to real exponential decay as you can get... and it's still nowhere close enough. Well, here's a log-log graph of the Chernobyl release versus a theoretical exponential function.
Well, that doesn't look all that exponential... The thing is that even if you have perfect exponential decay processes, as with the decay of a single nuclide, the exponential character is lost once you start mixing a heterogeneous group of such processes. Early on, the faster-decaying components dominate; then, gradually, the slower-decaying ones take over. Somewhere along the way you might have to deal with decay products (pure depleted uranium at first gets more radioactive with time, not less, as it decays into short half-life nuclides), and perhaps even with processes you didn't have to consider at all (like the creation of fresh radioactive nuclides via cosmic radiation).
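Here's a minimal sketch of that mixing effect, with made-up half-lives and activities (they are not Chernobyl data, just four components spanning iodine-like to caesium-like timescales). For a genuinely exponential process, the fraction of activity surviving any fixed interval is the same constant at every point in time; for a mixture it drifts:

```python
import numpy as np

# Hypothetical mixture of pure exponential decays. Half-lives (in days)
# and initial activities are invented for illustration only.
half_lives = np.array([8.0, 30.0, 370.0, 11000.0])
activities = np.array([100.0, 10.0, 1.0, 0.1])

def total_activity(t):
    # Each term decays exponentially; the sum does not.
    return float(np.sum(activities * 0.5 ** (t / half_lives)))

# For an exponential process, A(t + dt) / A(t) is constant in t.
# For the mixture it drifts as slower components take over.
dt = 30.0
for t in [0.0, 30.0, 300.0, 3000.0, 30000.0]:
    ratio = total_activity(t + dt) / total_activity(t)
    print(f"t = {t:>7.0f} days: 30-day survival ratio = {ratio:.4f}")
```

The ratio climbs from roughly 0.12 early on towards 1.0 later, which is exactly the kind of curve that refuses to be a straight line on a semi-log plot.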
And that's the ideal case of counting how much radiation a sample produces, where the underlying process is exponential by the basic laws of physics - it still gets us orders of magnitude wrong. When you're measuring something much more vague, with much more complicated underlying mechanisms, like changes in population, the economy, or processing power, expect the extrapolations to fail far worse.
According to the IMF, the world economy in 2008 was worth 69 trillion $ PPP. Assuming 2% annual growth and a naive growth model, the entire world economy produced 12 cents PPP worth of value during the entire first century. Assuming a fairly stable population, an average person in 3150 will produce more than the entire world does now. And with enough time, the dollar value of one hydrogen atom will be higher than the current dollar value of everything on Earth. And of course, with proper time discounting of utility, the life of one person now is worth more than half of humanity a millennium into the future - exponential growth and exponential decay are both equally wrong.
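These back-of-the-envelope numbers are easy to reproduce. A quick sketch, using only the assumptions in the text (the $69 trillion 2008 baseline, a constant 2% rate extrapolated in both directions, and a roughly stable population of about 7 billion; none of this is a real model):

```python
# Naive constant-growth extrapolation, as described above.
GDP_2008 = 69e12   # 2008 world GDP, $ PPP (IMF figure from the text)
GROWTH = 1.02      # assumed 2% annual growth, forwards and backwards
POPULATION = 7e9   # assumed roughly stable population

def gdp(year):
    return GDP_2008 * GROWTH ** (year - 2008)

# Total world output over years 1..100 under backward extrapolation.
first_century = sum(gdp(y) for y in range(1, 101))
print(f"World output, years 1-100: ${first_century:.2f}")   # ~ $0.12

# Per-capita output in 3150 vs. total world output in 2008:
# roughly the same order, crossing over shortly after 3150.
print(f"Per-capita GDP in 3150:    ${gdp(3150) / POPULATION:.3e}")
print(f"World GDP in 2008:         ${GDP_2008:.3e}")
```

The absurdity isn't in the arithmetic, which checks out, but in applying a constant exponential over two thousand years in either direction.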
To me these all look like clear artifacts of our growth models, but there are people who are so used to them that they take predictions like that seriously.
In case you're wondering, here are some estimates of past world GDP.
I have this vague idea that sometime in our past, people thought that knowledge was like an almanac: a repository of zillions of tiny true facts that summed up to being able to predict stuff about stuff, but without a general understanding of how things work. There was no general understanding because any heuristic that would begin to explain how things work would immediately be discounted by the single tiny fact, easily found, that contradicted it. Concern with details, minutiae, and complexity is actually anti-science for this reason. It's not that details and complexity aren't important, but you make no progress if you consider them from the beginning.
And then I wondered: is this knee-jerk reaction to dismiss any challenge to the keep-it-simple conventional wisdom the reason why we're not making more progress in complex fields like biology?
For classical physics it has been the case that the simpler the hypothetical model you verify, the more you cash out in terms of understanding physics. The simpler the hypothesis you test, the easier it is to establish whether the hypothesis is true, and the more you learn about physics if it is. However, what considering and verifying simpler and simpler hypotheses actually does is transfer the difficulty of understanding the real-world problem to the experimental set-up. To verify your super-simple hypothesis, you need to eliminate confounding factors from the experiment. Success in classical physics has occurred because, when experiments were done, confounding factors could either be eliminated through a well-designed set-up or were small enough to neglect. (Consider Galileo's purported experiment of dropping two objects from a height: in real life that particular experiment doesn't work, because air resistance can make the lighter object fall more slowly.)
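To make the Galileo point concrete, here's a minimal sketch with invented parameters (a 10 m drop, two spheres of equal size but different mass, rough textbook values for air density and drag). The confounder, quadratic air drag, is enough to visibly separate the fall times:

```python
import math

def fall_time(mass_kg, radius_m, height_m=10.0, dt=1e-4):
    """Time for a sphere to fall height_m under gravity plus quadratic
    air drag, via naive Euler integration. Constants are rough,
    illustrative values, not a careful physical model."""
    g, rho_air, c_d = 9.81, 1.2, 0.47        # gravity, air density, drag coeff.
    area = math.pi * radius_m ** 2
    v, dropped, t = 0.0, 0.0, 0.0
    while dropped < height_m:
        drag = 0.5 * rho_air * c_d * area * v * v
        v += (g - drag / mass_kg) * dt
        dropped += v * dt
        t += dt
    return t

# Same size, different mass: drag slows the light ball noticeably.
print(f"1 kg ball:  {fall_time(1.0, 0.05):.2f} s")   # close to the vacuum ~1.43 s
print(f"10 g ball:  {fall_time(0.01, 0.05):.2f} s")  # substantially slower
```

The super-simple hypothesis ("all objects fall at the same rate") is true, but demonstrating it requires an experimental set-up that removes the drag, which is where the real difficulty went.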
In complex fields this type of modeling via simplification doesn’t seem to cash out as well, because it’s more difficult to control the experimental set-up and the confounding effects aren't negligible. So while I've always believed that models need to be simple, I would consider a different paradigm if it could work. How could understanding the world work any other way than through simple models?
Some method trends in biology: high-throughput, random searches, brute force, etc.
Biology is a special case of physics. Physicists may at some point arrive at a Grand Unified Theory of Everything that theoretically implies all of biology.
Biology is the classification and understanding of the complicated results of physics, so it is in many ways basically an almanac.