But the hard steps model also rules out scenarios involving many steps, each individually easy, leading to human intelligence, right?
If the steps are sequential, the time to evolve human intelligence is the sum of many independent small step times, which (absent a cutoff) gives a roughly normal distribution, and conditioning on success within the habitable window pushes completion times toward its end. So if there were a billion steps with expected step times of a million years, you would expect us to find ourselves much closer to the end of Earth's habitable window.
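That concentration effect can be sketched numerically. The toy numbers below are mine, not the paper's: 100 sequential "easy" steps whose total expected time (10 Gyr) exceeds a 5 Gyr window, so that the rare successful runs have to be sampled from the truncated distribution.

```python
import numpy as np
from scipy import stats

# Toy numbers (illustrative, not the paper's): 100 sequential "easy"
# steps, each exponential with mean 100 Myr, in a 5 Gyr habitable window.
n_steps, mean_step, window = 100, 1e8, 5e9

# The total time is Gamma(n_steps, mean_step): mean 10 Gyr, sd ~1 Gyr,
# approximately normal by the central limit theorem -- well above the window.
total = stats.gamma(a=n_steps, scale=mean_step)

# Condition on finishing inside the window (the observation selection
# effect) by inverse-transform sampling from the truncated distribution.
rng = np.random.default_rng(0)
u = rng.uniform(0, total.cdf(window), size=100_000)
conditional = total.ppf(u)

# Successful completions crowd into the very end of the window,
# unlike what we observe on Earth.
print(conditional.mean() / window)          # close to 1
print((conditional > 0.9 * window).mean())  # most finish in the last 10%
```

Because the unconditional total is sharply peaked far beyond the window, the conditional density rises steeply up to the cutoff, so almost all "lucky" worlds see intelligence arrive just before habitability ends.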
Why think that overall it gives a boost for the easy intelligence hypothesis?
Let's take as our starting point that intelligence is difficult enough to occur in less than 1% of star systems like ours. One supporting argument is that, if we started with a flattish prior over difficulty, much of the credence for intelligence being at least that easy to evolve would have fallen on scenarios in which intelligence was easy enough to reliably develop near the beginning of Earth's habitable window [see Carter (1983)]. Another is the Great Filter: the lack of visible alien intelligence.
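The Carter-style update can be illustrated with a toy Bayesian calculation (the grid, prior range, and waiting-time model are my own illustrative assumptions): under a flat prior over the log of the expected evolution time, observing intelligence arising late in the window moves nearly all posterior mass away from the "easy" hypotheses.

```python
import numpy as np

window, t_obs = 5e9, 4e9  # habitable window; observed arrival time (years)
log_tau = np.linspace(7, 15, 801)  # flat prior over log10 of expected time
tau = 10.0 ** log_tau

# Exponential waiting-time model: density of arrival at t_obs,
# conditioned on arriving at all within the window (selection effect).
like = (np.exp(-t_obs / tau) / tau) / (1 - np.exp(-window / tau))
post = like / like.sum()

# "Easy" hypotheses: expected time under a tenth of the window.
easy = tau < window / 10
print(easy.mean(), post[easy].sum())  # prior mass vs. tiny posterior mass
```

Easy hypotheses predict arrival near the start of the window; an arrival at 4 Gyr is astronomically unlikely under them, so the posterior concentrates on difficulties comparable to or exceeding the window length.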
So we need some barriers to the evolution of intelligence. The hard steps analysis then places limits on their number, and stronger limits on the number occurring since the development of brains, or since primates. This suggests that the barriers will collectively be much easier for engineers to work around than a random draw from our distribution after updating on the above considerations, but before considering the hard steps models.
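The timing logic behind those limits can be checked by simulation (toy numbers of my own, not the paper's): conditional on k hard steps all completing inside the window, the completion time lands near k/(k+1) of the way through, leaving an expected window/(k+1) to spare, so the leftover habitable time bounds how many hard steps there can have been.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy numbers (mine): 4 hard steps, each exponentially distributed
# with mean 20 Gyr, inside a 5 Gyr habitable window.
k, mean_step, window = 4, 2e10, 5e9

# Success is rare, so rejection-sample the runs that finish in time.
steps = rng.exponential(mean_step, size=(2_000_000, k))
totals = steps.sum(axis=1)
done = totals[totals < window]

# Conditional on success, completion clusters near k/(k+1) of the window,
# leaving roughly window/(k+1) of habitable time unused.
print(len(done), (window - done).mean() / 1e9)  # leftover of roughly 1 Gyr
```

With around a billion years of habitability apparently remaining on Earth out of a window of several billion, this style of argument caps the number of hard steps at a handful.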
We had more explanation of this, cut for space constraints. Perhaps we should reinstate it.
If you're interested in evolution, anthropics, and AI timelines -- or in what the Singularity Institute has been producing lately -- you might want to check out this new paper, by SingInst research fellow Carl Shulman and FHI professor Nick Bostrom.
The paper:
How Hard is Artificial Intelligence? The Evolutionary Argument and Observation Selection Effects
The abstract:
Several authors have made the argument that because blind evolutionary processes produced human intelligence on Earth, it should be feasible for clever human engineers to create human-level artificial intelligence in the not-too-distant future. This evolutionary argument, however, has ignored the observation selection effect that guarantees that observers will see intelligent life having arisen on their planet no matter how hard it is for intelligent life to evolve on any given Earth-like planet. We explore how the evolutionary argument might be salvaged from this objection, using a variety of considerations from observation selection theory and analysis of specific timing features and instances of convergent evolution in the terrestrial evolutionary record. We find that a probabilistic version of the evolutionary argument emerges largely intact once appropriate corrections have been made.
I'd be interested to hear LW-ers' takes on the content; Carl, too, would much appreciate feedback.