The solution space is so large that even a protein sampling its points at a rate of trillions per second couldn't fold by searching randomly through all possible configurations; finding the minimum-energy conformation that way is intractable (the general problem is NP-complete). Of course, proteins don't actually do this. Instead they fold piece by piece as they are produced, with local interactions forming domains that tend to retain their approximate structure once they come together to form a whole protein. As a result, they don't necessarily end up in the lowest possible energy state. Prion diseases are an example of what can happen when a protein enters a normally inaccessible local energy minimum, one which in that case happens to have a snowballing effect on other proteins.
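The scale of that random-search argument (essentially Levinthal's paradox) is easy to check with a back-of-envelope calculation. The numbers below are illustrative assumptions, not measurements: roughly three backbone conformations per residue, a 100-residue protein, and the "trillions per second" sampling rate from above.

```python
# Back-of-envelope illustration of why random conformational search fails.
# Toy assumptions: ~3 conformations per residue, a 100-residue protein,
# and sampling at 10^12 configurations per second.
conformations_per_residue = 3
residues = 100
samples_per_second = 1e12  # "trillions per second"

total_conformations = conformations_per_residue ** residues
seconds_needed = total_conformations / samples_per_second
age_of_universe_s = 4.3e17  # ~13.8 billion years, in seconds

print(f"conformations to search: {total_conformations:.2e}")
print(f"years of random search:  {seconds_needed / 3.15e7:.2e}")
print(f"multiples of the universe's age: {seconds_needed / age_of_universe_s:.2e}")
```

Even with these deliberately conservative toy numbers, exhaustive search takes around 10^18 times the age of the universe, which is why real proteins must be doing something very different from random sampling.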
The result is that they follow grooves in the energy landscape toward an energy well robust enough to withstand all sorts of variation, including the horrific inaccuracies of our attempts at modeling. Our energy functions are very crude approximations of the real one, which depends on quantum-level effects and is therefore intractable. Another issue is that proteins don't fold in isolation: they interact with chaperone proteins and all sorts of other cellular machinery. Simulating a protein's folding might therefore require simulating a lot of other things besides just the protein in question.
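The "robust well" intuition can be illustrated with a toy one-dimensional landscape. This is purely a cartoon, not a real force field: the energy function below has a broad, shallow basin and a narrow, deeper one, and a simple simulated-annealing walker (standing in for the folding dynamics) tends to end up in the broad basin, because a wide funnel is far easier to find and stay in than a narrow one, even when the narrow one is the global minimum.

```python
import math
import random

# Toy 1D "energy landscape" (not a real force field): a broad shallow
# basin near x = +2 and a narrow but deeper basin near x = -2.
def energy(x):
    broad = -1.0 * math.exp(-((x - 2.0) ** 2) / 4.0)    # wide, shallow well
    narrow = -1.5 * math.exp(-((x + 2.0) ** 2) / 0.02)  # deep, tiny well
    return broad + narrow

def anneal(x0, steps=20000, seed=0):
    """Simulated annealing from x0: propose Gaussian moves, always accept
    downhill, accept uphill with Boltzmann probability at temperature T."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    for i in range(steps):
        temp = max(0.01, 1.0 - i / steps)  # linear cooling schedule
        cand = x + rng.gauss(0.0, 0.3)
        ce = energy(cand)
        if ce < e or rng.random() < math.exp((e - ce) / temp):
            x, e = cand, ce
    return x

# With these toy parameters, runs typically funnel into the broad basin
# near x = +2 even though the global minimum is the narrow well at x = -2.
finals = [anneal(float(x0), seed=x0) for x0 in range(-5, 6)]
in_broad = sum(1 for x in finals if abs(x - 2.0) < 1.5)
print(f"{in_broad}/{len(finals)} runs ended in the broad basin")
```

The point of the cartoon is kinetic accessibility: a basin that captures trajectories from a wide range of starting points is also the one that survives perturbations, including the errors in our approximate energy functions.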
Even our ridiculously poor attempts at in silico folding are not completely useless, though, and they can be improved with the help of the human brain (see Foldit). I think an A.I. should make good progress on the proteins that already exist. Even if it couldn't design arbitrary new ones from scratch, intelligent modification of existing proteins would likely be enough to get almost anything done. Note also that an A.I. with that much power wouldn't be limited to what already exists: technology is already in the works to produce arbitrary non-protein polymers using ribosome-like systems, and that sort of thing would open up an unimaginably large space of solutions that existing biology doesn't have access to.
Summary: Intelligence Explosion Microeconomics (pdf) is 40,000 words taking some initial steps toward tackling the key quantitative issue in the intelligence explosion, "reinvestable returns on cognitive investments": what kind of returns can you get from an investment in cognition, can you reinvest those returns to make yourself even smarter, and does this process die out or blow up? This can be thought of as the compact and hopefully more coherent successor to the AI Foom Debate of a few years back.
(Sample idea you haven't heard before: The increase in hominid brain size over evolutionary time should be interpreted as evidence about increasing marginal fitness returns on brain size, presumably due to improved brain wiring algorithms; not as direct evidence about an intelligence scaling factor from brain size.)
I hope that the open problems posed therein inspire further work by economists or economically literate modelers, interested specifically in the intelligence explosion qua cognitive intelligence rather than non-cognitive 'technological acceleration'. MIRI has an intended-to-be-small-and-technical mailing list for such discussion. In case it's not clear from context, I (Yudkowsky) am the author of the paper.
Abstract:
The dedicated mailing list will be small and restricted to technical discussants.
This topic was originally intended to be a sequence in Open Problems in Friendly AI, but further work produced something compacted beyond where it could be easily broken up into subposts.
Outline of contents:
1: Introduces the basic questions and the key quantitative issue of sustained reinvestable returns on cognitive investments.
2: Discusses the basic language for talking about the intelligence explosion, and argues that we should pursue this project by looking for underlying microfoundations, not by pursuing analogies to allegedly similar historical events.
3: Goes into detail on what I see as the main arguments for a fast intelligence explosion; this section constitutes the bulk of the paper.
4: A tentative methodology for formalizing theories of the intelligence explosion - a project of formalizing possible microfoundations and explicitly stating their alleged relation to historical experience, such that some possibilities can allegedly be falsified.
5: Which open sub-questions seem both high-value and possibly answerable.
6: Formally poses the Open Problem and mentions what it would take for MIRI itself to directly fund further work in this field.