It's more a relative thing---"not quite as extremely biased towards academia as the average group of this level of intellectual orientation can be expected to be".
If so, then we're actually more rational, right? We're not biased against academia as most people are, nor biased toward academia as most academics are.
Well, you want some negative selection: Choose dating partners from among the set who are unlikely to steal your money, assault you, or otherwise ruin your life.
This is especially true for women, for whom the risk of being raped is considerably higher and obviously worth selecting against.
I don't think it's quite true that "fail once, fail forever", but the general point is valid: our selection process is too much about weeding out rather than choosing the best. Also, academia doesn't seem to be very good at the negative selection that would make sense, e.g. excluding people who are likely to commit fraud or who have fundamentally anti-scientific values. (Otherwise, how can you explain how Duane Gish made it through Berkeley?)
This question is broader than just AI. Economic growth is closely tied to technological advancement, and technological advancement in general carries great risks and great benefits.
Consider nuclear weapons, for instance: Was humanity ready for them? They are now something that could destroy us at any time. But on the other hand, they might be the solution to an oncoming asteroid, a threat that has hung over us for millions of years.
Likewise, nanotechnology could create a grey goo event that kills us all; or it could lead to a world without poverty, without disease, where we all live as long as we like and have essentially unlimited resources.
It's also worth asking whether slowing technology would even help; cultural advancement seems somewhat dependent upon technological advancement. It's not clear to me that had we taken another 100 years to get nuclear weapons we would have used them any more responsibly; perhaps it simply would have taken that much longer to achieve the Long Peace.
In any case, I don't really see any simple intervention that would slow technological advancement without causing an enormous amount of collateral damage. So unless you're quite sure that the benefit in terms of slowing down dangerous technologies like unfriendly AI outweighs the cost in slowing down beneficial technologies, I don't think slowing down technology is the right approach.
Instead, find ways to establish safeguards, and incentives for developing beneficial technologies faster. To some extent we already do this: Nuclear research continues at CERN and Fermilab, but when we learn that Iran is working on similar technologies we are concerned, because we don't think Iran's government is trustworthy enough to deal with these risks. There aren't enough safeguards against unfriendly AI or incentives to develop friendly AI, but that's something the Singularity Institute or similar institutions could very well work on. They could lobby for legislation on artificial intelligence, or raise funds for an endowment that supports friendliness research.