In 2005, Hurricane Rita was associated with 111 deaths. Only 3 of those were caused by the hurricane itself; 90 were caused by the mass evacuation.
The FDA is supposed to approve new drugs and procedures if the expected benefits outweigh the expected costs. If they actually did this, their errors on both sides (approvals of bad drugs vs. rejections of good drugs) would be roughly equal. The most-publicized drug withdrawal in the past 10 years was that of Vioxx, which the FDA estimated killed a total of 5,165 people over 5 years, or about 1,000 people/year. If the errors really were balanced, this suggests that the best drug the FDA rejected during that decade could have saved about 1,000 people/year. But during that decade, many drugs were (or could have been) approved that might save more than that many lives every year. Gleevec (invented 1993, approved 2001) is believed to save about 10,000 lives a year. Herceptin (invented in the 1980s, began human trials 1991, approved for some patients in 1998, more in 2006, and more in 2010) was estimated to save 1,000 lives a year in the United Kingdom, which would translate to roughly 5,000 lives a year in the US. Patients on Apixaban (discovered in 2006, not yet approved) have 11% fewer deaths from stroke than patients on warfarin, and stroke causes about 140,000 deaths/year in the US. To stay below the expected drug-rejection error level of 1,000 people/year, given just these three drugs (and assuming that Apixaban pans out and can save 5,000 lives/year), the FDA would need a faulty-rejection rate F such that F × 10,000 + F × 5,000 + F × 5,000 < 1,000, which gives F < 5%. This seems unlikely.
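To make that arithmetic explicit, here is a minimal sketch of the bound on F, using the lives-saved estimates quoted above (the 5,000/year figure for Apixaban is, as noted, an assumption):

```python
# Estimated lives saved per year by drugs from the paragraph above.
# Treating Apixaban as 5,000 lives/year is an assumption, as noted.
lives_saved_per_year = {
    "Gleevec": 10_000,
    "Herceptin": 5_000,
    "Apixaban": 5_000,
}

# Vioxx is estimated to have killed about 5,165 people over 5 years,
# i.e. roughly 1,000/year. If approval and rejection errors were balanced,
# expected deaths from faulty rejections should also stay near 1,000/year.
error_budget_per_year = 1_000

# If each of these drugs is wrongly rejected with probability F, the
# expected lives lost per year is F times the sum of lives saved.
F_max = error_budget_per_year / sum(lives_saved_per_year.values())
print(f"faulty-rejection rate F must be below {F_max:.0%}")  # 5%
```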
ADDED: One area where this affects me every day is in branching software repositories. Every software developer agrees that branching the repository head for test versions and for production versions is good practice. Yet branching causes, I would estimate, at least half of our problems with test and production releases. It is common for me to be delayed one to three days while someone figures out that the software isn't running because they issued a patch on one branch and forgot to update the trunk, or forgot to update other development or test versions that are on separate branches. I don't believe in branching anymore - I think we would have fewer bugs if we just did all development on the trunk, and checked out the code when it worked. Branching is good for humongous projects where you have public releases that you can't patch on the head, like Firefox or Linux. But it's out of place for in-house projects where you can just patch the head and re-checkout. The evidence for this in my personal experience as a software developer is overwhelming; yet whenever I suggest not branching, I'm met with incredulity.
Exercise for the reader: Find other cases where cautionary measures are taken past the point of marginal utility.
ADDED: I think that this is the problem: You have observed a distribution of outcome utilities from some category of event, each instance followed by you taking some action A. You observe a new instance of this event category. You want to predict the outcome utility of taking action A in response to it.
Some categories have a power-law outcome distribution with a negative exponent b, indicating there are fewer events of large importance: the number of events of size U is e^(c - bU). Assume that you don't observe all possible values of U: events of importance below U0 are too small to observe, and events with large U are very uncommon. It is then difficult to tell whether the category has a power-law distribution without a lot of previous observations.
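As a rough illustration of why this is hard, here is a minimal numeric sketch assuming the density above; the parameter values are arbitrary, and the point is only how rarely a large event shows up among your observations:

```python
import math

# Minimal numeric sketch of the observation problem, assuming the density
# above, e^(c - bU). The values of b, U0, and U_big are arbitrary and
# chosen only for illustration.
b = 1.0        # decay rate of the distribution
U0 = 1.0       # smallest event large enough to observe
U_big = 10.0   # size of a "catastrophic" event

# Among observable events (U >= U0), the fraction that are catastrophic:
p_big = math.exp(-b * (U_big - U0))
print(f"fraction of observed events that are catastrophic: {p_big:.1e}")
print(f"observations needed to expect to see even one:     {1 / p_big:,.0f}")
```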
If a lot of event categories have a distribution like this, where the big events are bad ones, so that events are usually insignificant but sometimes catastrophic, then it's likely rational to treat these events as if they will be catastrophic. And if you don't have enough observations to know whether the distribution is a power law or something else, it's rational to treat it as if it were a power-law distribution, to be safe.
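Here is a minimal expected-value sketch of that argument; every number in it is made up purely for illustration:

```python
# Minimal expected-value sketch of the argument above. Every number here
# is invented purely for illustration.
p_heavy = 0.10        # credence that this event category has a catastrophic tail
p_cat = 0.01          # chance of a catastrophic outcome, if the tail is heavy
loss_catastrophe = 100_000
loss_typical = 10
cost_precaution = 50  # fixed cost of always treating the event as dangerous

# Expected loss if you ignore the possibility of catastrophe:
ev_ignore = loss_typical + p_heavy * p_cat * loss_catastrophe  # 110

# Expected loss if you always take the precaution (assumed to prevent the
# catastrophic outcome but not the typical loss):
ev_precaution = loss_typical + cost_precaution  # 60

print(f"expected loss without precaution: {ev_ignore:.0f}")
print(f"expected loss with precaution:    {ev_precaution:.0f}")
```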
Could this account for the human risk-aversion "bias"?
If you are the FDA, you face situations where the utility distribution is probably such a power-law distribution mirrored around zero: a few events with very high utility (drugs that save lots of lives), and a similar number of events with the negative of that utility (drugs that lose that many lives). I would guess that situations like that were rare in our ancestral environment, though I don't know.
Also, this is technically not correct:
Actually, if the FDA really did this, the marginal (in this case, most dangerous) drug approved should kill about as many people as it saves. But since every drug approved before that one would save more people than it killed, on net there should be more people saved than killed.
Yay for marginal cost does not equal average cost!
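A toy numeric illustration of that marginal-versus-average point, with invented figures:

```python
# Toy illustration of the marginal-vs-average point. Each pair is
# (lives saved per year, lives lost per year) for a hypothetical approved
# drug, ordered from best to worst; all figures are invented.
approved_drugs = [
    (10_000, 100),    # clearly beneficial
    (5_000, 500),
    (2_000, 1_000),
    (1_000, 1_000),   # the marginal approval: saves about as many as it kills
]

total_saved = sum(saved for saved, _ in approved_drugs)
total_lost = sum(lost for _, lost in approved_drugs)

# The marginal drug breaks even, but the approvals as a whole save far
# more people than they kill.
print(f"saved {total_saved:,}, lost {total_lost:,}")  # saved 18,000, lost 2,600
```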