More (#5) from Worst-Case Scenarios:
Objection 4: [Knightian] uncertainty is too infrequent to be a genuine source of concern for purposes of policy and law

Perhaps regulatory problems, including those mentioned here, hardly ever involve genuine uncertainty. Perhaps regulators are usually able to assign probabilities to outcomes; and where they cannot, perhaps they can instead assign probabilities to probabilities (or, where this proves impossible, probabilities to probabilities of probabilities). For example, we have a lot of information about the orbits of asteroids, and good reason to believe that the risk of a devastating collision is very small. In many cases, such as a catastrophic terrorist attack, regulators might be able to specify a range of probabilities, say, above 0 percent but below 5 percent. Or they might be able to say that the probability that climate change presents a risk of catastrophe is, at most, 20 percent. Some scientists and economists believe that climate change is unlikely to create catastrophic harm, and that the real costs, human and economic, will be high but not intolerable. In their view, the worst-case scenarios can be responsibly described as improbable.
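To make the "probabilities of probabilities" move concrete: second-order probabilities collapse into an ordinary expected probability via the law of total probability. Here is a minimal sketch of that arithmetic (my own illustration, not from Sunstein's text; all numbers are invented):

```python
# Sketch: "probabilities of probabilities" reduce to a single expected
# probability via the law of total probability. Numbers are invented
# for illustration only.

# Suppose experts disagree about the probability p of catastrophe,
# and we hold credences over three hypotheses about p.
hypotheses = [
    (0.5, 0.00),  # credence 0.5 that the true risk is 0%
    (0.4, 0.02),  # credence 0.4 that the true risk is 2%
    (0.1, 0.05),  # credence 0.1 that the true risk is 5%
]

# Expected probability of catastrophe: sum of credence * risk.
p_catastrophe = sum(credence * p for credence, p in hypotheses)
print(f"expected probability of catastrophe: {p_catastrophe:.3f}")  # 0.013
```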
Perhaps we can agree that pure uncertainty is rare. Perhaps we can agree that, at worst, regulatory problems involve problems of "bounded uncertainty," in which we cannot specify probabilities within particular bands. Maybe the risk of a catastrophic outcome is above 1 percent and below 10 percent, but maybe within that band it is impossible to assign probabilities. A sensible approach, then, would be to ask planners to identify a wide range of possible scenarios and to select approaches that do well for most or all of them. Of course, the pervasiveness of uncertainty depends on what is actually known, and in the case of climate change, people dispute what is actually known. Richard Posner believes that "no probabilities can be attached to the catastrophic global-warming scenarios, and without an estimate of probabilities an expected cost cannot be calculated." A 1994 survey of experts showed an extraordinary range of estimated losses from climate change, varying from no economic loss to a 20 percent decrease in gross world product, a catastrophic decline in the world's well-being.
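To see how "select approaches that do well for most or all of them" can work even when no probability can be assigned within the band, here is a minimal sketch of policy choice by minimax regret, one standard rule for decisions under bounded uncertainty (my own construction; the policies, costs, and probability band are invented for illustration):

```python
# Sketch: robust policy choice under "bounded uncertainty", where the
# probability of catastrophe is known only to lie in a band (here 1%-10%).
# Policies, costs, and probabilities are invented for illustration.

CATASTROPHE_COST = 1000.0  # loss if catastrophe occurs, arbitrary units

# Each policy: (name, upfront mitigation cost, residual catastrophe risk
# as a fraction of the baseline probability p).
policies = [
    ("do nothing",       0.0, 1.0),
    ("moderate action", 20.0, 0.5),
    ("crash program",   60.0, 0.1),
]

# Scenarios: candidate values of p spanning the 1%-10% band.
scenarios = [0.01, 0.05, 0.10]

def total_cost(policy, p):
    _, upfront, residual = policy
    return upfront + residual * p * CATASTROPHE_COST

# For each scenario, the best achievable cost among the policies.
best_in_scenario = {p: min(total_cost(pol, p) for pol in policies)
                    for p in scenarios}

# Regret of a policy in a scenario = its cost minus the best achievable
# cost there; minimax regret picks the policy whose worst-case regret
# across the band is smallest.
def worst_regret(policy):
    return max(total_cost(policy, p) - best_in_scenario[p] for p in scenarios)

choice = min(policies, key=worst_regret)
print("policy chosen by minimax regret:", choice[0])
for pol in policies:
    print(f"{pol[0]:>15}: worst-case regret = {worst_regret(pol):.1f}")
```

With these invented numbers, "moderate action" wins: it is never far from optimal anywhere in the band, whereas "do nothing" looks terrible at the top of the band and "crash program" looks terrible at the bottom. No probability within the band was ever needed.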
One open question in AI risk strategy is: Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of human-level AI (and beyond) just fine, without the kinds of special efforts that, e.g., Bostrom and Yudkowsky think are needed?
Some reasons for concern include:
But if you were trying to argue for hope, you might argue along these lines (presented for the sake of argument; I don't actually endorse this argument):
The basic structure of this 'argument for hope' is due to Carl Shulman, though he doesn't necessarily endorse the details. (Also, it's just a rough argument, and as stated it is not deductively valid.)
Personally, I am not very comforted by this argument because:
Obviously, there's a lot more for me to spell out here, and some of it may be unclear. The reason I'm posting these thoughts in such a rough state is so that MIRI can get some help with its research into this question.
In particular, I'd like to know: