One factor that will be difficult to evaluate is how predictions have interacted with later events. Warnings can (at times) be heeded and risks avoided. Those most difficult cases might be precisely the ones of greatest interest given your aims of shifting humanity's odds.
A related question is how much impact these predictions had (aside from their accuracy). Things like Limits to Growth or The Population Bomb were extremely influential in spite of their predictive failures (once again, leaving open the hypothesis that they served as self-refuting prophecies).
Once you have a better sense of these cases, it will also be interesting to evaluate how responses developed. Were the authors or predictors influential in the resulting actions? You mention at least one case in the email thread where the author was shut out of later efforts due to the prediction (Drexler). I'd be curious to see how the triggers interacted with the resulting movements or responses (if any).
Cross-posted from MIRI's blog.
MIRI aims to do research now that increases humanity's odds of successfully managing important AI-related events that are at least a few decades away. Thus, we'd like to know: To what degree can we take actions now that will predictably have positive effects on AI-related events decades from now? And, which factors predict success and failure in planning for decades-distant events that share important features with future AI events?
Or, more generally: How effectively can humans plan for future decades? Which factors predict success and failure in planning for future decades?
To investigate these questions, we asked Jonah Sinick to examine historical attempts to plan for future decades and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as Jonah had done previously with GiveWell on the subject of insecticide-treated nets. The post below is a summary of findings from our full email exchange (.docx) so far.
We decided to publish our initial findings after investigating only a few historical cases. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that we aren't yet able to draw any confident conclusions about our core questions.
The most significant results from this project so far are:
The project has also produced a chapter-by-chapter list of some key lessons from Nate Silver's The Signal and the Noise, available here.
Further details are given below. For sources and more, please see our full email exchange (.docx).
The Limits to Growth
In his initial look at The Limits to Growth (1972), Jonah noted that the authors were fairly young at the time of writing (the oldest was 31), and they lacked credentials in long-term forecasting. Moreover, it appeared that Limits to Growth predicted a sort of doomsday scenario - à la Ehrlich's The Population Bomb (1968) - that had failed to occur. In particular, it appeared that Limits to Growth had failed to appreciate Julian Simon's point that other resources would substitute for depleted resources. Upon reading the book, Jonah found that:
Svante Arrhenius
Svante Arrhenius' equation for how the Earth's temperature varies as a function of the concentration of carbon dioxide, derived more than a century ago, is the same equation used today. But while Arrhenius' climate modeling was impressive given the information available to him at the time, he failed to predict (by a large margin) how quickly fossil fuels would be burned. He also predicted that global warming would have positive humanitarian effects, but based on our current understanding, the expected humanitarian effects seem negative.
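For reference, the logarithmic relationship Arrhenius derived (sometimes called his "greenhouse law") is usually written in roughly the following form; the notation here is ours, not his:

$$\Delta T \approx S \cdot \frac{\ln(C / C_0)}{\ln 2}$$

where $C$ is the atmospheric carbon dioxide concentration, $C_0$ is a reference concentration, and $S$ is the warming expected per doubling of carbon dioxide (the climate sensitivity). The functional form has held up well; what Arrhenius got wrong were his estimates of how quickly $C$ would grow and the sign of the humanitarian effects.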
Arrhenius' predictions were mostly ignored at the time, but had people taken them seriously and burned fossil fuels more quickly, the humanitarian effects would probably have been negative.
Norbert Wiener
As Jonah explains, Norbert Wiener (1894-1964) "believed that unless countermeasures were taken, automation would render low skilled workers unemployable. He believed that this would precipitate an economic crisis far worse than that of the Great Depression." Nearly 50 years after his death, this doesn't seem to have happened much, though it may eventually happen.
Jonah's impression is that Wiener had strong views on the subject, doesn't seem to have updated much in response to incoming evidence, and seems to have relied too heavily on what Berlin (1953) and Tetlock (2005) described as "hedgehog" thinking: "the fox knows many things, but the hedgehog knows one big thing."
Some historical cases that seem unlikely to shed light on our questions
Rasmussen (1975) is a probabilistic risk assessment of nuclear power plants, written before any nuclear power plant disasters had occurred. However, Jonah concluded that this historical case wasn't very relevant to our specific questions about taking actions useful for decades-distant AI outcomes, in part because the issue is highly domain-specific, and because the report makes a large number of small predictions rather than a few salient predictions.
In 1936, Leó Szilárd assigned his chain-reaction patent to the British Admiralty so that it would be kept secret from the Nazis. However, Jonah concluded:
Jonah briefly investigated Cold War efforts aimed at winning the war decades later, but concluded that it was "too difficult to tie these efforts to war outcomes."
Jonah also investigated Kaj Sotala's A brief history of ethically concerned scientists. Most of the historical cases cited there didn't seem relevant to this project. Many cases involved "scientists concealing their discoveries out of concern that they would be used for military purposes," but this seems to be an increasingly irrelevant sort of historical case, since science and technology markets are now relatively efficient, and concealing a discovery rarely delays progress for very long (e.g. see Kelly 2011). Other cases involved efforts to reduce the use of dangerous weapons for which the threat was imminent during the time of the advocacy. There may be lessons among these cases, but they appear to be of relatively weak relevance to our current project.
Some historical cases that might shed light on our questions with much additional research
Jonah performed an initial investigation of the impacts of China's one-child policy, and concluded that it would take many, many hours of research to determine both the sign and the magnitude of the policy's impacts.
Jonah also investigated a case involving the Ford Foundation. In a conversation with GiveWell, Lant Pritchett said:
Unfortunately, Jonah was unable to find any sources or contacts that would allow him to check whether this story is true.
Other historical cases that might be worth investigating
Historical cases we identified but have not yet investigated include: