I'm a little puzzled that you don't include such obvious examples of attempted long-term schemes as Soviet (and others') five-year plans, or dirigisme more generally. Another class of human plans intended to have long-term effects is peace treaties and settlements; probably the diplomats at Versailles did not literally think they were making a peace for all time, but surely they expected to do better than having another shattering conflict in their own lifetimes. Conversely, at Vienna the Great Powers probably intended their settlement to last thirty or fifty years, but would likely have been surprised to learn that the next Great War would be almost 100 years later. And I doubt the negotiators at Westphalia thought they were creating a concept of statehood that we would still be using 350 years later; they may well have thought they were patching up yet another temporary truce to allow shattered armies and stripped economies to recover a bit.
With cryptography, the government attempted to delay mainstream access to the technology - so that it could keep benefiting from using it itself. It would be interesting to know whether it is doing the same with mainstream machine intelligence efforts - for example, via intellectual property laws and secrecy orders.
Can you recommend any good histories of the government's attempts to delay mainstream access to new cryptographic techniques?
James Bamford's books in this area are very readable:
The classic history of the field is this one, but you'll get some coverage of the topic in practically any popular book on cryptography.
One factor that will be difficult to evaluate is how predictions have interacted with later events. Warnings can (at times) be heeded and risks avoided. Those most difficult cases might be precisely the ones of greatest interest given your aims of shifting humanity's odds.
A related question is how much impact these predictions had (aside from their accuracy). Things like Limits to Growth or The Population Bomb were extremely influential in spite of their predictive failures (once again, setting aside the hypothesis that they served as self-refuting prophecies).
Once you have a better sense of these cases, it will also be interesting to evaluate how responses developed. Were the authors or predictors influential in the resulting actions? You mention at least one case in the email thread where the author was shut out of later efforts due to the prediction (Drexler). I'd be curious to see how the triggers interacted with the resulting movements or responses (if any).
In order to avoid selection bias, it would be good to define some domain(s) and then study all long-term predictions in those domains. Something along the lines of "the first 100 studies listed in a search of such-and-such database using these keywords". Note, I haven't read the full email exchange, so if this is addressed there, I apologize for wasting your time.
Jonah's impression is that Wiener had strong views on the subject, doesn't seem to have updated much in response to incoming evidence
My impression is Jonah may have gotten wrong impressions of Wiener's views. I also didn't see where Jonah talked about Wiener not having updated much in response to incoming evidence. (What evidence?) Did you see that in his post, or did he write about it elsewhere?
My impression is Jonah may have gotten wrong impressions of Wiener's views.
I responded here.
I also didn't see where Jonah talked about Wiener not having updated much in response to incoming evidence. (What evidence?) Did you see that in his post, or did he write about it elsewhere?
I wrote this in our full email exchange and didn't provide justification. I no longer remember what I had in mind, and I may not have had good reasons for saying that.
My best guess is that I was thinking something along the lines of "he didn't investigate sufficiently thoroughly to solicit and understand other people's opinions on the subject," but this is coming primarily from a general strong prior that people don't solicit other perspectives and try to understand them, rather than anything specific to Wiener, and I recognize that there's room for disagreement as to what prior is appropriate.
but this is coming primarily from a general strong prior that people don't solicit other perspectives and try to understand them, rather than anything specific to Wiener
It seems really wrong for you to state any conclusions based solely on your prior, since the whole point of this exercise is to gather evidence about how hard it is to plan for the future. Don't you think that given the purpose of the project, people would naturally interpret all of your writings from the project as being about the evidence that you found, rather than about your personal priors?
It seems really wrong for you to state any conclusions based solely on your prior
Morally wrong? ;)
the whole point of this exercise is to gather evidence about how hard it is to plan for the future. Don't you think that given the purpose of the project, people would naturally interpret all of your writings from the project as being about the evidence that you found, rather than about your personal priors?
I didn't come across evidence that Wiener did update his beliefs.
Do you think he should have updated his beliefs, if so how? Given that he started writing about this stuff in 1947, and died in 1964, I'm not sure what kind of update he could have possibly (ideally) performed, that might justify the conclusion that he "doesn't seem to have updated much in response to incoming evidence".
Perhaps one update may be that unemployment isn't as urgent a problem as he thought, assuming he did originally think it really urgent. But note that in the second writing I linked to, 13 years after his first, he no longer talked about unemployment. If he both thought the issue urgent and failed to update, don't you think he would have repeated his warnings in an article dedicated to "the social consequences of [cybernetic techniques]"?
Note that the email exchange with Luke was very long. Taking enough care so as to make sure that every statement that I made was epistemically justified would have been prohibitively time consuming.
This seems like a poor excuse, given that the statement in question was part of the main conclusions of the Wiener project, not a tangential remark, which is why Luke chose to repeat it for public consumption. In any case, do you currently think it sufficiently justified to be included in Luke's post?
As a meta-remark, I think that you're being unnecessarily combative / aggressive.
This seems like a poor excuse, given that the statement in question was part of the main conclusions of the Wiener project, not a tangential remark, which is why Luke chose to repeat it for public consumption.
To my mind, the key takeaway from the Wiener case study is that the juxtaposition of
(i) Automation hasn't dramatically increased unemployment, and
(ii) Wiener expressed concern that automation would dramatically increase unemployment
shouldn't be taken as evidence that it's not possible to make predictions about AI. My original justification for this takeaway was "Wiener was wrong, but his methodology was bad." Your view seems to be "Wiener wasn't wrong," but while different from what I said, this is also a justification for the takeaway. So I don't think that it matters much either way.
As a meta-remark, I think that you're being unnecessarily combative / aggressive.
Thanks for the feedback.
Your view seems to be "Wiener wasn't wrong," but while different from what I said, this is also a justification for the takeaway. So I don't think that it matters much either way.
Ok, this makes your position more understandable. I guess I was thinking that Wiener's case also has relevance for other issues that we care about, for example what kind of epistemic standards we can expect mainstream AGI researchers (or mainstream elites in general) to adopt when thinking about the future.
As a meta-remark, I think that you're being unnecessarily combative / aggressive.
A third party perspective: I hadn't noticed this while watching the thread in RSS, so I went back and checked. I now think that if I were Wei Dai, the change I'd want to make in future threads would be to avoid language like "really wrong" and "a poor excuse" in favour of less loaded terms like "a big mistake" or "not a good reason".
Cross-posted from MIRI's blog.
MIRI aims to do research now that increases humanity's odds of successfully managing important AI-related events that are at least a few decades away. Thus, we'd like to know: To what degree can we take actions now that will predictably have positive effects on AI-related events decades from now? And, which factors predict success and failure in planning for decades-distant events that share important features with future AI events?
Or, more generally: How effectively can humans plan for future decades? Which factors predict success and failure in planning for future decades?
To investigate these questions, we asked Jonah Sinick to examine historical attempts to plan for future decades and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as Jonah had done previously with GiveWell on the subject of insecticide-treated nets. The post below is a summary of findings from our full email exchange (.docx) so far.
We decided to publish our initial findings after investigating only a few historical cases. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that we aren't yet able to draw any confident conclusions about our core questions.
The most significant results from this project so far are:
The project has also produced a chapter-by-chapter list of some key lessons from Nate Silver's The Signal and the Noise, available here.
Further details are given below. For sources and more, please see our full email exchange (.docx).
The Limits to Growth
In his initial look at The Limits to Growth (1972), Jonah noted that the authors were fairly young at the time of writing (the oldest was 31), and they lacked credentials in long-term forecasting. Moreover, it appeared that Limits to Growth predicted a sort of doomsday scenario - à la Ehrlich's The Population Bomb (1968) - that had failed to occur. In particular, it appeared that Limits to Growth had failed to appreciate Julian Simon's point that other resources would substitute for depleted resources. Upon reading the book, Jonah found that:
Svante Arrhenius
Derived more than a century ago, Svante Arrhenius's equation for how the Earth's temperature varies as a function of the concentration of carbon dioxide is the same equation used today. But while Arrhenius's climate modeling was impressive given the information available to him at the time, he failed to predict (by a large margin) how quickly fossil fuels would be burned. He also predicted that global warming would have positive humanitarian effects, but based on our current understanding, the expected humanitarian effects seem negative.
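For reference, the modern simplified form of this relationship expresses radiative forcing as a logarithmic function of CO2 concentration. The sketch below is not from the email exchange; it uses the commonly cited present-day value of the forcing coefficient rather than Arrhenius's original figures:

\[
\Delta F = \alpha \ln\!\left(\frac{C}{C_0}\right), \qquad \Delta T \approx \lambda\, \Delta F, \qquad \alpha \approx 5.35\ \mathrm{W\,m^{-2}},
\]

where \(C_0\) is a reference CO2 concentration and \(\lambda\) is a climate sensitivity parameter, so each doubling of CO2 concentration produces roughly the same additional warming.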
Arrhenius's predictions were mostly ignored at the time, but had people taken them seriously and burned fossil fuels more quickly, the humanitarian effects would probably have been negative.
Norbert Wiener
As Jonah explains, Norbert Wiener (1894-1964) "believed that unless countermeasures were taken, automation would render low skilled workers unemployable. He believed that this would precipitate an economic crisis far worse than that of the Great Depression." Nearly 50 years after his death, this doesn't seem to have happened much, though it may eventually happen.
Jonah's impression is that Wiener had strong views on the subject, doesn't seem to have updated much in response to incoming evidence, and seems to have relied too heavily on what Berlin (1953) and Tetlock (2005) described as "hedgehog" thinking: "the fox knows many things, but the hedgehog knows one big thing."
Some historical cases that seem unlikely to shed light on our questions
Rasmussen (1975) is a probabilistic risk assessment of nuclear power plants, written before any nuclear power plant disasters had occurred. However, Jonah concluded that this historical case wasn't very relevant to our specific questions about taking actions useful for decades-distant AI outcomes, in part because the issue is highly domain specific, and because the report makes a large number of small predictions rather than a few salient predictions.
In 1936, Leó Szilárd assigned his chain reaction patent in a way that ensured it would be kept secret from the Nazis. However, Jonah concluded:
Jonah briefly investigated Cold War efforts aimed at winning the war decades later, but concluded that it was "too difficult to tie these efforts to war outcomes."
Jonah also investigated Kaj Sotala's A brief history of ethically concerned scientists. Most of the historical cases cited there didn't seem relevant to this project. Many cases involved "scientists concealing their discoveries out of concern that they would be used for military purposes," but this seems to be an increasingly irrelevant sort of historical case, since science and technology markets are now relatively efficient, and concealing a discovery rarely delays progress for very long (e.g. see Kelly 2011). Other cases involved efforts to reduce the use of dangerous weapons for which the threat was imminent during the time of the advocacy. There may be lessons among these cases, but they appear to be of relatively weak relevance to our current project.
Some historical cases that might shed light on our questions with much additional research
Jonah performed an initial investigation of the impacts of China's one-child policy, and concluded that it would take many, many hours of research to determine both the sign and the magnitude of the policy's impacts.
Jonah also investigated a case involving the Ford Foundation. In a conversation with GiveWell, Lant Pritchett said:
Unfortunately, Jonah was unable to find any sources or contacts that would allow him to check whether this story is true.
Other historical cases that might be worth investigating
Historical cases we identified but did not yet investigate include: