I'm currently working on a research project for MIRI, and I would welcome feedback on my research as I proceed. In this post, I describe the project.
As a part of an effort to steel-man objections to MIRI's mission, MIRI Executive Director Luke Muehlhauser has asked me to develop the following objection:
"Even if AI is somewhat likely to arrive during the latter half of this century, how on earth can we know what to do about it now, so far in advance?"
In Luke's initial email to me, he wrote:
I think there are plausibly many weak arguments and historical examples suggesting that P: "it's very hard to nudge specific distant events in a positive direction through highly targeted actions or policies undertaken today." Targeted actions might have no lasting effect, or they might completely miss their mark, or they might backfire.
If P is true, this would weigh against the view that a highly targeted intervention today (e.g. Yudkowsky's Friendly AI math research) is likely to positively affect the future creation of AI, and might instead weigh in favor of the view that all we can do about AGI from this distance is to engage in broad interventions likely to improve our odds of wisely handling future crises in general — e.g. improving decision-making institutions, spreading rationality, etc.
I'm interested in abstract arguments for P, but I'm even more interested in historical data. What can we learn from seemingly analogous cases, and are those cases analogous in the relevant ways? What sorts of counterfactual history can we do to clarify our picture?
Luke and I brainstormed potential historical examples of people predicting the future 10+ years out and using those predictions to inform their actions. We came up with the following list, ordered chronologically by approximate year:
- 1896: Svante Arrhenius's prediction of anthropogenic climate change.
- 1935: Leo Szilard's attempts to keep his patent on the nuclear chain reaction, the basis of the atomic bomb, secret from Germany.
- 1950-1980: Efforts aimed at winning the Cold War decades in the future, such as increasing education for gifted children.
- 1960: Norbert Wiener highlighting the dangers of artificial intelligence.
- 1972: The circle of ideas and actions surrounding The Limits to Growth, a book about the consequences of unchecked population and economic growth.
- 1975: The WASH-1400 reactor safety study, which attempted to assess the risks associated with nuclear reactors.
- 1975: The Asilomar Conference on Recombinant DNA, which set up guidelines to ensure the safety of recombinant DNA technology.
- 1978: China's one-child policy to reduce population growth.
- 1980: The Ford Foundation's establishment of a policy think tank in India that later helped India respond to its 1991 financial crisis.
- 1988: Early climate change mitigation efforts.
- 1992+: Asteroid strike deflection efforts.
- ???: Possible deliberate long-term efforts to produce revolutionary scientific technologies.
- ???: Long-term computer security research.
In addition to the historical examples, we noted two books relevant to the accuracy of long-range predictions:

- The Signal and the Noise: Why So Many Predictions Fail — but Some Don't by Nate Silver.
- Expert Political Judgment: How Good Is It? How Can We Know? by Philip Tetlock.