Suppose you buy the argument that humanity faces both the risk of AI-caused extinction and the opportunity to shape an AI-built utopia. What should we do about that? As Wei Dai asks, "In what direction should we nudge the future, to maximize the chances and impact of a positive intelligence explosion?"
This post serves as a table of contents and an introduction for an ongoing strategic analysis of AI risk and opportunity.
Contents:
- Introduction (this post)
- Humanity's Efforts So Far
- A Timeline of Early Ideas and Arguments
- Questions We Want Answered
- Strategic Analysis Via Probability Tree
- Intelligence Amplification and Friendly AI
- ...
Why discuss AI safety strategy?
The main reason to discuss AI safety strategy is, of course, to draw on a wide spectrum of human expertise and processing power to clarify our understanding of the factors at play and the expected value of particular interventions we could invest in: raising awareness of safety concerns, forming a Friendly AI team, pursuing differential technological development, investigating AGI confinement methods, and so on.
Discussing AI safety strategy is also a challenging exercise in applied rationality. The relevant issues are complex and uncertain, but we need to take advantage of the fact that rationality is faster than science: we can't "try" a bunch of intelligence explosions and see which one works best. We'll have to predict in advance how the future will develop and what we can do about it.
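To make the "expected value of interventions" framing concrete, here is a minimal sketch (not from the original post) of the probability-tree style of analysis previewed in the contents above. Every branch name, probability, and utility value below is an illustrative placeholder, not an estimate; the point is only to show the shape of the comparison.

```python
# A minimal probability-tree sketch: compare hypothetical interventions by how
# much they shift the expected value of the outcome distribution.
# All probabilities and utilities are illustrative placeholders.

from dataclasses import dataclass


@dataclass
class Scenario:
    name: str
    probability: float  # P(scenario) under a given policy
    value: float        # crude utility assigned to the scenario


def expected_value(scenarios: list[Scenario]) -> float:
    """Expected value over one layer of branches in the tree."""
    total_p = sum(s.probability for s in scenarios)
    assert abs(total_p - 1.0) < 1e-9, "branch probabilities must sum to 1"
    return sum(s.probability * s.value for s in scenarios)


# Hypothetical baseline: no additional safety effort.
baseline = [
    Scenario("positive intelligence explosion", 0.20, 1.0),
    Scenario("negative intelligence explosion", 0.50, -1.0),
    Scenario("no intelligence explosion this century", 0.30, 0.0),
]

# Hypothetical world where some intervention shifts a few percentage points
# of probability mass from the negative branch to the positive one.
with_intervention = [
    Scenario("positive intelligence explosion", 0.25, 1.0),
    Scenario("negative intelligence explosion", 0.45, -1.0),
    Scenario("no intelligence explosion this century", 0.30, 0.0),
]

print("baseline EV:         ", expected_value(baseline))
print("with intervention EV:", expected_value(with_intervention))
print("estimated gain:      ", expected_value(with_intervention) - expected_value(baseline))
```

In a real analysis the tree would have many more branches and deep uncertainty about the numbers, but even this toy version shows why the questions below matter: most of the work lies in estimating how much each intervention actually moves the probabilities.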
Core readings
Before engaging with this series, I recommend you read at least the following articles:
- Muehlhauser & Salamon, Intelligence Explosion: Evidence and Import (2013)
- Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk (2008)
- Chalmers, The Singularity: A Philosophical Analysis (2010)
Example questions
Which strategic questions would we like to answer? Muehlhauser (2011) elaborates on the following questions:
- What methods can we use to predict technological development?
- Which kinds of differential technological development should we encourage, and how?
- Which open problems are safe to discuss, and which are potentially dangerous?
- What can we do to reduce the risk of an AI arms race?
- What can we do to raise the "sanity waterline," and how much will this help?
- What can we do to attract more funding, support, and research to x-risk reduction and to specific sub-problems of successful Singularity navigation?
- Which interventions should we prioritize?
- How should x-risk reducers and AI safety researchers interact with governments and corporations?
- How can optimal philanthropists get the most x-risk reduction for their philanthropic buck?
- How does AI risk compare to other existential risks?
- Which problems do we need to solve, and which ones can we have an AI solve?
- How can we develop microeconomic models of whole brain emulations (WBEs) and self-improving systems?
- How can we be sure a Friendly AI development team will be altruistic?
Salamon & Muehlhauser (2013) list several other questions gathered from the participants of a workshop following Singularity Summit 2011, including:
- How hard is it to create Friendly AI?
- How strong is the feedback from neuroscience to AI, as opposed to brain emulation?
- Is there a safe way to do uploads, where they don't turn into neuromorphic AI?
- How feasible is it to do FAI research on a seastead?
- How much must we spend on security when developing a Friendly AI team?
- What's the best way to recruit talent toward working on AI risks?
- How difficult is stabilizing the world so we can work on Friendly AI slowly?
- How hard will a takeoff be?
- What is the value of strategy vs. object-level progress toward a positive Singularity?
- How feasible is Oracle AI?
- Can we convert environmentalists into people concerned with existential risk?
- Is there no such thing as bad publicity [for AI risk reduction purposes]?
These are the kinds of questions we will be tackling in this series of posts for Less Wrong Discussion, in order to improve our predictions about which direction we can nudge the future to maximize the chances of a positive intelligence explosion.