Dear JoshuaZ, regarding this:
"Consider the uploaded individual that decides to turn the entire planet into computronium or worse, turn the solar system into a Matrioshka brain. People opt out of that how?"
I consider such a premise so unlikely as to be effectively impossible. It is a very silly premise, for three reasons.
First, destroying the entire planet when there is a whole universe full of matter is insane. If insane people exist in the post-intelligence-explosion upload-world, they will be dealt with, so they pose no danger; but insanity will in fact be impossible after an intelligence explosion, because insanity is a consequence of stupidity, and stupidity will be extinct in that future, so insanity will be extinct too.
Second, Earth-destructive actions are stupid: see the above explanation regarding insanity, which also covers why stupidity will be obsolete.
Third, people opt out by stating that they want to opt out. I'm sure an email will suffice.
"It isn't obvious to me that all wars stem from resource scarcity."
Sorry that it isn't obvious to you how scarcity causes war. I don't have time to explain, so I will leave you with some consensual validation from Ray Kurzweil, who seems to think the war-scarcity relationship is obvious:
"I've actually grown up with a history of scarcity — and wars and conflict come from scarcity — but information is quite the opposite of that." ~ Ray Kurzweil http://www.hollywoodreporter.com/risky-business/sxsw-2012-damon-lindelof-ray-kurzweil-297218
Suppose you buy the argument that humanity faces both the risk of AI-caused extinction and the opportunity to shape an AI-built utopia. What should we do about that? As Wei Dai asks, "In what direction should we nudge the future, to maximize the chances and impact of a positive intelligence explosion?"
This post serves as a table of contents and an introduction for an ongoing strategic analysis of AI risk and opportunity.
Contents:
Why discuss AI safety strategy?
The main reason to discuss AI safety strategy is, of course, to draw on a wide spectrum of human expertise and processing power to clarify our understanding of the factors at play and the expected value of particular interventions we could invest in: raising awareness of safety concerns, forming a Friendly AI team, differential technological development, investigating AGI confinement methods, and others.
Discussing AI safety strategy is also a challenging exercise in applied rationality. The relevant issues are complex and uncertain, but we need to take advantage of the fact that rationality is faster than science: we can't "try" a bunch of intelligence explosions and see which one works best. We'll have to predict in advance how the future will develop and what we can do about it.
Core readings
Before engaging with this series, I recommend you read at least the following articles:
Example questions
Which strategic questions would we like to answer? Muehlhauser (2011) elaborates on the following questions:
Salamon & Muehlhauser (2013) list several other questions gathered from the participants of a workshop following Singularity Summit 2011, including:
These are the kinds of questions we will be tackling in this series of posts for Less Wrong Discussion, in order to improve our predictions about which direction to nudge the future so as to maximize the chances of a positive intelligence explosion.