Suppose you buy the argument that humanity faces both the risk of AI-caused extinction and the opportunity to shape an AI-built utopia. What should we do about that? As Wei Dai asks, "In what direction should we nudge the future, to maximize the chances and impact of a positive intelligence explosion?"
This post serves as a table of contents and an introduction for an ongoing strategic analysis of AI risk and opportunity.
Contents:
- Introduction (this post)
- Humanity's Efforts So Far
- A Timeline of Early Ideas and Arguments
- Questions We Want Answered
- Strategic Analysis Via Probability Tree
- Intelligence Amplification and Friendly AI
- ...
Why discuss AI safety strategy?
The main reason to discuss AI safety strategy is, of course, to draw on a wide spectrum of human expertise and processing power to clarify our understanding of the factors at play and the expected value of particular interventions we could invest in: raising awareness of safety concerns, forming a Friendly AI team, differential technological development, investigating AGI confinement methods, and others.
Discussing AI safety strategy is also a challenging exercise in applied rationality. The relevant issues are complex and uncertain, but we need to take advantage of the fact that rationality is faster than science: we can't "try" a bunch of intelligence explosions and see which one works best. We'll have to predict in advance how the future will develop and what we can do about it.
Core readings
Before engaging with this series, I recommend you read at least the following articles:
- Muehlhauser & Salamon, Intelligence Explosion: Evidence and Import (2013)
- Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk (2008)
- Chalmers, The Singularity: A Philosophical Analysis (2010)
Example questions
Which strategic questions would we like to answer? Muehlhauser (2011) elaborates on the following questions:
- What methods can we use to predict technological development?
- Which kinds of differential technological development should we encourage, and how?
- Which open problems are safe to discuss, and which are potentially dangerous?
- What can we do to reduce the risk of an AI arms race?
- What can we do to raise the "sanity waterline," and how much will this help?
- What can we do to attract more funding, support, and research to x-risk reduction and to specific sub-problems of successful Singularity navigation?
- Which interventions should we prioritize?
- How should x-risk reducers and AI safety researchers interact with governments and corporations?
- How can optimal philanthropists get the most x-risk reduction for their philanthropic buck?
- How does AI risk compare to other existential risks?
- Which problems do we need to solve, and which ones can we have an AI solve?
- How can we develop microeconomic models of WBEs and self-improving systems?
- How can we be sure a Friendly AI development team will be altruistic?
Salamon & Muehlhauser (2013) list several other questions gathered from the participants of a workshop following Singularity Summit 2011, including:
- How hard is it to create Friendly AI?
- What is the strength of feedback from neuroscience to AI rather than brain emulation?
- Is there a safe way to do uploads, where they don't turn into neuromorphic AI?
- How possible is it to do FAI research on a seastead?
- How much must we spend on security when developing a Friendly AI team?
- What's the best way to recruit talent toward working on AI risks?
- How difficult is stabilizing the world so we can work on Friendly AI slowly?
- How hard will a takeoff be?
- What is the value of strategy vs. object-level progress toward a positive Singularity?
- How feasible is Oracle AI?
- Can we convert environmentalists into people concerned with existential risk?
- Is there no such thing as bad publicity [for AI risk reduction purposes]?
These are the kinds of questions we will be tackling in this series of posts for Less Wrong Discussion, in order to improve our predictions about which direction we can nudge the future to maximize the chances of a positive intelligence explosion.
Friendly AI is incredibly hard to get right, and a Friendly AI that is not quite friendly could create a living hell for the rest of time, increasing negative utility dramatically.
I vote for antinatalism. We should seriously consider creating a true paperclip maximizer that transforms the universe into an inanimate state devoid of suffering. Friendly AI is simply too risky.
I think that humans are not psychologically equal. Not only are there many outliers, but most humans would turn into abhorrent creatures if given their own pocket universe, unlimited power, and a genie. Even in our current world, if we were to remove the huge memeplex of Western civilization, most people would act like stone-age hunter-gatherers. And that would be bad enough: violence is the major cause of death within stone-age societies.
Even proposals like CEV (Coherent Extrapolated Volition) could result in a living hell for a percentage of all beings. I don't expect any amount of knowledge, or intelligence, to cause humans to abandon their horrible preferences.
Eliezer Yudkowsky says that intelligence does not imply benevolence; that an artificial general intelligence won't simply turn out to be friendly; that we have to make it friendly. Yet his best proposal is that humanity would do what is right if we knew more, thought faster, were more the people we wished we were, and had grown up farther together. The idea is that knowledge and intelligence imply benevolence for people. I don't think so.
The problem is that when you extrapolate chaotic systems, e.g. human preferences under real-world influences, small differences in initial conditions yield widely diverging outcomes. That our extrapolated volition converges rather than diverges seems like a bold prediction.
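As an aside, the divergence point can be illustrated with a minimal sketch in Python (my own illustration, not from the original comment), using the logistic map as a stand-in for any chaotic extrapolation process; the map, parameters, and starting values are all assumptions chosen only to show sensitivity to initial conditions.

```python
# Minimal sketch: tiny differences in initial conditions diverge under
# repeated extrapolation. The logistic map at r = 4 is a standard example
# of a chaotic system; it stands in here for any chaotic "extrapolation".

def logistic_step(x, r=4.0):
    """One step of the logistic map x -> r * x * (1 - x), chaotic at r = 4."""
    return r * x * (1 - x)

def extrapolate(x0, steps=50):
    """Iterate the map `steps` times starting from x0."""
    x = x0
    for _ in range(steps):
        x = logistic_step(x)
    return x

# Two starting "volitions" that agree to ten decimal places...
a, b = 0.2000000000, 0.2000000001
print(extrapolate(a), extrapolate(b))  # ...end up nowhere near each other.
```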
I just don't see that a paperclip maximizer burning the cosmic commons is as bad as it is currently portrayed. Sure, it is "bad". But everything else might be much worse.
Here is a question for those who think that antinatalism is just stupid. Would you be willing to rerun the history of the universe to obtain the current state? Would you be willing to create another Genghis Khan and another Holocaust by allowing intelligent life to evolve all over again?
As Greg Egan wrote: "To get from micro-organisms to intelligent life this way would involve an immense amount of suffering, with billions of sentient creatures living, struggling and dying along the way."
If you are not willing to do that, then why are you willing to do the same now, just for much longer, by trying to colonize the universe? Are you so sure that the time to come will be much better? How sure are you?
ETA
I expect any AI outcome that fails to be friendly in some respect to increase negative utility; only a perfectly "friendly" AI (whatever that means; it is still questionable whether the whole idea makes sense) would yield a positive-utility outcome.
That is because the closer a given AGI design is to friendliness, the more likely it is that humans will be kept alive, yet might suffer. An unfriendly AI in complete ignorance of human values will more likely just see humans as a material resource, without any particular incentive to keep them around.
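To make that comparison explicit, here is a toy expected-value sketch. Every probability and utility in it is invented purely for illustration (my own assumptions, not the commenter's numbers); the point is only the structure of the argument: a near miss keeps humans alive in a possibly terrible state, while an indifferent AI more likely just ends them.

```python
# Toy expected-value sketch with made-up numbers (illustration only):
# the claim is that a "near miss" at Friendly AI may be worse in expectation
# than an indifferent unfriendly AI, because the near miss keeps humans
# alive in a possibly hellish state.

outcomes = {
    # name: (probability humans are kept alive, utility if kept alive, utility if not)
    "near-miss friendly AI": (0.9, -1_000_000, 0),       # survival likely, possibly hellish
    "indifferent unfriendly AI": (0.01, -1_000_000, 0),  # humans just raw material
}

for name, (p_alive, u_alive, u_gone) in outcomes.items():
    expected = p_alive * u_alive + (1 - p_alive) * u_gone
    print(f"{name}: expected utility = {expected:,.0f}")
# Under these assumed numbers the near miss comes out far worse in expectation.
```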
Just imagine a friendly AI which fails to "understand" or care about human boredom.
There are several ways in which SIAI could actually cause a direct increase in negative utility.
1) Friendly AI is incredibly hard and complex. Complex systems can fail in complex ways. Agents that are a product of evolution have complex values. To satisfy complex values you need to meet complex circumstances. Therefore any attempt at Friendly AI, which is incredibly complex, is likely to fail in unforeseeable ways. A half-baked, not-quite-friendly AI might create a living hell for the rest of time, increasing negative utility dramatically.
2) Humans are not provably friendly. Given the power to shape the universe, the SIAI might fail to act altruistically and might deliberately implement an AI with selfish motives or horrible strategies.
"Ladies and gentlemen, I believe this machine could create a living hell for the rest of time..."
(audience yawns, people look at their watches)
"...increasing negative utility dramatically!"
(shocked gasps, audience riots)