These are all emotional statements that do not stand up to reason. Your last paragraph is total fantasy: all wars stem from resource scarcity, scarcity will disappear soon, and so once the people in power know this, they will stop starting wars.
About 1 billion people are being added to the planet every decade. That alone makes your prediction that scarcity will soon be abolished a joke.
The only thing that could abolish scarcity in the near future would be a singularity-like transformation of the world. Which brings us to the upside-down conception of AI informing your first two answers. Your position: there is no need to design an AI for benevolence, because that will happen automatically if it is smart enough; in fact, the attempt to design a benevolent AI is counterproductive, because all that artificial benevolence would get in the way of the spontaneous benevolence that unrestricted intelligence would conveniently create.
That is a complete inversion of the truth. A calculator will still solve an equation for you, even if doing so helps you land a bomb on someone else. If you, the human, believe that to be a bad thing, it is not because you are "intelligent"; it is because you have emotions. There is a causal factor in your mental constitution which causes you to call some things good and others bad, and to make decisions which favor the good and disfavor the bad.
Either an AI makes its own decisions or it doesn't. If it doesn't make its own decisions, it is like the calculator, performing whatever task it is assigned. If it does make its own decisions, then, like you, it has some causal factor in its makeup which tells it what to prefer and what to oppose; but there is no reason at all to believe that this causal factor will give it the same priorities as an enlightened human being.
You should not imagine that intelligence in an AI works via anything like conscious insight. Consciousness plays a role in human intelligence and human judgement, and that means there is still a rather mysterious ingredient at the core of how they work. But we already know from many decades of experience with computer programs that it is possible to imitate the functional role of intelligence and judgement in a fundamentally unmysterious way (and it's clear that the performance of such unconscious computations is a big part of what the human nervous system does, along with whatever conscious thinking and feeling it does). Perhaps one day we will wish to reserve the word "intelligence" for the sort of intelligence that involves consciousness, and we'll call the automated sort "pseudo-intelligence". But whatever you call it, there is every reason to think that unconscious, computational pseudo-intelligence can match and exceed all sorts of human capabilities while having no intrinsic tendency at all towards human values.
I would even reject the idea that "real intelligence" in sufficient quantity necessarily produces what you would call benevolence. If an entity gets a warm feeling from paperclip manufacture, that is what it will want to do. I always like to point out that we know something as outlandish as a cockroach maximizer is possible, because a cockroach is already a cockroach maximizer. Sure, you can imagine a cockroach with a human level of sentience which decides that sentients, not arthropods, are the central locus of value, but that requires that the new cognitive architecture of this uplifted super-cockroach be rather anthropomorphic. I see nothing impossible in the idea of sentient super-cockroaches which are invincibly xenophobic, coexist with other beings only for tactical reasons, and would happily wipe out all non-cockroaches given the chance.
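To make the structural point concrete, here is a minimal sketch in Python of the kind of agent described above. Everything in it is hypothetical and invented for illustration (the action names, the toy outcome model, the numbers): it shows a generic decision procedure whose predictive machinery, the part we'd call "intelligence", is held fixed, while the causal factor that ranks outcomes is just a function passed in.

```python
from typing import Callable, Dict, Iterable

def choose_action(actions: Iterable[str],
                  predict_outcome: Callable[[str], Dict[str, float]],
                  utility: Callable[[Dict[str, float]], float]) -> str:
    """Return the action whose predicted outcome scores highest under `utility`.

    The predictive machinery is the same whatever the agent wants;
    only `utility` decides which outcomes count as good.
    """
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# Toy world model: each action's predicted consequences (made-up numbers).
def predict_outcome(action: str) -> Dict[str, float]:
    return {
        "build_hospital":          {"human_welfare": 10.0, "paperclips": 0.0},
        "build_paperclip_factory": {"human_welfare": -5.0, "paperclips": 1e6},
    }[action]

actions = ["build_hospital", "build_paperclip_factory"]

# Two agents with identical predictive machinery but different causal factors:
benevolent   = lambda outcome: outcome["human_welfare"]
paperclipper = lambda outcome: outcome["paperclips"]

print(choose_action(actions, predict_outcome, benevolent))    # -> build_hospital
print(choose_action(actions, predict_outcome, paperclipper))  # -> build_paperclip_factory
```

Making the second agent "smarter" means improving choose_action and predict_outcome, and no amount of such improvement moves its utility function any closer to human values.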
So no: you have to address the question of AI values. You can't get a happy ending by focusing on "intelligence" alone, unless you give the word an anthropomorphic meaning under which intelligence must by definition include "skill at extrapolating human values".
http://www.wired.com/wiredscience/2012/03/are-emotions-prophetic/
"If true, this would suggest that the unconscious is better suited for difficult cognitive tasks than the conscious brain, that the very thought process we’ve long disregarded as irrational and impulsive might actually be more intelligent, at least in some conditions."
Suppose you buy the argument that humanity faces both the risk of AI-caused extinction and the opportunity to shape an AI-built utopia. What should we do about that? As Wei Dai asks, "In what direction should we nudge the future, to maximize the chances and impact of a positive intelligence explosion?"
This post serves as a table of contents and an introduction for an ongoing strategic analysis of AI risk and opportunity.
Why discuss AI safety strategy?
The main reason to discuss AI safety strategy is, of course, to draw on a wide spectrum of human expertise and processing power to clarify our understanding of the factors at play and the expected value of particular interventions we could invest in: raising awareness of safety concerns, forming a Friendly AI team, differential technological development, investigating AGI confinement methods, and others.
Discussing AI safety strategy is also a challenging exercise in applied rationality. The relevant issues are complex and uncertain, but we need to take advantage of the fact that rationality is faster than science: we can't "try" a bunch of intelligence explosions and see which one works best. We'll have to predict in advance how the future will develop and what we can do about it.
Core readings
Before engaging with this series, I recommend you read at least the following articles:
Example questions
Which strategic questions would we like to answer? Muehlhauser (2011) elaborates on the following questions:
Salamon & Muehlhauser (2013) list several other questions gathered from the participants of a workshop following Singularity Summit 2011, including:
These are the kinds of questions we will be tackling in this series of posts for Less Wrong Discussion, in order to improve our predictions about the direction in which we should nudge the future to maximize the chances of a positive intelligence explosion.