The only evidence I have is my own perception of the world, based on my life experience and extensive awareness of living. I am not trying to prove anything; I'm merely throwing my thoughts out there. You can conclude that my thoughts make sense or that they don't. I think it is unintelligent to join the army, but is my opinion correct? Personally, I think it is stupid to die. People may agree that my survival-based definition of intelligence is correct, or they may think death can be intelligent, such as the deaths of soldiers.
What type of evidence could prove that "well-educated" army officers are actually dim-witted fools? Perhaps, via the interconnectedness of causation, it could be demonstrated how military action causes immense suffering for many innocent people, thereby harming everyone, because the world becomes a more hostile place than a hypothetical world where all potential conflict was resolved intelligently via peaceful methods. The military budget detracts from the science budget, so scientific progress is perhaps delayed; I do recognise that the military invests in sci-tech development, but I think the investment would be greater if our world was not based on conflict. In a world where people don't fight, there would be no need for secrecy, thus greater collaboration on scientific endeavours, thus quicker progress. Anyone supporting the army could therefore be delaying progress in a small way, and so officers are stupid, because it is stupid to delay progress.
The intelligent thing is for me to draw my input into this debate to a close because it is becoming exceptionally painful for me.
You should study more game theory.
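The game-theoretic point behind this reply can be made concrete with the classic Prisoner's Dilemma: mutual cooperation (e.g. universal disarmament) is better for everyone than mutual conflict, yet each party's best response to any fixed choice by the other is to defect. The payoff numbers below are the standard illustrative values, not anything from this thread; this is a minimal sketch of the dilemma, not an argument for either side.

```python
# Prisoner's Dilemma sketch with standard illustrative payoffs.
# Entries map (row_action, column_action) -> (row_payoff, column_payoff),
# where "C" = cooperate and "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: good for both
    ("C", "D"): (0, 5),  # row cooperates alone and is exploited
    ("D", "C"): (5, 0),  # row exploits a lone cooperator
    ("D", "D"): (1, 1),  # mutual defection: bad for both
}

def best_response(opponent_action):
    """Return the row player's payoff-maximizing action against a fixed opponent action."""
    return max(["C", "D"], key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defection dominates: it is the best response whether the opponent
# cooperates or defects...
assert best_response("C") == "D"
assert best_response("D") == "D"

# ...even though mutual defection (1, 1) leaves both players worse off
# than mutual cooperation (3, 3).
print(PAYOFFS[("D", "D")], PAYOFFS[("C", "C")])
```

This is why "everyone would be better off if no one fought" does not by itself show that any individual participant is acting unintelligently: without a mechanism to enforce cooperation, defection can be each party's rational choice.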
Suppose you buy the argument that humanity faces both the risk of AI-caused extinction and the opportunity to shape an AI-built utopia. What should we do about that? As Wei Dai asks, "In what direction should we nudge the future, to maximize the chances and impact of a positive intelligence explosion?"
This post serves as a table of contents and an introduction for an ongoing strategic analysis of AI risk and opportunity.
Contents:
Why discuss AI safety strategy?
The main reason to discuss AI safety strategy is, of course, to draw on a wide spectrum of human expertise and processing power to clarify our understanding of the factors at play and the expected value of particular interventions we could invest in: raising awareness of safety concerns, forming a Friendly AI team, differential technological development, investigating AGI confinement methods, and others.
Discussing AI safety strategy is also a challenging exercise in applied rationality. The relevant issues are complex and uncertain, but we need to take advantage of the fact that rationality is faster than science: we can't "try" a bunch of intelligence explosions and see which one works best. We'll have to predict in advance how the future will develop and what we can do about it.
Core readings
Before engaging with this series, I recommend you read at least the following articles:
Example questions
Which strategic questions would we like to answer? Muehlhauser (2011) elaborates on the following questions:
Salamon & Muehlhauser (2013) list several other questions gathered from the participants of a workshop following Singularity Summit 2011, including:
These are the kinds of questions we will tackle in this series of posts for Less Wrong Discussion, in order to improve our predictions about the direction in which to nudge the future to maximize the chances of a positive intelligence explosion.