
[Link] - Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions

4 ete 28 January 2015 03:29PM

From a paper by Center for Technology and National Security Policy & National Defense University:

"Strong AI: Strong AI has been the holy grail of artificial intelligence research for decades. Strong AI seeks to build a machine which can simulate the full range of human cognition, and potentially include such traits as consciousness, sentience, sapience, and self-awareness. No AI system has so far come close to these capabilities; however, many now believe that strong AI may be achieved sometime in the 2020s. Several technological advances are fostering this optimism; for example, computer processors will likely reach the computational power of the human brain sometime in the 2020s (the so-called “singularity”). Other fundamental advances are in development, including exotic/dynamic processor architectures, full brain simulations, neuro-synaptic computers, and general knowledge representation systems such as IBM Watson. It is difficult to fully predict what such profound improvements in artificial cognition could imply; however, some credible thinkers have already posited a variety of potential risks related to loss of control of aspects of the physical world by human beings. For example, a 2013 report commissioned by the United Nations has called for a worldwide moratorium on the development and use of autonomous robotic weapons systems until international rules can be developed for their use.

National Security Implications: Over the next 10 to 20 years, robotics and AI will continue to make significant improvements across a broad range of technology applications of relevance to the U.S. military. Unmanned vehicles will continue to increase in sophistication and numbers, both on the battlefield and in supporting missions. Robotic systems can also play a wider range of roles in automating routine tasks, for example in logistics and administrative work. Telemedicine, robotic assisted surgery, and expert systems can improve military health care and lower costs. The built infrastructure, for example, can be managed more effectively with embedded systems, saving energy and other resources. Increasingly sophisticated weak AI tools can offload much of the routine cognitive or decisionmaking tasks that currently require human operators. Assuming current systems move closer to strong AI capabilities, they could also play a larger and more significant role in problem solving, perhaps even for strategy development or operational planning. In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries, although the political and social ramifications of such systems will likely be significant. One negative aspect of these trends, however, lies in the risks that are possible due to unforeseen vulnerabilities that may arise from the large scale deployment of smart automated systems, for which there is little practical experience. An emerging risk is the ability of small scale or terrorist groups to design and build functionally capable unmanned systems which could perform a variety of hostile missions."

So strong AI is on the American military's radar, and at least some of those involved have a basic understanding of the fact that it could be risky. The paper also contains brief overviews of many other potentially transformational technologies.

Any existential risk angles to the US presidential election?

-9 Stuart_Armstrong 20 September 2012 09:44AM

Don't let your minds be killed, but I was wondering if there were any existential risk angles to the coming American election (if there aren't, then I'll simply retreat to raw, enjoyable and empty tribalism).

I can see three (quite tenuous) angles:

  1. Obama seems more likely to attempt to get some sort of global warming agreement. While not directly related to Xrisks per se, this would lead to better global coordination and agreement, which improves the outlook for a lot of other Xrisks. However, it's pretty unlikely to succeed.
  2. I have a mental image that Republicans would be more likely to invest in space exploration. This is largely due to Newt Gingrich, I have to admit, and to the closeness between civilian and military space projects, the latter of which are more likely to get boosts under Republican governments.
  3. If we are holding out for increased population rationality as a helping factor for some Xrisks, then the fact that the Republicans have gone so strongly anti-science is certainly a bad sign. But on the other hand, it's not clear whether their winning or losing the election is more likely to improve the general environment for science among their supporters.

But these all seem weak factors. So, Less Wrongers, let me know: are there things I should care about in this election, or can I just lie back and enjoy it as a piece of interesting theatre?