Please forgive the self-promotion but this is from Chapter 5 of my book Singularity Rising
"Successfully creating an obedient ultra-intelligence would give a country control of everything, making ultra-AI far more militarily useful than mere atomic weapons. The first nation to create an obedient ultra-AI would also instantly acquire the capacity to terminate its rivals’ AI development projects. Knowing the stakes, rival nations might go full throttle to win an ultra-AI race, even if they understood that haste could cause them to create a world destroying ultra-intelligence. These rivals might realize the danger and desperately wish to come to an agreement to reduce the peril, but they might find that the logic of the widely used game theory paradox of the Prisoners’ Dilemma thwarts all cooperation efforts."
"Scenario 2: Generals, I [The United States President] have ordered the CIA to try to penetrate the Chinese seed AI development program, but I’m not hopeful, since the entire program consists of only twenty software engineers. Similarly, although Chinese intelligence must be using all their resources to break into our development program, the small size of our prog...
There's a fourth way the first AGI might not be government-written: governments might overlook the potential until it's too late. It seems counter-intuitive to us, since we live in this idea-space, but most governments are still in the process of noticing the internet. There probably isn't anyone whose job it is to notice that uFAI is a risk, so it's entirely possible no one will.
As Paul Graham wrote in a different context:
Fortunately for startups, big companies are extremely good at denial. If you take the trouble to attack them from an oblique angle, they'll meet you half-way and maneuver to keep you in their blind spot.
Governments are even bigger, and even better at denial.
If this seems to be happening, we should probably encourage it.
I'm very uneasy as to how to properly discuss AI research:
"What? You want to include the evil Chinese in your CEV??"
It seems to me that a correctly implemented CEV including only Americans (or only Chinese) would lead to a significantly better outcome than an incorrectly implemented CEV.
If AI developers are sufficiently concerned about this risk, maybe they could develop AI in a large international consortium?
In general, scenarios of rapid change are especially scary when they involve military competition. The optimization process then proceeds without much regard for human values.
http://kajsotala.fi/2012/10/technology-will-destroy-human-nature/
I would rate international cooperation highest among my pet political causes for this reason.
governments would botch the process by not realizing the risks at hand.
To be fair, so would private companies and individuals.
It's also possible that governments would use the AI for malevolent, totalitarian purposes.
It's less likely IMO that a government would launch a completely independent top secret AI project with the explicit goal of "take over and optimize existence", relying on FOOMing and first-mover advantage.
More likely, an existing highly funded arm of the government - the military, the intelligence service, the homeland depa...
Did you really just publicly post an idea that has "should we discuss this idea publicly" as a major open question?
I had two private conversations first to ask whether I should make this post, and the general consensus was that it was net good to share on LW. It seems the upside to making the topic more widely discussed among ourselves exceeds the potential downside.
I should also note that it's not completely obvious whether making the idea widely known to governments is net bad -- maybe it would help curb Wild West development scenarios. But we should reach careful consensus on a decision like that before moving ahead.
Or more simply, war and the unilateralist's curse are bad. Other things being equal, we should avoid them. So I agree with promoting international cooperation. Exceptions might be if a rogue state were about to do something worse than war, or if the world were already unified on Vital Issues. Neither of these seems very compelling now.
Then there's that whole AI thing...
I think there's a decent chance that governments will be the first to build artificial general intelligence (AI).
Have governments historically been good at developing innovative software? Last I heard they were having trouble with CRUD websites. Just sayin'.
I guess it'd probably be better to look at DARPA's track record in particular.
Have governments historically been good at developing innovative software?
Don't forget that ARPA invented the internet, DARPA funds Boston Dynamics, the NSA was (and possibly still is) ahead of everyone else at crypto-tech, etc.
Governments are the richest entities, which lets them hire the smartest people.
Beyond a certain dollar amount, smart people typically start caring about other things: what they're working towards, how smart their co-workers are, etc. I'd expect that many or most top software engineers would prefer to work at Google doing good for the world for $150K than at the US government doing bad for the world for $200K.
whoever builds the first AI can take over the world, which makes building AI the ultimate arms race.
As the Wikipedians often say, "citation needed". The first "AI" was built decades ago. It evidently failed to "take over the world". Possibly someday a machine will take over the world - but it may not be the first one built.
Well, nice to know we're planning our global thermonuclear wars decades before there's any sign we'll need a global thermonuclear war for any good reason.
Goddamnit, do you people just like plotting wars!?
Summary
I think there's a decent chance that governments will be the first to build artificial general intelligence (AI). International hostility, especially an AI arms race, could exacerbate risk-taking, hostile motivations, and errors of judgment when creating AI. If so, then international cooperation could be an important factor to consider when evaluating the flow-through effects of charities. That said, we may not want to popularize the arms-race consideration too openly lest we accelerate the race.
Will governments build AI first?
AI poses a national-security threat, and unless the militaries of powerful countries are very naive, it seems to me unlikely they'd allow AI research to proceed in private indefinitely. At some point the US military would confiscate the project from Google or Goldman Sachs, if the US military isn't already ahead of them in secret by that point. (DARPA already funds a lot of public AI research.)
There are some scenarios in which private AI research wouldn't be nationalized:
It seems that both of these bad scenarios would be exacerbated by international conflict. Greater hostility means countries are more inclined to use AI as a weapon. Indeed, whoever builds the first AI can take over the world, which makes building AI the ultimate arms race. A USA-China race is one reasonable possibility.
Arms races encourage risk-taking -- being willing to skimp on safety measures to improve your odds of winning ("Racing to the Precipice"). In addition, the weaponization of AI could lead to worse expected outcomes in general. CEV seems to have less hope of success in a Cold War scenario. ("What? You want to include the evil Chinese in your CEV??") (ETA: With a pure CEV, presumably it would eventually count Chinese values even if it started with just Americans, because people would become more enlightened during the process. However, when we imagine more crude democratic decision outcomes, this becomes less likely.)
Ways to avoid an arms race
Averting an AI arms race seems to be an important topic for research. It could be partly informed by the Cold War and other nuclear arms races, as well as by other efforts at nonproliferation of chemical and biological weapons.
Apart from more robust arms control, other factors might help:
Are these efforts cost-effective?
World peace is hardly a goal unique to effective altruists (EAs), so we shouldn't necessarily expect low-hanging fruit. On the other hand, projects like nuclear nonproliferation seem relatively underfunded even compared with anti-poverty charities.
I suspect more direct MIRI-type research has higher expected value, but among EAs who don't want to fund MIRI specifically, encouraging donations toward international cooperation could be valuable, since it's certainly a more mainstream cause. I wonder if GiveWell would consider studying global cooperation specifically beyond its indirect relationship with catastrophic risks.
Should we publicize AI arms races?
When I mentioned this topic to a friend, he pointed out that we might not want the idea of AI arms races too widely known, because then governments might take the concern more seriously and therefore start the race earlier -- giving us less time to prepare and less time to work on FAI in the meanwhile. From David Chalmers, "The Singularity: A Philosophical Analysis" (footnote 14):
When I discussed these issues with cadets and staff at the West Point Military Academy, the question arose as to whether the US military or other branches of the government might attempt to prevent the creation of AI or AI+, due to the risks of an intelligence explosion. The consensus was that they would not, as such prevention would only increase the chances that AI or AI+ would first be created by a foreign power. One might even expect an AI arms race at some point, once the potential consequences of an intelligence explosion are registered. According to this reasoning, although AI+ would have risks from the standpoint of the US government, the risks of Chinese AI+ (say) would be far greater.