Summary

I think there's a decent chance that governments will be the first to build artificial general intelligence (AI). International hostility, especially an AI arms race, could exacerbate risk-taking, hostile motivations, and errors of judgment when creating AI. If so, then international cooperation could be an important factor to consider when evaluating the flow-through effects of charities. That said, we may not want to popularize the arms-race consideration too openly lest we accelerate the race.

Will governments build AI first?

AI poses a national-security threat, and unless the militaries of powerful countries are very naive, it seems to me unlikely they'd allow AI research to proceed in private indefinitely. At some point the US military would confiscate the project from Google or Goldman Sachs, if the US military isn't already ahead of them in secret by that point. (DARPA already funds a lot of public AI research.)

There are some scenarios in which private AI research wouldn't be nationalized:

  • An unexpected AI foom before anyone realizes what is coming.
  • The private developers stay underground for long enough not to be caught. This becomes less likely the more government surveillance improves (see "Arms Control and Intelligence Explosions").
  • AI developers move to a "safe haven" country where they can't be taken over. (It seems like the international community might prevent this, however, in the same way it now seeks to suppress terrorism in other countries.)

Each of these scenarios could happen, but it seems most likely to me that governments would ultimately control AI development.

AI arms races

Government AI development could go wrong in several ways. Probably most people on LW expect the prevailing scenario to be that governments would botch the process by not realizing the risks at hand. It's also possible that governments would use the AI for malevolent, totalitarian purposes.

It seems that both of these bad scenarios would be exacerbated by international conflict. Greater hostility means countries are more inclined to use AI as a weapon. Indeed, whoever builds the first AI can take over the world, which makes building AI the ultimate arms race. A USA-China race is one reasonable possibility.

Arms races encourage risk-taking -- being willing to skimp on safety measures to improve your odds of winning ("Racing to the Precipice"). In addition, the weaponization of AI could lead to worse expected outcomes in general. CEV seems to have less hope of success in a Cold War scenario. ("What? You want to include the evil Chinese in your CEV??") (ETA: With a pure CEV, presumably it would eventually count Chinese values even if it started with just Americans, because people would become more enlightened during the process. However, when we imagine more crude democratic decision outcomes, this becomes less likely.)

Ways to avoid an arms race

Averting an AI arms race seems to be an important topic for research. It could be partly informed by the Cold War and other nuclear arms races, as well as by other efforts at nonproliferation of chemical and biological weapons.

Apart from more robust arms control, other factors might help:

  • Improved international institutions like the UN, allowing for better enforcement against defection by one state.
  • In the long run, a scenario of global governance (i.e., a Leviathan or singleton) would likely be ideal for strengthening international cooperation, just like nation states reduce intra-state violence.
  • Better construction and enforcement of nonproliferation treaties.
  • Improved game theory and international-relations scholarship on the causes of arms races and how to avert them. (For instance, arms races have sometimes been modeled as iterated prisoner's dilemmas with imperfect information; see the sketch after this list.)
  • Improved verification, which has historically been a weak point for nuclear arms control. (The concern is that if you haven't verified well enough, the other side might be arming while you're not.)
  • Moral tolerance and multicultural perspective, aiming to reduce people's sense of nationalism. (In the limit where neither Americans nor Chinese cared which government won the race, there would be no point in having the race.)
  • Improved trade, democracy, and other forces that historically have reduced the likelihood of war.
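
To make the bullet on game-theory modeling concrete, here is a minimal sketch of an arms race as an iterated prisoner's dilemma with imperfect information about the other side's behavior. The payoff numbers and the tit-for-tat strategy are illustrative assumptions, not anything taken from the sources cited above; the point is just that the value of mutual restraint depends heavily on how reliable verification is.

```python
import random

# Hypothetical payoffs (higher is better): mutual restraint beats mutual racing,
# but secretly racing while the other side restrains is individually tempting --
# the standard prisoner's-dilemma structure.
PAYOFF = {                      # (my move, their move) -> my payoff
    ("restrain", "restrain"): 3,
    ("race", "restrain"): 5,
    ("restrain", "race"): 0,
    ("race", "race"): 1,
}

def observe(actual_move, verification_accuracy):
    # Imperfect information: verification sometimes misreads the other side's move.
    if random.random() < verification_accuracy:
        return actual_move
    return "race" if actual_move == "restrain" else "restrain"

def simulate(rounds=10_000, verification_accuracy=0.9, seed=0):
    # Both states play tit-for-tat on what they *think* the other did last round.
    random.seed(seed)
    a_sees, b_sees = "restrain", "restrain"     # start out trusting
    total = 0
    for _ in range(rounds):
        a_move, b_move = a_sees, b_sees         # copy the opponent's observed move
        total += PAYOFF[(a_move, b_move)]
        a_sees = observe(b_move, verification_accuracy)
        b_sees = observe(a_move, verification_accuracy)
    return total / rounds

for acc in (0.99, 0.9, 0.7):
    print(f"verification accuracy {acc:.2f}: average payoff {simulate(verification_accuracy=acc):.2f}")
```

In this toy model, better verification keeps the players out of long retaliation spirals triggered by misread moves, which is the game-theoretic version of the verification point in the list above.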

Are these efforts cost-effective?

World peace is hardly a goal unique to effective altruists (EAs), so we shouldn't necessarily expect low-hanging fruit. On the other hand, projects like nuclear nonproliferation seem relatively underfunded even compared with anti-poverty charities.

I suspect more direct MIRI-type research has higher expected value, but among EAs who don't want to fund MIRI specifically, encouraging donations toward international cooperation could be valuable, since it's certainly a more mainstream cause. I wonder if GiveWell would consider studying global cooperation specifically beyond its indirect relationship with catastrophic risks.

Should we publicize AI arms races?

When I mentioned this topic to a friend, he pointed out that we might not want the idea of AI arms races to become too widely known, because then governments might take the concern more seriously and therefore start the race earlier -- giving us less time to prepare and less time to work on FAI in the meantime. From David Chalmers, "The Singularity: A Philosophical Analysis" (footnote 14):

When I discussed these issues with cadets and staff at the West Point Military Academy, the question arose as to whether the US military or other branches of the government might attempt to prevent the creation of AI or AI+, due to the risks of an intelligence explosion. The consensus was that they would not, as such prevention would only increase the chances that AI or AI+ would first be created by a foreign power. One might even expect an AI arms race at some point, once the potential consequences of an intelligence explosion are registered. According to this reasoning, although AI+ would have risks from the standpoint of the US government, the risks of Chinese AI+ (say) would be far greater.

We should take this information-hazard concern seriously and remember the unilateralist's curse. If it proves to be fatal for explicitly discussing AI arms races, we might instead encourage international cooperation without explaining why. Fortunately, it wouldn't be hard to encourage international cooperation on grounds other than AI arms races if we wanted to do so.

ETA: Also note that a government-level arms race might be preferable to a Wild West race among a dozen private AI developers where coordination and compromise would be not just difficult but potentially impossible.

Comments

Please forgive the self-promotion, but this is from Chapter 5 of my book Singularity Rising.

"Successfully creating an obedient ultra-intelligence would give a country control of everything, making ultra-AI far more militarily useful than mere atomic weapons. The first nation to create an obedient ultra-AI would also instantly acquire the capacity to terminate its rivals’ AI development projects. Knowing the stakes, rival nations might go full throttle to win an ultra-AI race, even if they understood that haste could cause them to create a world destroying ultra-intelligence. These rivals might realize the danger and desperately wish to come to an agreement to reduce the peril, but they might find that the logic of the widely used game theory paradox of the Prisoners’ Dilemma thwarts all cooperation efforts."

"Scenario 2: Generals, I [The United States President] have ordered the CIA to try to penetrate the Chinese seed AI development program, but I’m not hopeful, since the entire program consists of only twenty software engineers. Similarly, although Chinese intelligence must be using all their resources to break into our development program, the small size of our prog... (read more)

1Brian_Tomasik10y
Thanks, James! Yes, things could get ugly. :(

There's a fourth possibility for how the first AGI won't be government-written: governments might overlook the potential until it's too late. It seems counter-intuitive to us, since we live in this idea-space, but most governments are still in the process of noticing the internet. There probably isn't someone whose job it is to notice that uFAI is a risk, so it's entirely possible no one will.

As Paul Graham wrote in a different context:

Fortunately for startups, big companies are extremely good at denial. If you take the trouble to attack them from an oblique angle, they'll meet you half-way and maneuver to keep you in their blind spot.

Governments are even bigger, and even better at denial.

If this seems to be happening, we should probably encourage it.

5Brian_Tomasik10y
Hmm, it seems like government AI development might be preferable to a Wild West of private groups. At least in a US-China arms race you have just two parties and so have a shot at treaties and iterated-prisoner's-dilemma (IPD) game dynamics. With unregulated private developers, you have a multiplayer prisoner's dilemma, making IPD-type cooperation or other forms of coordination much harder.
4dspeyer10y
True, but governments have some really scary terminal values.
1hyporational10y
What are they, how do you know them, and how certain are you?
3DanArmak10y
I don't know about most governments, but at least some governments are well into the process of achieving full control of the Internet and are also using it to do things they couldn't do before it existed.

I'm very uneasy as to how to properly discuss AI research:

  • One can't warn of the dangers of AI without bragging about its power. Will the warning increase or decrease the probability of UAI?
  • One can't advise responsible people not to attempt to make an AI without increasing the risk that the first AI will be made by someone irresponsible. But what are the chances that an AI made with good intentions destroys humanity anyway?
  • AI research seems to correspond with a prisoner's dilemma, so I wouldn't expect cooperation.
  • I don't know whether it is a better
...
2Brian_Tomasik10y
These are tricky issues. :) Fortunately, many real-world scenarios are iterated prisoner's dilemmas (e.g., moving ahead with your country's AI research faster than what was agreed upon). We can also set up side payments against defection, such as by an international governing body. And changing people's views about the payoffs (such as by encouraging an internationalist outlook) could make the game no longer a prisoner's dilemma. In general, this highlights the importance of improving theory of, institutions for, and inclinations toward compromise.
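
As a toy illustration of the side-payments point (the numbers are invented for the example): if an international body can detect defection often enough and penalize it heavily enough, the one-shot game stops being a prisoner's dilemma, because racing no longer dominates restraint.

```python
# Hypothetical one-shot payoffs for the row player, before enforcement:
# racing strictly dominates restraint, so this is a prisoner's dilemma.
base = {("restrain", "restrain"): 3, ("race", "restrain"): 5,
        ("restrain", "race"): 0, ("race", "race"): 1}

def best_response(payoff, their_move):
    return max(("restrain", "race"), key=lambda my_move: payoff[(my_move, their_move)])

# Assume an international body detects racing with probability 0.8 and fines it 4 units.
detection, fine = 0.8, 4
enforced = {(me, them): value - detection * fine * (me == "race")
            for (me, them), value in base.items()}

for label, payoff in (("no enforcement", base), ("with enforcement", enforced)):
    print(label,
          "| best response to restraint:", best_response(payoff, "restrain"),
          "| best response to racing:", best_response(payoff, "race"))
```

With the penalty in place, restraint is the best response no matter what the other side does, so mutual restraint becomes the stable outcome.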
[anonymous]10y40

"What? You want to include the evil Chinese in your CEV??"

It seems to me that a correctly implemented CEV including only Americans (or only Chinese) would lead to a significantly better outcome than an incorrectly implemented CEV.

3DanArmak10y
It also seems to me that a correctly implemented CEV including only myself and a few friends and trusted figures of authority would lead to a much better outcome than a CEV including all Americans or all Chinese.
1Brian_Tomasik10y
Could be, although remember that everyone else would also prefer for just them, their friends, and their trusted figures to be in CEV. Including more people is for reasons of compromise, not necessarily intrinsic value. Isaac_Davis made a good point that a true CEV might not depend that sensitively on what country it was seeded from. The bigger danger I had in mind would be the (much more likely) outcome of imperfect CEV, such as regular democracy. In that case, excluding the Chinese could lead to more parochial outcomes, and the Chinese would then also have more reason to worry about a US AI.
1DanArmak10y
That's my point. If you're funding a small-team top secret AGI project, you can keep your seed community small too; you don't need to compromise. Especially if you're consciously racing to finish your project before any rivals, you won't want to include those rivals in your CEV.
0Brian_Tomasik10y
Well, what does that imply your fellow prisoners in this one-shot prisoner's dilemma are deciding to do in the secrecy of their basements? Maybe our best bet is to change the payoffs so that we get a different game than a one-shot PD, via explicit coordination agreements and surveillance to enforce them.
5DanArmak10y
The surveillance would have to be good enough to prevent all attempts made by the most powerful governments to develop in secret something that may (eventually) require nothing beyond a few programmers in a few rooms running code.
2Brian_Tomasik10y
This is a real issue. Verifying compliance with AI-limitation agreements is much harder than with nuclear agreements, and already those have issues. Carl's paper suggests lie detection and other advanced transparency measures as possibilities, but it's unclear if governments will tolerate this even when the future of the galaxy is at stake.
1Brian_Tomasik10y
Good point. :) With a pure CEV, it might converge to roughly the same thing. Where it could matter more is with a much more crude form of democracy determining the AI's values. Also, if you're in a hurry to get the AI out the door before the other guy does, you don't have a lot of time for CEV or even for regular democratically made choices.

If AI developers are sufficiently concerned about this risk, maybe they could develop AI in a large international consortium?

2oooo10y
How much would AI developers be willing to sacrifice? They may be sufficiently concerned about this risk, as explained, but motivated and well-funded organizations (or governments) should have no problem attempting to influence, persuade, or convert a fraction of AI developers to think otherwise. I wonder if global climate change can be used as an analogy, highlighting what some climate scientists are willing to publish due to funding and/or other incentives beyond scientific inquiry.
[anonymous]10y20

In general, scenarios of rapid change are especially scary when they involve military competition. The optimization process then proceeds without much regard for human values.

http://kajsotala.fi/2012/10/technology-will-destroy-human-nature/

I would rate international cooperation highest among my pet political causes for this reason.

[This comment is no longer endorsed by its author]

governments would botch the process by not realizing the risks at hand.

To be fair, so would private companies and individuals.

It's also possible that governments would use the AI for malevolent, totalitarian purposes.

It's less likely IMO that a government would launch a completely independent top secret AI project with the explicit goal of "take over and optimize existence", relying on FOOMing and first-mover advantage.

More likely, an existing highly funded arm of the government - the military, the intelligence service, the homeland depa...

0Brian_Tomasik10y
Yes, perhaps more so. :) The main point in the post was that risks of botching the process increase in a competitive scenario where you're pressed for time.

Did you really just publicly post an idea that has "should we discuss this idea publicly" as a major open question?

0Kawoomba10y
It's a post on LW, not a concerted effort to publicize said ideas in the respective government circles. The quip (if you meant it as such) may appear obvious, but is inapt on a second-order approximation.

I had two private conversations first to ask whether I should make this post, and the general consensus was that it was net good to share on LW. It seems the upside to making the topic more widely discussed among ourselves exceeds the potential downside.

I should also note that it's not completely obvious if making the idea widely known to governments is net bad -- maybe this would help curb Wild West development scenarios. But we should get careful consensus on a decision like that before moving ahead.

[anonymous]10y00

[Edited]

Or more simply, war and the unilateralist's curse are bad. Other things being equal, we should avoid these. So I agree with promoting international cooperation. Exceptions might be if a rogue state was about to do something worse than war or the world was unified already on Vital Issues. Neither of these seem that compelling now.

Then there's that whole AI thing...

I think there's a decent chance that governments will be the first to build artificial general intelligence (AI).

Have governments historically been good at developing innovative software? Last I heard they were having trouble with CRUD websites. Just sayin'.

I guess it'd probably be better to look at DARPA's track record in particular.

Have governments historically been good at developing innovative software?

Don't forget that ARPA invented the internet, DARPA funds Boston Dynamics, the NSA was (and possibly still is) ahead of everyone else at crypto-tech, etc.

4DanArmak10y
Governments are the richest entities, which lets them hire the smartest people. And they are the most powerful entities, which lets them stop rivals and nationalize private research efforts. They are also the best-informed entities on many subjects.

Governments are the richest entities, which lets them hire the smartest people.

Beyond a certain dollar amount, it seems that smart people typically start caring about other stuff like what they're working towards, how smart their co-workers are, etc. I'd expect that many/most top software engineers would rather work at Google doing good for the world making $150K than at the US government doing bad for the world making $200K.

4DanArmak10y
This is a good point, but I'm not sure how much of that is driven by "doing good for the world" and how much by "working at Google"; so governments might try to use private contractors. Also, it's not entirely obvious that the average Google project improves the world more than the average government program (that requires top programmers).
-2EGarrett10y
Governments don't earn their money through market savvy, so they tend to lack the experience and skill to recognize talent when they see it. Without that, it becomes very difficult to hire the most capable people, even when you have large amounts of money.
2EHeller10y
A theoretical argument, but does it hold empirically? In my experience, the most capable scientists all work for government organizations.
-2EGarrett10y
Hi Heller. How are we determining that they are the most capable? I feel like there are many ways to measure, and I think science includes the type of studying that allows people to create good websites and computer software and hardware, and good music and movies and so on (I think all these things can be researched formally). With that in mind, I'd say that the most capable science is being done by private companies. Hopefully that makes sense. This isn't intended to muddle the term "scientist."
3EHeller10y
That's so incredibly broad as to be a useless definition of "scientist." Let's use "scientist" to mean someone engaged in basic research oriented around the natural world (as opposed to an engineer involved in more applied research). My categorization isn't perfect, but your grouping puts musicians, actors, programmers, actuaries, engineers, etc. all into an umbrella category of "scientist." Most fundamental research happens at public institutions under public grants (even the private institutions get massive public subsidy). Also, as a matter of public-goods economic theory, we would expect private institutions to systematically underinvest in basic research.
-2EGarrett10y
I'm not suggesting that everyone who does those things is a scientist, but that those things CAN be studied scientifically. For example, not all singers are scientists, but the people who created auto-tune probably did so through scientific research, and, at least in an objective note-matching sense, it makes singers better.
1Brian_Tomasik10y
Take weapons systems as an example. Few would claim that the government has been a failure at building nuclear arsenals, conventional-weapons fleets, remote weapons-control systems, etc. Of course, it may do so inefficiently, and the US military may sometimes perform poorly (e.g., Vietnam, Iraq), but on the whole nobody in the world would dare go up against it. The same could be true for a government AGI.
1EGarrett10y
The US military has certainly developed some extremely powerful weapons. But as you said, and I agree, we have to ask whether it was done more efficiently or capably than a market would've produced, and I'm not sure if there's a good example to use for weapons development. Maybe we should look at government space programs compared to private spaceflight?
0Brian_Tomasik10y
Yes, companies can sometimes produce technologies at lower cost. But my thinking is that when the technology is as much of a security threat as AGI, governments would use their power to prohibit private development of it (just as governments prevent private selling of advanced military weapons). Combined with the fact that governments are not totally ineffective, this makes it plausible that the first AGI will be built by a government. Of course, governments might not be first, especially if private companies are fast enough to outrun government prohibitions.
2Lumifer10y
This assumes that the government recognizes AGI development as a security threat which is not a given.
0Brian_Tomasik10y
Agreed. It's an interesting question whether we want governments to realize it or not. I lean toward the "yes" side (in general, it seems better when governments understand catastrophic risks), but we should debate the question more before taking action.
1Desrtopa10y
This may sometimes be the case, but note that "market savvy" isn't necessary to gain useful experience in recognizing skill in prospective employees. You just need effective feedback mechanisms that tell you whether or not you're doing a good job. Many government institutions operate in the absence of such feedback mechanisms, but not all.
-1EGarrett10y
I think the question in this case is whether feedback mechanisms outside of proper free market forces should be labeled "effective," since many of us consider the accuracy of free market feedback to be light-years beyond that of rough individual human judgement. (apologies if this is sliding into an inappropriate political discussion)
1Desrtopa10y
Free market feedback is generally strong, but often subject to perverse incentives. There are matters I would be more comfortable leaving in the hands of free market than the government, and other matters where I would be much less comfortable seeing them handled by the free market. I think that a "general case" where the balance clearly lies in favor of one or the other is probably mythical.
-1EGarrett10y
I've read and seen some really thought-provoking material on ways in which the free market could supposedly do a lot of traditional government roles. There are also sites like judge.me which are testing some of it out, including private contract enforcement and law. So I wouldn't automatically say that government is better at certain things. What kind of perverse incentives are you concerned with? There is certainly some incentive to do things like using force and deception to get money or resources, but the market also includes a mechanism for punishing this and disincentivizing that type of behavior, and I'd say the same incentive exists in governments.
3Desrtopa10y
There are a huge number of potential perverse incentives idiosyncratic to the specific cases; again, I don't think this is an issue with a practical "general case" answer. If you want to ask me to, say, name a few specific cases, I could do that, but it should be with the understanding that they shouldn't be taken as representative examples whereby, if we solve them, we can generalize those methods to all remaining perverse incentive scenarios. I can definitely think of examples off the top of my head that require neither the use of force nor deception. The government is also subject to some perverse incentives, some of which do not apply to markets, but in some cases it fares better because, while businesses are required to keep their own interests at the bottom line, and in some situations those interests can diverge significantly from those of their consumer base, the intended purpose of the government is to serve the populace. This book has a reasonable concentration of examples of businesses operating under perverse incentives, but also some examples of free market enterprises offering services with higher costs and lower efficiency than government bodies offering the same services, and might be worthwhile food for thought.
0EGarrett10y
Thanks for the link. Of course, I felt like it would be easiest to discuss some quick examples from your point of view, as I don't want to mischaracterize you. But if you'd prefer not to that's fine. As I said, I don't want to get too far into political arguments anyway.
0oooo10y
Judge.me was shut down in July 2013, but evidently Net-Arb is another service carrying on the Judge.me torch and focusing primarily on internet arbitration.
0EGarrett10y
Thanks for the updated information.
4Brian_Tomasik10y
:) The intuition here is that AI is unlike normal software in that it's a national (indeed, world) security threat. Governments historically have had monopolies on weapons of mass destruction and have been the primary developers thereof. AI is somewhat different in being inherently dual-use and a goal that many people eventually want to happen in some form (whereas nobody prefers for nuclear, chemical, or biological weapons to be developed except for their strategic utility).
2ygert10y
People are glad that there is such a thing as nuclear power, so nuclear technology should probably also be classified as dual-use. However, your example of chemical and biological weapons as things no one wants still stands.
4Nornagest10y
It's possible to create simple chemical weapons fairly easily with the resources of a high-school chemistry lab; that's about as dual-use as it gets. Nerve agents and the like are more complicated, but still feasible without exotic infrastructure if you can get your hands on the precursors. The problem is more that they don't actually work all that well; the de-facto moratorium on their use has as much to do with practical problems as moral. Aum Shinrikyo's 1995 sarin attack on the Tokyo subway, for example, caused about as many casualties as a small to medium-sized bombing and took far more coordination and technical expertise. Biological weapons hitherto have been in a different category, but that might not last as cheap bioengineering tools become available; I don't know enough about that field to comment authoritatively, though. On the other hand, I expect nuclear technology to grow less dual-use in the near future, as more reactor designs come online that require less fuel enrichment and don't generate plutonium.
3Brian_Tomasik10y
Yes, though people also want better living through chemistry and better health through biotech. I guess my thought was that with AI, there's not obviously a distinction at all between the military vs. civilian forms. A civilian AI is almost necessarily also a world-security hazard just by its existence, whereas nuclear power plants need some work to be converted to bombs.
3Kaj_Sotala10y
The distinction also feels very thin with some biotech research: consider e.g. the various debates of whether to publish the genome of various diseases. Arguably, there it might be easier to use that information to do damage than to do good: to do damage, you only need to synthesize the pathogen, whereas figuring out how to use the genome to defend against it better takes more effort.
1ygert10y
True. My point was that the technology for nuclear weapons was inextricably tied with the technology for civilian nuclear power. You can't have the technology for one without the other. (I will admit that this is not exactly the same thing as not being able to have one without another, but it's pretty close.) And you do make a good point on the topic of chemistry and biotech also having ties in that direction.

whoever builds the first AI can take over the world, which makes building AI the ultimate arms race.

As the Wikipedians often say, "citation needed". The first "AI" was built decades ago. It evidently failed to "take over the world". Possibly someday a machine will take over the world - but it may not be the first one built.

2Brian_Tomasik10y
In the opening sentence I used the (perhaps unwise) abbreviation "artificial general intelligence (AI)" because I meant AGI throughout the piece, but I wanted to be able to say just "AI" for convenience. Maybe I should have said "AGI" instead.
[anonymous]10y-20

Well, nice to know we're planning our global thermonuclear wars decades before there's any sign we'll need a global thermonuclear war for any good reason.

Goddamnit, do you people just like plotting wars!?

0Jiro10y
You're equivocating on the word "need". When one refers to needing most things, it means we're better off with them than with not having them. But for global thermonuclear war, the comparison is not to having no war; the comparison is to having a war where other parties are the ones with all the nukes. Furthermore, describing many actions in terms of "need" is misleading. "Needing" something normally implies a naive model where if you want X to happen, you are willing to do X and vice versa. Look up everything that has been written here about precommitting; nuclear war is a case of precommitting and precommitting to something can actually reduce its likelihood.
0[anonymous]10y
No, we're not talking about that kind of war. We're not talking about a balance of power that can be maintained through anti-proliferation laws (though I certainly support international agreements to not build AI and contribute to a shared, international FAI project!). If we get to the point of an American FAI versus a Chinese FAI, the two AIs will negotiate a rational compromise to best suit the American and Chinese CEVs (which won't even be that different compared to, say, Clippy). Whereas if we get one UFAI that manages to go FOOM, it doesn't fucking matter who built it: we're all dead. So the issue is not, "You don't build UFAI and I won't build UFAI." The issue is simply: don't build UFAI, ever, at all. All humans have rational reason to buy this proposition.

There are actually two better options here than preemptively plotting an existential-risk-grade war. They are not dichotomous and I personally support employing both:

  • Plot an international treaty to limit the creation of FOOM-able AIs outside a strict framework of cooperative FAI development that involves a broad scientific community and limits the resources needed for rogue states or organizations to develop UFAI. This favors the singleton approach advocated by Nick Bostrom and Eliezer Yudkowsky, and also avoids thermonuclear war. An Iraq-style conventional war of regime change is already a severe enough threat to bend most nations' interests in favor of either cooperative FAI development or just not developing AI.
  • For the case of a restricted-domain FAI being created, encourage global economic cooperation and cultural interaction, to ensure that whether the first FAI is Chinese or American, it will infer values over humans of a more global rather than parochial culture and orientation (though I had thought Eliezer's cognitivist approach to human ethics was meant to be difficult to corrupt using mere cultural brainwashing).

That leaves the following military options: in case of a regime showin...
1Lumifer10y
Oh, how... rebel of you. May I recommend less drama?
0[anonymous]10y
Frankly, when someone writes a post recommending global thermonuclear war as a possible option, that's my line. My suggested courses of action are noticeably less melodramatic and noticeably closer to the plain, boring field of WW3-prevention. But I gave you the upvote anyway for calling out my davkanik tendencies.
2Nornagest10y
I'm genuinely confused. There's an analogy to a nuclear arms race running through the OP, but as best I can tell it's mostly linking AI development controls to Cold War-era arms control efforts -- which seems reasonable, if inexact. Certainly it's not advocating tossing nukes around. Can you point me to exactly what you're responding to?
1[anonymous]10y
Ah, I seem to be referring to James' excerpt from his book rather than the OP:
1Nornagest10y
Oh, that makes more sense. I'd assumed, since this thread was rooted under the OP, that you were responding to that. After reading James's post, though, I don't think it's meant to be treated as comprehensive, much less prescriptive. He seems to be giving some (fictional) outlines of outcomes that could arise in the absence of early and aggressive cooperation on AI development; the stakes at that point are high, so the consequences are rather precipitous, but this is still something to avoid rather than something to pursue. Reading between the lines, in fact, I'd say the policy implications he's gesturing towards are much the same as those you've been talking about upthread. On the other hand, it's very early to be hashing out scenarios like this, and doing so doesn't say anything particularly good about us from a PR perspective. It's hard enough getting people to take AI seriously as a risk, full stop; we don't need to exacerbate that with wild apocalyptic fantasies just yet.
2[anonymous]10y
This bears investigating. I mean, come on, the popular view of AI among the masses is that All AI Is A Crapshoot, that every single time it will end in the Robot Wars. So how on Earth can it be difficult to convince people that UFAI is an issue? I mean, hell, if I wanted to scare someone, I'd just point out that no currently-known model of AGI includes a way to explicitly specify goals desirable to humans. That oughtta scare folks.
3TheOtherDave10y
I've talked to a number of folks who conclude that AIs will be superintelligent and therefore will naturally derive and follow the true morality (you know, the same one we do), and dismiss all that Robot Wars stuff as television crap (not unreasonably, as far as it goes).
2[anonymous]10y
Which one's that, eh ;-)? Are these religious people? I mean, come on, where do you get moral realism if not from some kind of moral metaphysics? Certainly it's not unreasonable. One UFAI versus humans with no FAI to fight back, I wouldn't call anything so one-sided a war. (And I'm sooo not making the Dalek reference that I really want to. Someone else should do it.)
3TheOtherDave10y
I've never had that conversation with explicitly religious people, and moral realism at the "some things are just wrong and any sufficiently intelligent system will know it" level is hardly unheard of among atheists.
5[anonymous]10y
Really? I mean, sorry for blathering, but I find this extremely surprising. I always considered it a simple fact that if you don't have some kind of religious/faith-based metaphysics operating, you can't be a moral realist. What experiment could you possibly perform to test moral-realist hypotheses, particularly when dealing with nonhumans? It simply doesn't make any sense. Oh well.
6Brian_Tomasik10y
Moral realism makes no more sense with religion. As CS Lewis said: "Nonsense does not cease to be nonsense when we put the words 'God can' before it."
4[anonymous]10y
Disagreed, depending on your definition of "morality". A sufficiently totalitarian God can easily not only decide what is moral but force us to find the proper morality morally compelling. (There is at least one religion that actually believes something along these lines, though I don't follow it.)
2Brian_Tomasik10y
Ok, that definition is not nonsense. But in that case, it could happen without God too. Maybe the universe's laws cause people to converge on some morality, either due to the logic of evolutionary cooperation or another principle. It could even be an extra feature of physics that forces this convergence.
3hyporational10y
Perhaps Eli and you are talking past each other a bit. A certain kind of god would be strong evidence for moral realism, but moral realism wouldn't be strong evidence for a god of any kind.
-2[anonymous]10y
Well sure, but if you're claiming physics enforces a moral order, you've reinvented non-theistic religion.
4hyporational10y
Why? Beliefs that make no sense are very common. Atheists are no exception.
0[anonymous]10y
Actually, if anything, I'd call it the reverse. Religious people know where we're making unevidenced assumptions.
2TheOtherDave10y
You talk as though religion were something that appeared in people's minds fully formed and without causes, and that the logical fallacies associated with it were then caused by religion.
0[anonymous]10y
Hmm. Fair point. "We imagine the universe as we are."
0passive_fist10y
Might I suggest you take a look at the metaethics sequence? This position is explained very well.
1[anonymous]10y
Well no, not really. The meta-ethics sequence takes a cognitivist position: there is some cognitive algorithm called ethics, which actual people implement imperfectly but which you could somehow generalize to obtain a "perfect" reification. That's not moral realism ("morality is a part of the universe itself, external to human beings"), that's objective moral-cognitivism ("morality is a measurable part of us but has no other grounding in external reality").
-2TheAncientGeek10y
Which position? The metaethics sequence isn't clearly realist, or anything else.
-2TheAncientGeek10y
That would be epistemology... There are rationally acceptable subjects that don't use empiricism, such as maths, and there are subjects such as economics which have a mixed epistemology. However, if this epistemological-sounding complaint is actually about metaphysics, i.e., "what experiment could you perform to detect a non-natural moral property", the answer is that moral realists have to suppose the existence of a special psychological faculty.
1nshepperd10y
Pedantic complaint about language: moral realism simply says that moral claims do state facts, and at least some of them are true. It takes further assumptions ("internalism") to claim that these moral facts are universally compelling in the sense of moving any intelligent being to action. (I personally believe the latter assumption to be nonsense, hence AGI is a really bad idea.) Granted, I don't know of any nice precise term for that position that all intelligent beings must necessarily do the right thing, possibly because it's so ridiculous no philosopher would profess it publicly in such words. On the other hand, motivational internalism would seem to be very intuitive, judging by the pervasiveness of the view that AI doesn't pose any risk.
-2TheAncientGeek10y
From abstract reason or psychological facts, or physical facts, or a mixture. There is a subject called economics. It tells you how to achieve certain goals, such as maximising GDP. It doesn't do that by corresponding to a metaphysical Economics Object, it does that with a mixture of theoretical reasoning and examination of evidence. There is a subject called ethics. It tells you how to achieve certain goals, such as maximising happiness....
1[anonymous]10y
Well there's the problem: ethics does not automatically start out with a happiness-utilitarian goal. Lots of extant ethical systems use other terminal goals. For instance...
-3TheAncientGeek10y
"Such as"
1[anonymous]10y
Sufficient rationality will tell you how to maximize any goal, once you can clearly define the goal.
2TheAncientGeek10y
Rationality is quite helpful for clarifying goals too.
0polymathwannabe10y
Problem is, economics is not a science: http://www.theatlantic.com/business/archive/2013/04/the-laws-of-economics-dont-exist/274901/
0TheAncientGeek10y
Of course economics doesn't have the well-established laws of physical science: it wouldn't be much of an analogy for ethics if it did. But having an epistemology that doesn't work very well is not the same as having an epistemology that requires non-natural entities.
3polymathwannabe10y
The main problem with economics is not its descriptive, but its predictive power. Too many of economics' calculations need to suppose that everyone will behave rationally, which regular people can't be trusted to do. Same problem with politics.
2Nornagest10y
Well, there's a couple prongs to that. For one thing, it's tagged as fiction in most people's minds, as might be suggested by the fact that it's easily described in trope. That's bad enough by itself. Probably more importantly, though, there's a ferocious tendency to anthropomorphize this sort of thing, and you can't really grok UFAI without burning a good bit of that tendency out of your head. Sure, we ourselves aren't capital-F Friendly, but we're a far cry yet from a paperclip maximizer or even most of the subtler failures of machine ethics; a jealous or capricious machine god is bad, but we're talking Screwtape here, not Azathoth. HAL and Agent Smith are the villains of their stories, but they're human in most of the ways that count. You may also notice that we tend to win fictional robot wars.
6ialdabaoth10y
Also, note that the tropes tend to work against people who say "we have a systematic proof that our design of AI will be Friendly". In fact, in general the only way a fictional AI will turn out 'friendly' is if it is created entirely by accident - ANY fictional attempt to intentionally create a Friendly AI will result in an abomination, usually through some kind of "dick Genie" interpretation of its Friendliness rules.
3Nornagest10y
Yeah. I think I'd consider that a form of backdoor anthropomorphization by way of vitalism, though. Since we tend to think of physically nonhuman intelligences as cognitively human, and since we tend to think of human ethics and cognition as something sacred and ineffable, fictional attempts to eff them tend to be written as crude morality plays. Intelligence arising organically from a telephone exchange or an educational game or something doesn't trigger the same taboos.
2ialdabaoth10y
The currently fashionable descriptor is "metacontrarianism" - you might get better responses if you phrase your objection in that way. (man, I LOVE when things go factorially N-meta)
2[anonymous]10y
I'm not actually sure who the metacontrarian is here.
2ialdabaoth10y
Hence my delight in the factorial metaness.
0Lumifer10y
Looks like you (emphasis mine): and You can be a contrarian with less drama perfectly well :-)
0[anonymous]10y
I would note that "we are all in the process of dying horribly" is actually a pretty dramatic situation. At the moment, actually, I'm not banking on ever seeing it: I think actual AI creation requires such expertise and has such extreme feasibility barriers that successfully building a functioning software-embodied optimization process tends to require such group efforts that someone thinks hard about what the goal system is.
0Lumifer10y
Given that "we are all in the process of dying" is true for all living beings for as long as living beings existed, I don't see anything dramatic in here. As to "horribly", what is special about today's "horror" compared to, say, a hundred years ago?
0[anonymous]10y
I hadn't meant today. I had meant in the case of a UFAI getting loose. That's one of those rare situations where you should consider yourself assuredly dead already and start considering how you're going to kill the damn UFAI, whatever that costs you. Whereas in the present day, I would not employ "nuke it from orbit; only way to be sure" solutions to, well, anything.
0Brian_Tomasik10y
Thanks, Eli. You make some good points amidst the storm. :)

I think the scenario James elaborated was meant to be a fictional portrayal of a bad outcome that we should seek to avoid. That it was pasted without context may have given the impression that he actually supported such a strategy.

I mostly agree with your bullet points. Working toward cooperation and global unification, especially before things get ugly, is what I was suggesting in the opening post. Even if uFAI would destroy its creators, people still have incentive to skimp on safety measures in an arms-race situation because they're trading off some increased chance of winning against some increased chance of killing everyone. If winning the race is better than letting someone else win, then you're willing to tolerate some increased risk of killing everyone. This is why I suggested promoting internationalist perspective as one way to improve the situation -- because then individual countries would care less about winning the race.

BTW, it's not clear that Clippy would kill us all. Like in any other struggle for power, a newly created Clippy might compromise with humans by keeping them alive and giving them some of what they want. This is especially likely if Clippy is risk averse.
1[anonymous]10y
Interesting. So there are backup safety strategies. That's quite comforting to know, actually. Oh thank God. I'd like to apologize for my behavior, but to be honest this community is oftentimes over my Poe's Law Line where I can no longer actually tell if someone is acting out a fictional parody of a certain idea or actually believes in that idea. Next time I guess I'll just assign much more probability to the "this person is portraying a fictional hypothetical" notion. Sorry, could you explain? I'm not seeing it. That is, I'm not seeing how increasing the probability that your victory equates with your own suicide is better than letting someone else just kill you. You're dead either way.
0Brian_Tomasik10y
No worries. :-) Say that value(you win) = +4, value(others win) = +2, value(all die) = 0. If you skimp on safety measures for yourself, you can increase your probability of winning relative to others, and this is worth some increased chance of killing everyone. Let me know if you want further clarification. :) The final endpoint of this process will be a Nash equilibrium, as discussed in "Racing to the Precipice," but what I described could be one step toward reaching that equilibrium.
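
To make the numbers concrete, here is a quick expected-value sketch of why a racer might accept extra existential risk. The payoffs are the ones from the comment above; the probabilities are invented purely for illustration.

```python
# Payoffs from the comment above; the probabilities below are made up for illustration.
V_WIN, V_LOSE, V_DOOM = 4, 2, 0

def expected_value(p_win, p_doom):
    # p_win is your chance of winning conditional on no catastrophe occurring.
    return (1 - p_doom) * (p_win * V_WIN + (1 - p_win) * V_LOSE) + p_doom * V_DOOM

careful = expected_value(p_win=0.4, p_doom=0.05)   # more safety work, slower
reckless = expected_value(p_win=0.7, p_doom=0.15)  # skimp on safety, move faster
print(f"careful: {careful:.2f}, reckless: {reckless:.2f}")  # reckless scores higher here
```

Under these made-up numbers the reckless strategy comes out ahead (about 2.89 versus 2.66), which is exactly the incentive the arms-race framing worries about: with each side reasoning this way, the resulting equilibrium carries more risk than either side would prefer.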