
NancyLebovitz comments on "Stupid" questions thread - Less Wrong Discussion

Post author: gothgirl420666, 13 July 2013 02:42AM




Comment author: Kaj_Sotala 14 July 2013 07:22:09PM 6 points

We discuss this proposal in Responses to Catastrophic AGI Risk, under the sections "Regulate research" and "Relinquish technology". I recommend reading both of those sections if you're interested, but a few relevant excerpts:

Large-scale surveillance efforts are ethically problematic and face major political resistance, and it seems unlikely that current political opinion would support the creation of a far-reaching surveillance network for the sake of AGI risk alone. The extent to which such extremes would be necessary depends on exactly how easy it would be to develop AGI in secret. Although several authors make the point that AGI is much easier to develop unnoticed than something like nuclear weapons (McGinnis 2010; Miller 2012), cutting-edge high-tech research does tend to require major investments, which might plausibly be detected even by less elaborate surveillance efforts. [...]

Even under such conditions, there is no clear way to define what counts as dangerous AGI. Goertzel and Pitt (2012) point out that there is no clear division between narrow AI and AGI, and attempts to establish such criteria have failed. They argue that since AGI has a nebulous definition, obvious wide-ranging economic benefits, and potentially rich penetration into multiple industry sectors, it is unlikely to be regulated due to speculative long-term risks.

AGI regulation requires global cooperation, as the noncooperation of even a single nation might lead to catastrophe. Historically, achieving global cooperation on tasks such as nuclear disarmament and climate change has been very difficult. As with nuclear weapons, AGI could give an immense economic and military advantage to the country that develops it first, in which case limiting AGI research might even give other countries an incentive to develop AGI faster (Cade 1966; de Garis 2005; McGinnis 2010; Miller 2012) [...]

To be effective, regulation also needs to enjoy support among those being regulated. If developers working in AGI-related fields only follow the letter of the law, while privately considering all regulations as annoying hindrances and fears about AGI overblown, the regulations may prove ineffective. Thus, it might not be enough to convince governments of the need for regulation; the much larger group of people working in the appropriate fields may also need to be convinced.

While Shulman (2009) argues that the unprecedentedly destabilizing effect of AGI could be a cause for world leaders to cooperate more than usual, the opposite argument can be made as well. Gubrud (1997) argues that increased automation could make countries more self-reliant, and international cooperation considerably more difficult. AGI technology is also much harder to detect than, for example, nuclear technology is—nuclear weapons require a substantial infrastructure to develop, while AGI needs much less (McGinnis 2010; Miller 2012). [...]

Goertzel and Pitt (2012) suggest that for regulation to be enacted, there might need to be an “AGI Sputnik”—a technological achievement that makes the possibility of AGI evident to the public and policy makers. They note that after such a moment, it might not take very long for full human-level AGI to be developed, while the negotiations required to enact new kinds of arms control treaties would take considerably longer. [...]

“Regulate research” proposals: Our view

Although there seem to be great difficulties involved with regulation, there also remains the fact that many technologies have been successfully subjected to international regulation. Even if one were skeptical about the chances of effective regulation, an AGI arms race seems to be one of the worst possible scenarios, one which should be avoided if at all possible. We are therefore generally supportive of regulation, though the most effective regulatory approach remains unclear. [...]

Not everyone believes that the risks involved in creating AGIs are acceptable. Relinquishment involves the abandonment of technological development that could lead to AGI. This is possibly the earliest proposed approach, with Butler (1863) writing that “war to the death should be instantly proclaimed” upon machines, for otherwise they would end up destroying humans entirely. In a much-discussed article, Joy (2000) suggests that it might be necessary to relinquish at least some aspects of AGI research, as well as nanotechnology and genetics research.

Hughes (2001) criticizes AGI relinquishment, and Kurzweil (2005) likewise rejects broad relinquishment while supporting the possibility of “fine-grained relinquishment”: banning some dangerous aspects of technologies while allowing general work on them to proceed. In general, most writers reject proposals for broad relinquishment. [...]

McKibben (2003), writing mainly in the context of genetic engineering, suggests that AGI research should be stopped. He brings up the historical examples of China renouncing seafaring in the 1400s and Japan relinquishing firearms in the 1600s, as well as the more recent decisions to abandon DDT, CFCs, and genetically modified crops in Western countries. However, it should also be noted that Japan participated in World War II, that China now has a navy, and that reasonable alternatives exist for DDT and CFCs but probably do not for AGI, while genetically modified crops are in wide use in the United States.

Hughes (2001) argues that attempts to outlaw a technology will only make the technology move to other countries. He also considers the historical relinquishment of biological weapons to be a bad example, for no country has relinquished peaceful biotechnological research such as the development of vaccines, nor would it be desirable to do so. With AGI, there would be no clear dividing line between safe and dangerous research. [...]

Relinquishment proposals suffer from many of the same problems as regulation proposals, only worse. There is no historical precedent of general, multi-use technology similar to AGI being successfully relinquished for good, nor do there seem to be any theoretical reasons for believing that relinquishment proposals would work in the future. Therefore we do not consider them to be a viable class of proposals.

Comment author: NancyLebovitz 15 July 2013 04:51:38AM 8 points

Butler (1863) writing that “war to the death should be instantly proclaimed”

I had no idea that Herbert's Butlerian Jihad might be a historical reference.

Comment author: Kaj_Sotala 16 July 2013 10:22:56AM 3 points

Wow, I've read Dune several times, but didn't actually get that before you pointed it out.

Comment author: NancyLebovitz 16 July 2013 02:39:31PM 3 points

It turns out that there's a Wikipedia page.