Note: This was automatically imported from the AI Impacts blog and neither the footnotes nor the images imported properly. For now, I recommend reading it on our site.

At AI Impacts, we’ve been looking into how people, institutions, and society approach novel, powerful technologies. One part of this is our technological temptations project, in which we are looking into cases where some actors had a strong incentive to develop or deploy a technology, but chose not to or showed hesitation or caution in their approach. Our researcher Jeffrey Heninger has recently finished some case studies on this topic, covering geoengineering, nuclear power, and human challenge trials.

This document summarizes the lessons I think we can take from these case studies. Much of it is borrowed directly from Jeffrey’s written analysis or conversations I had with him, some of it is my independent take, and some of it is a mix of the two, which Jeffrey may or may not agree with. All of it relies heavily on his research.

The writing is somewhat more confident than my beliefs. Some of this is very speculative, though I tried to flag the most speculative parts as such.

Summary

Jeffrey Heninger investigated three cases of technologies that could create substantial value but were not pursued, or were pursued more slowly than they otherwise might have been.

The overall scale of value at stake was very large for these cases, on the order of hundreds of billions to trillions of dollars. But it’s not clear who could capture that value, so it’s not clear whether the temptation was closer to $10B or $1T.

Social norms can generate strong disincentives for pursuing a technology, especially when combined with enforceable regulation.

Scientific communities and individuals within those communities seem to have particularly high leverage in steering technological development at early stages.

Inhibiting deployment can inhibit development for a technology over the long term, at least by slowing cost reductions.

Some of these lessons are transferable to AI, at least enough to be worth keeping in mind.

Overview of cases

  1. Geoengineering could feasibly provide benefits of $1-10 trillion per year through global warming mitigation, at a cost of $1-10 billion per year, but actors who stand to gain the most have not pursued it, citing a lack of research into its feasibility and safety. Research has been effectively prevented by climate scientists and social activist groups.
  2. Nuclear power has proliferated globally since the 1950s, but many countries have prevented or inhibited the construction of nuclear power plants, sometimes at an annual cost of tens of billions of dollars and thousands of lives. This is primarily done through legislation, like Italy’s ban on all nuclear power, or through costly regulations, like safety oversight in the US that has increased the cost of plant construction by a factor of ten.
  3. Human challenge trials may have accelerated deployment of covid vaccines by more than a month, saving many thousands of lives and billions or trillions of dollars. Despite this, the first challenge trial for a covid vaccine was not performed until after several vaccines had been tested and approved using traditional methods. This is consistent with the historical rarity of challenge trials, which seems to be driven by ethical concerns and enforced by institutional review boards.

Scale

The first thing to notice about these cases is the scale of value at stake. Mitigating climate change could be worth hundreds of billions or trillions of dollars per year, and deploying covid vaccines a month sooner could have saved many thousands of lives. While these numbers do not represent a major fraction of the global economy or the overall burden of disease, they are large compared to many relevant scales for AI risk. The world’s most valuable companies have market caps of a few trillion dollars, and the entire world spends around two trillion dollars per year on defense. In comparison, annual funding for AI is on the order of $100B.1

Comparison between the potential gains from mitigating global warming and deploying covid vaccines faster. These items were somewhat arbitrarily chosen, and most of the numbers were not carefully researched, but they should be in the right ballpark.
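Since the figure did not import, here is a minimal sketch tabulating the order-of-magnitude figures quoted in this post. The ranges are the ones stated in the text and were not carefully researched.

```python
# Order-of-magnitude stakes quoted in this post (not carefully researched).
# Each entry is a (low, high) range in US dollars per year.
stakes = {
    "geoengineering: climate mitigation benefit": (1e12, 10e12),
    "geoengineering: deployment cost":            (1e9,  10e9),
    "faster covid vaccine deployment":            (1e9,  1e12),   # "billions or trillions"
    "annual funding for AI":                      (1e11, 1e11),   # "on the order of $100B"
    "world defense spending":                     (2e12, 2e12),
}

for name, (low, high) in stakes.items():
    print(f"{name:45s} ${low / 1e9:>6,.0f}B to ${high / 1e9:>8,.0f}B per year")
```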

Setting aside for the moment who could capture the value from a technology and whether the reasons for delaying or forgoing its development are rational or justified, I think it is worth recognizing that the potential upsides are large enough to create strong incentives.

Social norms

My read on these cases is that a strong determinant of whether a technology will be pursued is social attitudes toward the technology and its regulation. I’m not sure what would have happened if Pfizer had, in defiance of FDA standards and medical ethics norms, infected volunteers with covid as part of their vaccine testing, but I imagine the consequences would have been more severe than fines or difficulty obtaining FDA approval. They would have lost standing in the medical community and possibly been unable to continue existing as a company. The same goes for other technologies and actors. Building nuclear power plants without adhering to safety standards is so far outside the range of acceptable actions that even suggesting it as a strategy for running a business or addressing climate change is a serious reputational risk for a CEO or public official. An oil company executive who finances a project to disperse aerosols into the upper atmosphere to reduce global warming and protect his business sounds like a Bond movie villain.

This is not to suggest that social norms are infinitely strong or that they are always well-aligned with society’s interests. Governments and corporations will do things that are widely viewed as unethical if they think they can get away with it, for example, by doing it in secret.2 And I think that public support for our current nuclear safety regime is gravely mistaken. But strong social norms, either against a technology or against breaking regulations, do seem able, at least in some cases, to create incentives strong enough to constrain valuable technologies.

The public

The public plays a major role in defining and enforcing the range of acceptable paths for technology. Public backlash in response to early challenge trials set the stage for our current ethics standards, and nuclear power faces crippling safety regulations in large part because of public outcry in response to a perceived lack of acceptable safety standards. In both of these cases, the result was not just the creation of regulations, but strong buy-in and a souring of public opinion on a broad category of technologies.3

Although public opposition can be a powerful force in expelling things from the Overton window, it does not seem easy to predict or steer. The Chernobyl disaster made a strong case for designing reactors in a responsible way, but it was instead viewed by much of the public as a demonstration that nuclear power should be abolished entirely. I do not have a strong take on how hard this problem is in general, but I do think it is important and should be investigated further.

The scientific community

The precise boundaries of acceptable technology are defined in part by the scientific community, especially when technologies are very early in development. Policy makers and the public tend to defer to what they understand to be the official, legible scientific view when deciding what is or is not okay. This does not always match with actual views of scientists.

Geoengineering as an approach to reducing global warming has not been recommended by the IPCC, and a minority of climate scientists support research into geoengineering. Presumably the advocacy groups opposing geoengineering experiments would have faced a tougher battle if the official stance from the climate science community were in favor of geoengineering.

One interesting aspect of this is that scientific communities are small and heavily influenced by individual prestigious scientists. The taboo on geoengineering research was broken by the editor of a major climate journal, after which the number of papers on the topic increased by more than a factor of 20 within two years.4

Scientific papers published on solar radiation management by year. Paul Crutzen, an influential climate scientist, published a highly-cited paper on the use of aerosols to mitigate global warming in 2006. Oldham et al. 2014.

I suspect the public and policymakers are not always able to tell the difference between the official stance of regulatory bodies and the consensus of scientific communities. My impression is that scientific consensus is not in favor of radiation health models used by the Nuclear Regulatory Commission, but many people nonetheless believe that such models are sound science.

Warning shots

Past incidents like the Fukushima disaster and the Tuskegee syphilis study are frequently cited by opponents of nuclear power and human challenge trials. I think this may be significant, because it suggests that these “warning shots” have done a lot to shape perception of these technologies, even decades later. One interpretation of this is that, regardless of why someone is opposed to something, they benefit from citing memorable events when making their case. Another, non-competing interpretation is that these events are causally important in the trajectory of these technologies’ development and the public’s perception of them.

I’m not sure how to untangle the relative contribution of these effects, but either way, it suggests that such incidents are important for shaping and preserving norms around the deployment of technology.

Locality

In general, social norms are local. Building nuclear power plants is much more acceptable in France than it is in Italy. Even if two countries allow the construction of nuclear power plants and have similarly strong norms against breaking nuclear safety regulations, those safety regulations may be different enough to create a large difference in plant construction between countries, as seen with the US and France.

Because scientific communities have members and influence across international borders, they may have more sway over what happens globally (as we’ve seen with geoengineering), but this may be limited by local differences in the acceptability of going against scientific consensus.

Development trajectories

A common feature of these cases is that preventing or limiting deployment of the technology inhibited its development. And because less developed technologies are less useful and harder to trust, slower development in turn further reduced deployment.

Normally, things become cheaper to make as we make more of them, in a somewhat predictable way: the unit cost falls with the total amount that has ever been produced, following a power law. This is what has been happening with solar and wind power.5
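As a minimal sketch of that power-law relationship (sometimes called Wright's law), here is what a hypothetical 20% learning rate looks like; the specific rate is illustrative, not a figure from the case studies.

```python
import math

def unit_cost(cumulative_output, initial_cost=1.0, learning_rate=0.20):
    """Wright's law: each doubling of cumulative output cuts unit cost by `learning_rate`."""
    b = -math.log2(1 - learning_rate)          # exponent of the power law
    return initial_cost * cumulative_output ** (-b)

# Cost falls by the same fraction (20% here) with every doubling of total output.
for q in [1, 2, 4, 8, 16, 32]:
    print(f"cumulative output {q:>2}: unit cost = {unit_cost(q):.3f}")
```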

Levelized cost of energy for wind and solar power, as a function of total capacity built. Levelized cost includes the costs of building, operating, and maintaining wind and solar farms. Bolinger 2022.

Initially, building nuclear power plants seems to have become cheaper in the usual way for a new technology: doubling the total capacity of nuclear power plants reduced the cost per kilowatt by a constant fraction. Starting around 1970, regulations and public opposition to building plants did more than increase construction costs in the near term. By reducing the number of plants built and inhibiting small-scale design experiments, they slowed the development of the technology, and correspondingly reduced the rate at which we learned to build plants cheaply and safely.6 Absent reductions in cost, nuclear plants continue to be uncompetitive with other power generating technologies in many contexts.

Construction costs for nuclear power in France and the US followed typical cost reduction curves until roughly 1970, after which they showed the opposite behavior, rising as more capacity was built. France’s increase was much more gradual than that of the US. Lang 2017.

Because solar radiation management acts on a scale of months to years and the costs of global warming are not yet very high, I am not surprised that we have still not deployed it. But this does not explain the lack of research. One of the reasons given for opposing experiments is that solar radiation management has not been shown to be safe, yet the reason we lack evidence on safety is that research has been opposed, even at small scales.

It is less clear to me how much the relative lack of human challenge trials in the past7 has made us less able to do them well now. I’m also not sure how much a stronger past record of challenge trials would cause them to be viewed more positively. Still, absent evidence that medical research methodology does not improve in the usual way with quantity of research, I expect we are at least somewhat less effective at performing human challenge trials than we otherwise would be.

Separating safety decisions from gains of deployment

I think it’s impressive that regulatory bodies are able to prevent the use of technology even when the cost of doing so is on the scale of many billions, plausibly trillions of dollars. One of the reasons this works seems to be that regulators will be blamed if they approve something and it goes poorly, but they will not receive much credit if things go well. Similarly, they will not be held accountable for failing to approve something good. This creates strong incentives for avoiding negative outcomes while creating little incentive to seek positive outcomes. I’m not sure if this asymmetry was deliberately built into the system or if it is a side effect of other incentive structures (e.g., at the level of politics, there is more benefit from placing blame than there is from giving credit), but it is a force to be reckoned with, especially in contexts where there is a strong social norm against disregarding the judgment of regulators.
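As a toy illustration of this asymmetry (the payoff numbers below are made up, not drawn from the case studies), a regulator who absorbs most of the blame for bad approvals but captures almost none of the credit for good ones can prefer to reject a technology even when its expected social value is clearly positive:

```python
# Toy model of asymmetric incentives: approving a technology that succeeds with
# probability p. All payoff numbers are made up purely for illustration.
p_success = 0.9

# Payoffs if the technology is approved and turns out good vs. bad; rejecting scores 0.
society_payoffs   = {"good": +100, "bad": -100}   # society captures the upside
regulator_payoffs = {"good": +1,   "bad": -50}    # little credit, a lot of blame

def expected_value_of_approval(payoffs):
    return p_success * payoffs["good"] + (1 - p_success) * payoffs["bad"]

print("society's expected value of approval:  ", expected_value_of_approval(society_payoffs))    # +80.0 -> approve
print("regulator's expected value of approval:", expected_value_of_approval(regulator_payoffs))  # -4.1 -> reject
```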

Who stands to gain

It is hard to assess which actors are actually tempted by a technology. While society at large could benefit from building more nuclear power plants, much of the benefit would be dispersed as public health gains, and it is difficult for any particular actor to capture that value. Similarly, while many deaths could have been prevented if the covid vaccines had been available two months earlier, it is not clear whether this value could have been captured by Pfizer or Moderna, since demand for vaccines was not changing that quickly.

On the other hand, not all of the benefits are external: switching from coal to nuclear power in the US could save tens of billions of dollars a year, and drug companies pay billions of dollars per year for trials. Some government institutions and officials have the stated goal of creating benefits like public health, in addition to economic and reputational stakes in outcomes like the quick deployment of vaccines during a pandemic. These institutions pay costs and make decisions on the basis of economic and health gains from technology (for example, subsidizing photovoltaics and obesity research), suggesting they have an incentive to create that value.

Overall, I think this lack of clarity around incentives and capture of value is the biggest reason for doubt that these cases demonstrate strong resistance to technological temptation.

What this means for AI

How well these cases generalize to AI will depend on facts about AI that are not yet known. For example, if powerful AI requires large facilities and easily-trackable equipment, I think we can expect lessons from nuclear power to be more transferable than if it can be done at a smaller scale with commonly-available materials. Still, I think some of what we’ve seen in these cases will transfer to AI, either because of similarity with AI or because they reflect more general principles.

Social norms

The main thing I expect to generalize is the power of social norms to constrain technological development. While it is far from guaranteed to prevent irresponsible AI development, especially if building dangerous AI is not seen as a major transgression everywhere that AI is being developed, it does seem like the world is much safer if building AI in defiance of regulations is seen as similarly villainous to building nuclear reactors or infecting study participants without authorization. We are not at that point, but the public does seem prepared to support concrete limits on AI development.


I do think there are reasons for pessimism about norms constraining AI. For geoengineering, the norms worked by tabooing a particular topic in a research community, but I’m not sure if this will work with a technology that is no longer at such an early stage. AI already has a large body of research and many people who have already invested their careers in it. For medical and nuclear technology, the norms are powerful because they enforce adherence to regulations, and those regulations define the constraints. But it can be hard to build regulations that create the right boundaries around technology, especially something as imprecisely defined as AI. If someone starts building a nuclear power plant in the US, it will become clear relatively early on that this is what they are doing, but a datacenter training an AI and a datacenter updating a search engine may be difficult to tell apart.

Another reason for pessimism is tolerance for failure. Past technologies have mostly carried risks that scaled with how much of the technology was built. For example, if you’re worried about nuclear waste, you probably think two power plants are about twice as bad as one. While risk from AI may turn out this way, it may be that a single powerful system poses a global risk. If this does turn out to be the case, then even if strong norms combine with strong regulation to achieve the same level of success as for nuclear power, it still will not be adequate.
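Here is a minimal sketch of that contrast, with made-up numbers: under the "scales with quantity" profile, allowing fewer deployments buys a proportional reduction in harm, whereas if any single system can cause a global catastrophe, the chance of disaster climbs toward certainty as deployments accumulate.

```python
# Contrast two hypothetical risk profiles (all numbers made up for illustration).
harm_per_deployment = 1.0          # "nuclear-waste-like" harm, roughly linear in quantity
p_catastrophe_per_system = 0.01    # chance that any single system causes a global catastrophe

for n in [1, 10, 100, 1000]:
    linear_harm = n * harm_per_deployment
    p_any_catastrophe = 1 - (1 - p_catastrophe_per_system) ** n
    print(f"deployments={n:>4}: linear harm = {linear_harm:>6.0f}, "
          f"P(at least one catastrophe) = {p_any_catastrophe:.3f}")
```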

Development gains from deployment

I’m very uncertain how much development of dangerous AI will be hindered by constraints on deployment. I think approximately all technologies face some limitations like this, in some cases very severe limitations, as we’ve seen with nuclear power. But we’re mainly interested in the gains to development toward dangerous systems, which may be possible to advance with little deployment. Adding to the uncertainty, it is ambiguous where the line is drawn between testing and deployment, and whether allowing the deployment of verifiably safe systems would provide the gains needed to create dangerous systems.

Separating safety decisions from gains

I do not see any particular reason to think that asymmetric justice will operate differently with AI, but I am uncertain whether regulatory systems around AI, if created, will have such incentives. I think it is worth thinking about IRB-like models for AI safety.

Capture of value

It is obvious that there are actors who believe they can capture substantial value from AI (for example, Microsoft recently invested $10B in OpenAI), but I’m not sure how this will go as AI advances. By default, I expect the value created by AI to be more straightforwardly capturable than for nuclear power or geoengineering, but I’m not sure how it differs from drug development.

Social preview image: German anti-nuclear power protesters in 2012. Used under Creative Commons license from Bündnis 90/Die Grünen Baden-Württemberg Flickr

Comments

The drug development comparison is an interesting one. I think a lot of policy people are operating on the comparison to nuclear, either nuclear weapons or nuclear power. Drug development is notably different because, like AI, it's a case where the thing we want to regulate is an R&D process, not just the eventual product.

A product requirement would be "don't publish an AI that will tell you to commit crimes." An R&D requirement would be "At no point during training should you have an AI that's good at telling you to commit crimes - it should always be either incompetent or safe." But this might be hard to formalize.

It's also hard to know how much we want more accurate measures of AI safety during a development process. We don't really care much about GPT telling people to commit crimes - we really care about future models that model the real world and take strategic actions in it. But even if we did have a good way of measuring those capabilities during training, would we want them written into regulation? Or should we have simpler and broader restrictions on what counts as good AI development practices?

> Drug development is notably different because, like AI, it's a case where the thing we want to regulate is an R&D process, not just the eventual product

I agree, and I think I used "development" and "deployment" in this sort of vague way that didn't highlight this very well.

> But even if we did have a good way of measuring those capabilities during training, would we want them written into regulation? Or should we have simpler and broader restrictions on what counts as good AI development practices?

I think one strength of some IRB-ish models of regulation is that you don't rely so heavily on a careful specification of the thing that's not allowed, because instead of meshing directly with all the other bureaucratic gears, it has a layer of human judgment in between. Of course, this does pass the problem to "can you have regulatory boards that know what to look for?", which has its own problems.

I believe that it is difficult to appreciate your other points when you claim things like:

> Human challenge trials may have accelerated deployment of covid vaccines by more than a month, saving many thousands of lives and billions or trillions of dollars.

> deploying covid vaccines a month sooner could have saved many thousands of lives

I think that it is reasonably unclear that CoViD vaccines have saved as many lives, given how wildly exaggerated the expected mortality figures that were used to make those claims were. Norman Fenton ("a mathematician and computer scientist specialized in Risk Assessment and Decision Analysis with Bayesian Networks") has talked and written about this and other issues for a while now:

Even if we accept a degree of effectiveness from the vaccine, which I dispute, given issues of fading immunity and inverse protection over time (particularly among the most vulnerable cohorts)

Effectiveness of mRNA vaccines and waning of protection against SARS-CoV-2 infection and severe covid-19 during predominant circulation of the delta variant in Italy: retrospective cohort study | The BMJ

https://www.bmj.com/content/376/bmj-2021-069052

maybe by the promotion of IgG4 antibodies:

https://www.preprints.org/manuscript/202303.0441/v1

and who knows if by targeting Interferon Regulatory Factor 3:

https://www.frontiersin.org/articles/10.3389/fcimb.2021.789462/full

we must admit that there were other less risky and very well-known interventions that could have been used in this situation, if we hadn't overreacted (or accelerated) as much as we did:


Figure 2. Forest plot of the association of protective effect of vitamin D supplementation with intensive care unit admission in patients hospitalized with COVID-19. CI, confidence interval; OR, odds ratio [26,32,36,37,38].

https://www.mdpi.com/1424-8247/16/1/130

A double-blind, randomized clinical trial study was conducted on 128 critically ill patients infected with COVID-19 who were randomly assigned to the intervention (fortified formula with n3-PUFA) (n = 42) and control (n = 86) groups. Data on 1 month survival rate, blood glucose, sodium (Na), potassium (K), blood urea nitrogen (BUN), creatinine (Cr), albumin, hematocrit (HCT), calcium (Ca), phosphorus (P), mean arterial pressure (MAP), O2 saturation (O2sat), arterial pH, partial pressure of oxygen (PO2), partial pressure of carbon dioxide (PCO2), bicarbonate (HCO3), base excess (Be), white blood cells (WBCs), Glasgow Coma Scale (GCS), hemoglobin (Hb), platelet (Plt), and the partial thromboplastin time (PTT) were collected at baseline and after 14 days of the intervention.

The intervention group had significantly higher 1-month survival rate compared with the control group (21% vs 3%, P = 0.003). About 21% (n = 6) of the participants in the intervention group and only about 3% (n = 2) of the participants in the control group survived at least for 1 month after the beginning of the study.

The effect of omega-3 fatty acid supplementation on clinical and biochemical parameters of critically ill patients with COVID-19: a randomized clinical trial

https://translational-medicine.biomedcentral.com/articles/10.1186/s12967-021-02795-5

Of the 50 patients enrolled in the N-acetylglucosamine treatment group, 48 patients had follow-up data (50.0% [24/48] male; median age 63 years, range: 29–88). Multivariate analysis showed the treatment group had improved hospital length-of-stay (β: 4.27 [95% confidence interval (CI) −5.67; −2.85], p < 0.001), ICU admission (odds ratio [OR] 0.32 [95% CI 0.10; 0.96], p = 0.049), and poor clinical outcome (OR 0.30 [95% CI 0.09; 0.86], p = 0.034). Mortality was significantly lower for treatment versus control on univariate analysis (12.5% vs. 28.0%, respectively; p = 0.039) and approached significance on multivariate analysis (p = 0.081).

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8282940/

And there are a number of doubts about the safety of these vaccines, which seem to create risks that never existed or that were remarkably uncommon among some population cohorts. We can see it, for example, in the risk of developing myocarditis:

Among 23 122 522 Nordic residents (81% vaccinated by study end; 50.2% female), 1077 incident myocarditis events and 1149 incident pericarditis events were identified. Within the 28-day period, for males and females 12 years or older combined who received a homologous schedule, the second dose was associated with higher risk of myocarditis, with adjusted IRRs of 1.75 (95% CI, 1.43-2.14) for BNT162b2 and 6.57 (95% CI, 4.64-9.28) for mRNA-1273. Among males 16 to 24 years of age, adjusted IRRs were 5.31 (95% CI, 3.68-7.68) for a second dose of BNT162b2 and 13.83 (95% CI, 8.08-23.68) for a second dose of mRNA-1273, and numbers of excess events were 5.55 (95% CI, 3.70-7.39) events per 100 000 vaccinees after the second dose of BNT162b2 and 18.39 (9.05-27.72) events per 100 000 vaccinees after the second dose of mRNA-1273. Estimates for pericarditis were similar.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9021987/

Since the Israeli vaccination program was initiated on 20 December 2020, the time-period matching of the control cohort was calculated backward from 15 December 2020. Nine post-COVID-19 patients developed myocarditis (0.0046%), and eleven patients were diagnosed with pericarditis (0.0056%). In the control cohort, 27 patients had myocarditis (0.0046%) and 52 had pericarditis (0.0088%). Age (adjusted hazard ratio [aHR] 0.96, 95% confidence interval [CI]; 0.93 to 1.00) and male sex (aHR 4.42; 95% CI, 1.64 to 11.96) were associated with myocarditis. Male sex (aHR 1.93; 95% CI 1.09 to 3.41) and peripheral vascular disease (aHR 4.20; 95% CI 1.50 to 11.72) were associated with pericarditis. Post COVID-19 infection was not associated with either myocarditis (aHR 1.08; 95% CI 0.45 to 2.56) or pericarditis (aHR 0.53; 95% CI 0.25 to 1.13). We did not observe an increased incidence of neither pericarditis nor myocarditis in adult patients recovering from COVID-19 infection.

https://www.mdpi.com/2077-0383/11/8/2219

But there are many more safety signals, you can find about them very quickly by following health publications:

In July 2021 the US Food and Drug Administration (FDA) quietly disclosed findings of a potential increase in four types of serious adverse events in elderly people who had had Pfizer’s covid-19 vaccine: acute myocardial infarction, disseminated intravascular coagulation, immune thrombocytopenia, and pulmonary embolism.1 Little detail was provided, such as the magnitude of the increased potential risk, and no press release or other alert was sent to doctors or the public. The FDA promised it would “share further updates and information with the public as they become available.”

Eighteen days later, the FDA published a study planning document (or protocol) outlining a follow-up epidemiological study intended to investigate the matter more thoroughly.2 This recondite technical document disclosed the unadjusted relative risk ratio estimates originally found for the four serious adverse events, which ranged from 42% to 91% increased risk. (Neither absolute risk increases nor confidence intervals were provided.) More than a year later, however, the status and results of the follow-up study are unknown. The agency has not published a press release, or notified doctors, or published the findings by preprint or the scientific literature or updated the vaccine’s product label.

https://www.bmj.com/content/379/bmj.o2527

I will, thus, claim that these vaccines represent a strong point against rushing out new treatments, that there was no reason to rush them out even more, and that a more conservative approach would have, in fact, prevented damage, at least, I believe, in terms of years of life lost, given that CoViD seems to very rarely pose a threat to young people, whereas these vaccines seem to cause more secondary effects precisely among young people.

Would someone care to explain to me what I did wrong to deserve getting my karma obliterated?