Status: An early 'hot take' on a low-probability catastrophic risk. While I don't think this should be a research priority, I'd like to engage more with folks in the substance addiction chemistry community to better understand the risks.

Contention: Highly lethal, highly addictive synthetic drugs (opioids and psychostimulants) are incredibly destructive, and their production should be considered a highly dangerous (100k+ deaths) risk from advanced AI.

Chemical weapons attacks are massively destructive. Consider the chemical weapons attacks in Syria over the last decade. According to the Syrian Network for Human Rights, “217 chemical weapons attacks carried out by the Syrian regime resulted in the deaths of 1,514 individuals”, including 1,413 civilians, and injured a further 11,080 people. Researchers are right to worry about public foundation models giving people the capacity to make large-scale chemical weapons cheaply and effectively.

However, weapons aren’t the only synthetic chemical compounds we should be worried about. Consider what is almost certainly the biggest killer of Americans between the ages of 18 and 49: fentanyl and other synthetic opioids.

In 2021 alone, opioids killed upwards of 80,000 people. The death counts from opioids like fentanyl (and, increasingly, related nitazene compounds) are alarming:

  • Every 14 months or so America loses more people to fentanyl than it has lost in all of its wars combined since the Second World War, from Korea to Afghanistan (Economist). Every 14 months! (A rough back-of-envelope check follows this list.)
  • Some 6m Americans are addicted to opioids, and around four in ten say they know someone who has died of a drug overdose. (Economist)
  • In 2021 a total of 100k+ people died from drug overdoses. The Centers for Disease Control and Prevention noted that "Drug overdose deaths involving psychostimulants such as methamphetamine are increasing with and without synthetic opioid involvement", suggesting that this general problem isn't going away any time soon (Source)
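As a quick gut-check on that first bullet, the arithmetic roughly works out. Here is a minimal back-of-envelope sketch in Python; the war death totals are commonly cited approximate US military death figures, and the annual fentanyl death count is my own assumption — none of these numbers are taken from the Economist piece itself.

```python
# Back-of-envelope check of the "every 14 months" claim.
# All figures are approximate, commonly cited totals
# (my assumptions, not taken from the Economist article).

US_WAR_DEATHS_SINCE_WWII = {
    "Korea": 36_574,
    "Vietnam": 58_220,
    "Gulf War": 383,
    "Iraq": 4_431,
    "Afghanistan": 2_461,
}

ANNUAL_FENTANYL_DEATHS = 87_000  # rough 2021-era synthetic-opioid figure

total_war_deaths = sum(US_WAR_DEATHS_SINCE_WWII.values())
months_to_match = 12 * total_war_deaths / ANNUAL_FENTANYL_DEATHS

print(f"Total US war deaths since WWII: {total_war_deaths:,}")   # ~102,000
print(f"Months of fentanyl deaths to match: {months_to_match:.1f}")  # ~14
```

Under these assumptions, roughly 102,000 war deaths are matched by about 14 months of fentanyl deaths, consistent with the Economist's claim.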

As you would expect, the costs are tremendous. The following figures come from a 2017 study, and by all accounts things have gotten worse since then (a quick tally follows the list):

  • In 2017, more than 2.1 million people over age 12 had an opioid use disorder, and over 47,000 opioid overdose deaths occurred.
  • The value of life lost due to overdose deaths was $480.7 billion.
  • Almost $35 billion was spent on health care and opioid use disorder treatment:
      • health care costs were $31.3 billion;
      • opioid use disorder treatment was $3.5 billion.
  • Criminal justice spending accounted for $14.8 billion.
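To keep those figures straight, here is a minimal tally of the components listed above. The variable names are mine, and the totals are simply sums of the study's quoted numbers, not figures reported by the study itself.

```python
# 2017 opioid-crisis cost components quoted above, in billions of USD.
value_of_life_lost = 480.7
health_care = 31.3
oud_treatment = 3.5   # opioid use disorder treatment
criminal_justice = 14.8

# The "almost $35 billion" line is health care plus treatment:
print(f"Health care + treatment: ${health_care + oud_treatment:.1f}bn")  # $34.8bn

# Summing every listed component (my own aggregate, not the study's):
total = value_of_life_lost + health_care + oud_treatment + criminal_justice
print(f"Sum of listed components: ${total:.1f}bn")  # $530.3bn
```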

"It has become abundantly clear that the opioid epidemic is not only a health crisis, but also an economic and national security one,” Congressman David Trone (D-MD) says, here. But the crisis actually a technological one. We’re too good at making really hard to detect dangerous drugs. 

Fentanyl is so popular because it is potent in small doses and neither hard nor particularly expensive to manufacture. As the Economist notes: “Law-enforcement officers could disrupt supply by burning fields of the stuff or dropping pesticides on coca farms. Fentanyl is different. Synthetic drugs can be manufactured by one person in a basement or a tiny flat. That makes finding and destroying such makeshift labs difficult. The drug’s small size and potency also make it much easier to transport.” As an earlier article concluded: “despite all the attention paid to the disadvantaged and the despairing, the core problem is at once simpler and more depressing: fentanyl is just too easy to get”.

Publicly available foundation models capable of accelerating people’s ability to create nitazenes or opioids (or similarly addictive, similarly lethal psychostimulants) threaten to make this problem a whole lot worse. First, they might make it easier to create drugs that are harder to trace, or more concentrated, allowing them to be shipped in smaller quantities (and making them harder to deploy in sub-fatal doses). Second, they might make it easier for bad actors to create new drugs quickly, making those drugs harder to track and outlaw. These drugs are likely to be manufactured outside the US (China, India and Mexico are the main sites of labs, adding to the complex geopolitical implications), where models may be unregulated and laws slow to catch up and prosecute the gangs responsible. Most fentanyl is then carried through legal ports of entry by Americans, and much of it is probably consumed by people who don’t even know that they’re taking a synthetic drug.

Downstream, an AI-powered opioid-style epidemic would have drastic consequences. Drug deaths polarise populations, often along racial lines (early in the epidemic, the Economist reports, white and Native American people were dying at a much higher rate than other racial groups; things evened out over time). According to the “deaths of despair” hypothesis, advanced by Anne Case and Angus Deaton of Princeton University, they also tend to afflict financially vulnerable groups. The Economist cites a 2013 study by Justin Pierce of the Federal Reserve and Peter Schott of Yale University arguing that counties exposed to import competition from China after 2000 had higher unemployment rates and more overdose deaths. These are precisely the populations that may be at risk of AI-powered redundancy. At this point, the term ‘epidemic’ really hits home. Opioids are highly addictive and highly contagious: that is, you’re a lot more likely to start using if you know someone who uses. This means that an AI-powered synthetic drug epidemic would not only create and reinforce racial and class divisions, but also place massive strains on health infrastructure, federal budgets, and political attention.

I’m going to do a lot more work on this crisis over the next few months, but for now I have a few recommendations:

  • Models capable of enhancing the production of synthetic psychostimulants should be screened for these capabilities with the same intensity applied to chemical and biological weapons, starting with studies by organisations like OpenAI and RAND
  • Legislation should follow suit, prosecuting developers who release models with these capabilities just as harshly as those who release models with chemical and biological weapons capabilities
  • Since models outside the US may still be used to create these drugs, safe psychostimulants that wean people off the destructive effects of opioids should be a research priority, so that the threat can be addressed proactively

What do you think? Feel free to drop a comment down below.
 

2 comments

I don't think the general idea is wrong. And it's easy to generalize (to, for instance, the engineering of new viruses).

Lootboxes, clickbait, sexualization, sugar, drugs, etc. are superstimuli, and they form maxima, which means that you can't really compete with them or create healthier alternatives that are better.

Since AIs optimize, they're likely to discover these dangerous maxima. And if there's one defense against Moloch, it's a lack of information. Atomic weapons are only dangerous because we can make them, and lootboxes are only harming gaming because we know these strategies of exploitation.

It's likely that AIs can find drugs which feel so good that people will destroy themselves just for a second dose. Something much more addictive than anything which exists currently. Outside of drugs too, AIs can find extremely effective strategies with terrible consequences, and both AIs and humans tend towards the most effective strategies, even if everyone loses in the process.

We have fought against dishonesty and deception for thousands of years, warned against alcohol, gambling and hedonism, and used strict social norms to guard against their dangers. Now we're discovering much worse things while relaxing our social norms, leading to degeneracy and weak-willed people who can't resist dangerous temptations (and, as we will soon see, religious people had a point about the dangers of indulgence).

You've convinced me of the outcome, but not of the comparative capacity:

  1. Drug addictiveness has an upper limit: the percentage of people who take a drug once and become addicted, and the percentage of people who successfully quit, cap at 100% and 0% respectively. Fentanyl probably isn't too far off those caps.
  2. Even without AI, opioids more addictive than fentanyl will probably be discovered at some point. How much headroom is there for increasing addictiveness?