I don't think the general idea is wrong. And it's easy to generalize (to, for instance, the engineering of new viruses).
Lootboxes, clickbait, sexualization, sugar, drugs, etc. are superstimuli, and they sit at maxima of appeal, which means that you can't really compete with them or offer healthier alternatives that win.
Since AIs optimize, they're likely to discover these dangerous maxima. And if there's one defense against Moloch, it's a lack of information: atomic weapons are only dangerous because we know how to make them, and lootboxes only harm gaming because we know these strategies of exploitation.
It's likely that AIs can find drugs which feel so good that people will destroy themselves just for a second dose: something far more addictive than anything that currently exists. Outside of drugs, too, AIs can find extremely effective strategies with terrible consequences, and both AIs and humans gravitate towards the most effective strategies, even if everyone loses in the process.
We have fought against dishonesty and deception for thousands of years, warned against alcohol, gambling and hedonism, and used strict social norms to guard against their dangers. Now we're discovering much worse things while simultaneously relaxing those norms, leading to degeneracy and weak-willed people who can't resist dangerous temptations (and, as we will soon see, religious people had a point about the dangers of indulgence).
You convince me of the outcome, but not of the comparative capacity:
A friend asked me what I might submit for this essay competition. This is my go at an answer (I don't have time to write a full essay!):
Highly lethal, addictive synthetic opioids are incredibly destructive, and their production should be considered a highly dangerous (100k+ deaths) risk from advanced AI.
Chemical weapons attacks are massively destructive. Consider those carried out in Syria over the last decade: according to the Syrian Network for Human Rights, “217 chemical weapons attacks carried out by the Syrian regime resulted in the deaths of 1,514 individuals”, including 1,413 civilians, and injured a further 11,080 people. The capacity to make large-scale chemical weapons cheaply and effectively using public foundation models is something researchers are right to worry about.
However, weapons aren’t the only synthetically generated chemical compounds we should worry about. I’m thinking about what is almost certainly the biggest killer of Americans between the ages of 18 and 49: fentanyl and other synthetic opioids.
In 2021 alone, opioids killed upwards of 80,000 people in the US. The death counts from opioids like fentanyl (and, increasingly, related nitazene compounds) are alarming:
As you would expect, then, the costs are tremendous (the following is from a 2017 study; by all accounts things have gotten worse since then):
"It has become abundantly clear that the opioid epidemic is not only a health crisis, but also an economic and national security one,” Congressman David Trone (D-MD) says, here. But the crisis is actually a technological one: we’re too good at making dangerous drugs that are very hard to detect.
Fentanyl is so popular because it is potent in small doses and neither hard nor particularly expensive to manufacture. As the Economist notes: “Law-enforcement officers could disrupt supply by burning fields of the stuff or dropping pesticides on coca farms. Fentanyl is different. Synthetic drugs can be manufactured by one person in a basement or a tiny flat. That makes finding and destroying such makeshift labs difficult. The drug’s small size and potency also make it much easier to transport.” As an earlier article concluded: “despite all the attention paid to the disadvantaged and the despairing, the core problem is at once simpler and more depressing: fentanyl is just too easy to get”.
Publicly available foundation models capable of accelerating people’s ability to create nitazenes or opioids (or similarly addictive, highly lethal compounds) threaten to make this problem a whole lot worse. First, they might make it easier to create drugs that are harder to trace, or more concentrated, allowing them to be shipped in smaller quantities (and making it harder to deploy them in sub-lethal doses). Second, they might make it easier for parties to create new drugs quickly, making them harder to track and illegalise. These drugs are likely to be manufactured outside the US (China, India and Mexico are the main sites of labs, adding complex geopolitical implications), where models may be unregulated and laws slow to catch up, whether by illegalising new compounds or prosecuting the gangs responsible. Most fentanyl is then carried through legal ports of entry by Americans, and much of it is consumed by people who don’t even know that they’re taking a synthetic drug.
Downstream, an AI-powered opioid-style epidemic would have drastic consequences. Drug deaths polarise populations, often along racial lines (early in the epidemic, the Economist reports, white and Native American people were dying at much higher rates than other racial groups; the rates evened out over time). According to the “deaths of despair” hypothesis, advanced by Anne Case and Angus Deaton of Princeton University, such deaths also concentrate among financially vulnerable groups. The Economist cites a 2013 study by Justin Pierce of the Federal Reserve and Peter Schott of Yale University arguing that counties exposed to import competition from China after 2000 had higher unemployment rates and more overdose deaths. These are precisely the populations most at risk of AI-powered redundancy. And the term ‘epidemic’ really is apt: opioids are highly addictive and highly contagious, in the sense that you’re a lot more likely to start using if you know someone who uses. This means that an AI-powered synthetic-drug epidemic would not only create and reinforce racial and class divisions, but also place massive strains on health infrastructure like hospitals, on federal budgets, and on political attention.
I’m going to do a lot more work on this crisis over the next few months, but for now I have a few recommendations:
What do you think? Feel free to drop a comment down below.