This is inspired by Eliezer’s “Death with Dignity” post. Simply put, AI Alignment has failed. Given the lack of Alignment technology AND a short timeline to AGI takeoff, chances of human survival have dropped to near 0%. This bleak outlook only considers one variable (the science) as a lever for human action. But just as a put option derives its value not merely from current prices and volatility, but also from time to expiration—so too do our odds of success. To hew more closely to the options metaphor, the expected value of human civilization hinges on our AGI timeline. Is there a way to buy more time?
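To make the options metaphor concrete, here is a quick sketch using the standard Black-Scholes formula for a European put. The parameters (spot 100, strike 60, 1% rate, 40% volatility) are purely illustrative, not anything from the post; the point is just that, holding price and volatility fixed, an out-of-the-money put gains value as time to expiration grows.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S, K, T, r, sigma):
    """Black-Scholes price of a European put.
    S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

# A deep out-of-the-money put: spot 100, strike 60.
# More time to expiry means more chance of finishing in the money,
# so the option's value rises with T even though nothing else changes.
for T in (0.25, 1.0, 4.0):
    print(T, round(bs_put(100, 60, T, 0.01, 0.4), 4))
```

The analogy: even when the "price" (our current alignment progress) looks hopeless, extending the time to expiration (the AGI timeline) directly raises the expected value of the position.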
A modest proposal: An FDA for Artificial Intelligence
The AI world has an instinctive revulsion to regulation. This is sensible. Committees of bloodless bureaucrats have an abysmal track record at achieving their stated goals. Eliezer has joked that if the government funded AI research, it might be a good thing because progress would stall. Let’s take that idea seriously.
What if we shifted efforts into regulating AI, domestically and globally? While regulation has a terrible track record at making consumers, investors, and workers better off, it excels at stalling growth. Scott has done yeoman’s work tracking the baleful impact of the FDA on pharmaceutical development and on health in general. I’m pretty skeptical of social scientists’ ability to measure regulation with stylized facts, but there is reason to think that the FDA effect generalizes.
I propose we enter the lobbyist business, with the goal of erecting similar administrative hurdles to AI research. The benefits of such hurdles are obvious. They would throw a spanner into the works of AI companies around the globe. It is impossible to say ahead of time how much they would succeed at halting AI improvements. But the possibility that they might do so—perhaps significantly—should raise the chances of human survival at least higher than 0%.
In the possible worlds with intrusive regulatory agencies, how many make no progress toward AGI at all? I’d guess in at least 5% of possible worlds, a sufficiently arbitrary and bloated bureaucracy would halt AGI indefinitely. At this level of possibility, Eliezer is probably not even in “die with dignity” mode anymore. Heck, he may even be able to resume working ultra-hard, without breaking his promise to his previous self.
Objection 1: Regulators aren’t smart enough to impede progress
In a comment that Eliezer endorses, Vaniver raises an immediate objection. Namely:
the actual result will be companies need large compliance departments in order to develop AI systems, and those compliance departments won't be able to tell the difference between dangerous and non-dangerous AI.
This logic is exactly backwards. To put a fine point on it, Vaniver thinks that because compliance departments won’t be able to distinguish between dangerous and non-dangerous AI, they will err toward permitting dangerous AI.
This is not remotely how regulators or compliance departments think. They will err on the side of caution and are more likely to put the kibosh on all research than to allow dangerous research. At near 0% survival chances, this is easily a worthwhile tradeoff.
Regulation is robust to entropy
When you’re trying to regulate the pharma industry, entropy is your enemy. Word the bill slightly wrong, hire the wrong people, get a judge who assigns damages too aggressively, and you’ve accidentally made it illegal to sell insulin or research heart disease. When you’re writing building codes, make them a tad too stringent and you’ve accidentally made it illegal for poor people to live. Most ways in which regulation can go wrong stymie growth and productive activity. That’s why for most activities in most worlds, the unregulated world outperforms the regulated world.
But our goal is to throw a spanner in the works—the more spanners, the better. Since we want to hinder progress toward AGI, the more vague wording, bizarre giveaways to special interests, and arbitrary powers granted to bureaucrats and courts, the better. To an AI Safety researcher, the worry about AGI affecting “jobs” or inducing ethnic bias is a joke; it’s patently trivial compared to the literal extinction of humanity forever.
But in the world of regulation—it’s a gift! The more trivial concerns the better! If the hacks at the Artificial Intelligence Regulatory Agency want DeepMind to fill out a lot of forms about how their latest project will impact jobs, well, that’s time away from blowing up the world. If the courts say OpenAI can be held liable for damages owing to discrimination caused by AI, that’s a day’s worth of committee meetings per week in which they won’t be creating a superintelligence.
This also makes the regulation easier to pass. Normally, drafters interested in regulation want to preserve as much productive potential in the industry as possible. When drafting environmental regulation, you have to strike a balance between protecting the water supply and letting people frack for natural gas. In political terms, you have to strike a balance between environmental and petroleum-industry lobbyists. But since we want to be as obstructive as possible, we can offer everything to everyone.
Politicians are afraid of AI’s impact on jobs? Done, put it in the bill. Democrats are worried that AI will discriminate against minorities? Excellent, added. Republicans are afraid AGI might promote non-binary gender identities? Quite right—can’t have that! (Ideally, we would make it illegal to discriminate against and to promote non-binary gender identities).
This also addresses another objection of Vaniver’s:
…if someone says "I work on AI safety, I make sure that people can't use language or image generation models to make child porn" or "I work on AI safety, I make sure that algorithms don't discriminate against underprivileged minorities", I both believe 1) they are trying to make these systems not do a thing broader society disapproves of, which is making 'AI' more 'safe', and 2) this is not attacking the core problem, and will not generate inroads to attacking the core problem.
These are concerns that an engineer actually working on AI Safety should have. But if our only goal is to throw barriers in the way of AI research, we want categories as vast and ill-defined as possible. Ideally, everything from “AGI’s impact on the flavor of chocolate” to “AGI’s effect on the wealth gap between East and West Timor” will fall under the purview of AI Safety compliance departments.
Objection 2: How is it nobody has thought about this?
I’m going to get a lot of people linking me to previous posts on LW. Some are quite good. But they are beside the point. Previous posts are about crafting intelligent regulation that will allow AI research to continue without endangering the world. That ship has sailed. Now we want the stupidest possible regulation so long as it gums up the works.
Also, previous posts are about, “if we were to regulate, how might it work?” That is not this post. This post is, should I actually set in motion a law to regulate AI?
Objection 3: This is politically impossible
One relevant objection Larks raises is that the politics are unfeasible:
We don't want the 'us-vs-them' situation that has occurred with climate change, to happen here. AI researchers who are dismissive of safety law, regarding it as an imposition and encumbrance to be endured or evaded, will probably be harder to convince of the need to voluntarily be extra-safe - especially as the regulations may actually be totally ineffective.
The only case I can think of where scientists are relatively happy about punitive safety regulations, nuclear power, is one where many of those initially concerned were scientists themselves. Given this, I actually think policy outreach to the general population is probably negative in expectation.
I agree that this is possible. But given how far underwater our put options are, it seems at least worth trying. What if it turns out to be not that hard to convince governments to kneecap their research capabilities? Does that even seem far-fetched?
FWIW, I think Larks is mistaken. An us-vs-them situation is perfectly compatible with our goal, since our goal is not to protect good AI research but to slow down all AI research. I also think Larks overestimates both the opposition of AI researchers, as well as their input in drafting legislation. (Does American regulation seem like it requires the consent of the best and brightest industry experts?)
Objection 4: What about China?
The reason you can’t just convince everyone at DeepMind to stop working on AGI is that barriers to entry are too low. If DeepMinders all went off into the wilderness to become hunter gatherers, someone else would just pick up the slack. Eliezer writes:
Even if DeepMind listened, and Anthropic knew, and they both backed off from destroying the world, that would just mean Facebook AI Research destroyed the world a year(?) later.
Some will argue that this holds at the country level. If the US regulates AI research to death, it will just mean that China blows up the world a few years later.
I have two responses to this.
One, a few years is a fantastic return on investment if we are really facing extinction.
Two, do not underestimate the institutional imperative: “…The behavior of peer companies, whether they are expanding, acquiring, setting executive compensation or whatever, will be mindlessly imitated.”
Governments work this way too. Governments like to regulate. When they see other governments pass sweeping regulatory measures, the institutional imperative pushes them to feel jealous and copy them. Empirically, I believe this is already the case: the Chinese regulatory framework is heavily influenced by copying America (and Japan).
Who should do this?
I don’t know that the people reading this forum are the best positioned to become lobbyists. For one, we still need people trying to solve the alignment problem. If you are working on that, the Ricardian dynamics almost definitely favor you continuing to do so.
However, I am a wordcel who barely knows how to use a computer. After discovering LessWrong, I instantly regretted not devoting my life to the Alignment problem, and regretted the fact that it was too late for me to learn the math necessary to make a difference. But—if the regulatory option seems sensible, I am willing to redirect my career to focus on it and to try to draft others with the right skills to do so as well.
If, however, you assure me that there are already brilliant lobbyists working day and night to do this, and it turns out that it’s harder than I thought, I will desist. Moreover, if there is something I’ve overlooked and this is a bad idea, I will desist.
But if it’s worth a try, I’ll try.
Is it?
Hi Aiyen, thanks for the clarification.
(Warning: this response is long, and much of it is covered by what Tamgen and others have said.)
The way I understand your fears, they fall into four main categories. In the order you raise them and, I think, in order of importance, these concerns are as follows:
1) Regulations tend to cause harm to people, therefore we should not regulate AI.
I completely agree that a Federal AI Regulatory Commission will impose costs in the form of human suffering. This is inevitable, since Policy Debates Should Not Appear One Sided. Maybe in the world without the FAIRC, some AI startup cures Alzheimer’s or even aging a good decade before AGI. In the world with FAIRC, we risk condemning all those people to dementia and decrepitude. This is quite similar to the FDA’s unintended consequences.
Response:
You suggest that the OP was playing reference class tennis, but to me, looking at the problem in terms of "regulators" and "harm" is the wrong reference class. These categories do not help us predict the answer to the one question we care about most: what is the impact on timelines to AGI?
If we zoom in closer to the object level, it becomes clear that the mechanism by which regulators harm the public is by impeding production. Using Paul Christiano’s rules of reference class tennis, “regulation impedes production” is a more probable narrative (i.e. supported by greater evidence, albeit not simpler) than simply “regulation always causes harm.” At the object level, we see this directly, as when the FDA fines anyone with the temerity to produce cheaper EpiPens, or the Nuclear Regulatory Commission doesn’t let anyone build nuclear reactors. Or it can happen indirectly, as a drag on innovation. To count the true cost of the FDA, we would need to know how many wondrous medical breakthroughs we’d have already made on Earth prime.

But if you truly believe that AGI represents an existential threat, and that at present innovation speeds AGI happens before Alignment, then AI progress (even when it solves Alzheimer’s) is on net a negative. The lives saved from Alzheimer’s have to be balanced against human extinction—and the balance leaves us way, way in the red. This means that all the regulatory failure modes you cite in your reply become net beneficial. We want to impede production.
By way of analogy, it would be as if Pfizer were nearly guaranteed to be working its way toward making a pill that would instantly wipe out humanity; or if nuclear power actually were as dangerous as its detractors believe! Under such scenarios, the FDA is your best friend. Unfortunately, that is where we stand with AI.
To return to the key question: once it is clear that, at a mechanical level, the things that regulatory agencies do are to impede production, it also becomes clear that regulation is likely to lengthen AGI timelines.
2) The voting public is insufficiently knowledgeable about AI.
I'm not sure I understand the objection here. The government regulates tons of things that the electorate doesn't understand. In fact, ideally that is what regulatory agencies do. They say, "hey, we are a democracy, but you, the demos, don't understand how education works, so we need a department of education." This is often self-serving patronage, but the general point stands: the way regulatory agencies come into being in practice is not because the electorate achieves subject-area expertise. I can see a populist appeal for a Manhattan project to speed up AI in order to "beat China" (or whatever enemy du jour), but this is not the sort of thing that regulators in permanent bureaucracies do. (Just look at Operation Warp Speed; quite apart from the irony in the name, the FDA and the CDC had to be dragged kicking and screaming to do it.)
3) Governments might use AI to do evil things
In your response you write:
I agree, of course, that these are all terrible evils wrought by governments. But I’m not sure what it has to do with regulation of AI. The historical incidents you cite would be relevant if the Holocaust were perpetrated by the German Bureau of Chemical Safety or if the Uighurs were imprisoned by the Chinese Ethnic Affairs Commission. Permanent regulatory bureaucracies are not and never have been responsible for (or even capable of) mission-driven atrocities. They do commit atrocities, but only by preventing access to useful goods (i.e. impeding production).
Finally, one sentence in this section sticks out and makes me think we are talking past each other. You write:
By my lights, this would be a WONDERFUL problem to have. An AI that was controllable by anyone (including Kim Jung-Un, Pol Pot, or Hitler) would, in my estimation, be preferable to a completely unaligned paper clip maximizer. Maybe we disagree here?
4) Liberal democracies are not magic, and we can't expect them to make the right decisions just because of our own political values.
I don't think my OP mentioned liberal democracy, but if I gave that impression then you are quite right I did so in error. You may be referring to my point about China. I did not mean to imply a normative superiority of American or any other democracy, and I regret the lack of clarity. My intent was to make a positive observation that governments do, in fact, mimic each other's regulatory growth. Robin Hanson makes a similar point: governments copy each other largely because of institutional and informal status associations. This observation is neutral with regard to political system. If we announce a FAIRC, I predict that China will follow, and with due haste.