Short version: In a saner world, AI labs would have to purchase some sort of "apocalypse insurance", with premiums dependent on their behavior in ways that make reckless behavior monetarily infeasible. I don't expect the Earth to implement such a policy, but it seems worth saying the correct answer aloud anyway.
Background
Is advocating for AI shutdown contrary to libertarianism? Is advocating for AI shutdown like arguing for markets that are free except when I'm personally uncomfortable with the outcome?
Consider the old adage "your right to swing your fists ends where my nose begins". Does a libertarian who wishes not to be punched need to add an asterisk to their libertarianism, because they sometimes wish to restrict their neighbor's ability to swing their fists?
Not necessarily! There are many theoretical methods available to the staunch libertarian who wants to avoid getting punched in the face, none of which require a large state government. For instance: they might believe in private security and arbitration.
This sort of thing can get messy in practice, though. Suppose that your neighbor sets up a factory that's producing quite a lot of lead dust that threatens your child's health. Now are you supposed to infringe upon their right to run a factory? Are you hiring mercenaries to shut down the factory by force, and then more mercenaries to overcome their counter-mercenaries?
A staunch libertarian can come to many different answers to this question. A common one is: "internalize the externalities".[1] Your neighbor shouldn't be able to fill your air with a bunch of lead dust unless they can pay appropriately for the damages.
(And, if the damages are in fact extraordinarily high, and you manage to bill them appropriately, then this will probably serve as a remarkably good incentive for finding some other metal to work with, or some way to contain the spread of the lead dust. Greed is a powerful force, when harnessed.)
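To make that incentive concrete, here's a toy sketch; every dollar figure is an assumption invented for illustration, not data about any real factory:

```python
# A toy illustration (made-up numbers) of how billing for an externality
# changes the factory's own cost-benefit math.
DAMAGE_BILL = 500_000          # assumed annual health damages from lead dust
LEAD_PROCESS_COST = 1_000_000  # assumed annual cost of the lead process
SAFE_PROCESS_COST = 1_200_000  # assumed annual cost of a lead-free alternative

unbilled = LEAD_PROCESS_COST              # externality dumped on the neighbors
billed = LEAD_PROCESS_COST + DAMAGE_BILL  # externality internalized
print(f"lead process, externality unbilled: ${unbilled:,}")
print(f"lead process, externality billed:   ${billed:,}")
print(f"lead-free alternative:              ${SAFE_PROCESS_COST:,}")
# Unbilled, the lead process looks $200k/year cheaper; once the $500k
# damage bill lands, the alternative is $300k/year cheaper, and greed
# does the rest.
```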
Now, there are plenty of questions about how to determine the size of the damages, and how to make sure that people pay the bills for the damages they cause. There are solutions that sound more state-like, and solutions that sound more like private social contracts and private enforcement. And I think it's worth noting that there are lots of costs that aren't worth billing for, because the infrastructure needed to bill for them isn't worth the bureaucracy and the chilling effect.
But we can hopefully all agree that noticing some big externality and wanting it internalized is not in contradiction with a general libertarian worldview.
Liability insurance
Limited liability is a risk subsidy. Liability insurance would align incentives better.
In a saner world, we'd bill people when they cause a huge negative externality (such as an oil spill), and use that money to reverse the damages.
But what if someone causes more damage than they have money? Then society at large gets injured.
To prevent this, we have insurance. Roughly: a hundred people, each of whom has a 1% risk of causing damage 10x greater than their ability to pay, can all agree (in advance) to pool their money toward the unlucky few among them, thereby allowing the broad class to take risks that none could afford individually (to the benefit of all; trade is a positive-sum game, etc.).
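Here's a minimal Monte Carlo sketch of that pooling arithmetic, using the illustrative numbers above (100 members, a 1% risk each, damage at 10x individual wealth):

```python
import random

# Monte Carlo sketch of the pooling arithmetic above. Illustrative
# assumptions: 100 members, each with wealth 1.0, each facing an
# independent 1% chance of causing damage worth 10.0 (10x their wealth).
MEMBERS = 100
WEALTH = 1.0
P_DAMAGE = 0.01
DAMAGE = 10 * WEALTH
PREMIUM = P_DAMAGE * DAMAGE  # actuarially fair premium: 0.1 per member

def pool_surplus(rng: random.Random) -> float:
    """Premiums collected minus claims paid, for one simulated year."""
    pool = MEMBERS * PREMIUM  # 10.0 collected up front
    claims = sum(DAMAGE for _ in range(MEMBERS) if rng.random() < P_DAMAGE)
    return pool - claims

rng = random.Random(0)
years = [pool_surplus(rng) for _ in range(10_000)]
solvent = sum(s >= 0 for s in years) / len(years)
print(f"years in which the pool covers all claims: {solvent:.1%}")
# At the actuarially fair premium the pool only breaks even on average
# (solvent in roughly 74% of simulated years here); a real insurer adds
# a loading on top, which is still far cheaper for each member than
# self-insuring a loss worth 10x their wealth.
```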
In a sane world, we wouldn't let our neighbors take substantive risks with our lives or property (in ways they aren't equipped to pay for), for the same reason that we don't let them steal. Letting someone take massive risks, where they reap the gains (if successful) and we pay the penalties (if not), is just theft with extra steps, and society should treat it as such. The freedom and fairness of the markets depends on preventing it, just as much as it depends on preventing theft.
Which, again, is not to say that a state is required in theory—maybe libertarians would prefer a world in which lots of people sign onto a broad "trade fairly and don't steal" social contract, and this contract is considered table-stakes for trades among civilized people. In which case, my point is that this social contract should probably include clauses saying that people are liable for the damage they cause, and that the same enforcement mechanisms that crack down on thieves also crack down on people imposing risks (on others) that they lack the funds and/or insurance to cover.
Now, preventing people from "imposing risks" unless they "have enough money or insurance to cover the damages" is in some sense fundamentally harder than preventing simple material theft, because theft is relatively easy to detect, while risk analysis is hard. But in theory, ensuring that everyone has liability insurance is an important part of maintaining a free market, if you don't want to massively subsidize huge risks to your life, liberty, and property.
Apocalypse insurance
Hopefully by now the relevance of these points to existential risk is clear. AI companies are taking extreme risks with our lives, liberty, and property (and those of all potential future people), by developing AI while having no idea what they're doing. (Please stop.)
And in a sane world, society would be noticing this—perhaps by way of large highly-liquid real-money prediction markets—and demanding that the AI companies pay out "apocalypse insurance" in accordance with that risk (using whatever social coordination mechanisms they have available).
When I've made this claim in person recently, people have regularly objected: but insurance doesn't pay out until the event happens! What's the point of demanding that Alice have liability insurance that pays out in the event Alice destroys the world? Any insurance company should be happy to sell that insurance to Alice for very cheap, because they know that they'll never have to pay out (on account of being dead in the case where Alice kills everyone).
The answer is that apocalypse insurance—unlike liability insurance—must pay out in advance of the destruction of everyone. If somebody wishes to risk killing you (with some probability), there's presumably some amount of money they could pay you now, in exchange for the ability to take that risk.
(And before you object "not me!", observe that civilization happily flies airplanes over your head, which have some risk of crashing and killing you—and a staunch libertarian might say you should bill civilization for that risk, in some very small amount proportional to the risk that you take on, so as to incentivize civilization to build safer airplanes and offset the risk.)
The guiding principle here is that trade is positive-sum. When you think you can make a lot of money by risking my life (e.g., by flying planes over my house), and I don't want my life risked, there's an opportunity for mutually beneficial trade. If the risk is small enough and the amount of money is big enough then you can give me a cut of the money, such that I prefer the money to the absence-of-risk, and you still have a lot of money left over. Everyone's better off.
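To put toy numbers on that trade (every figure below is a made-up assumption, not a real aviation statistic):

```python
# Toy numbers for the overflight trade; none of these are real aviation
# statistics, just assumptions to show the shape of the deal.
RISK_PER_FLIGHT = 1e-9      # assumed chance one flight kills a given person
VALUE_OF_LIFE = 10_000_000  # assumed dollar value that person puts on their life
PEOPLE_BELOW = 100_000      # assumed number of people under the flight path
AIRLINE_PROFIT = 20_000     # assumed airline profit per flight

per_person_cut = RISK_PER_FLIGHT * VALUE_OF_LIFE
risk_bill = per_person_cut * PEOPLE_BELOW
print(f"fair payment per person:   ${per_person_cut:.2f}")              # $0.01
print(f"total risk bill per flight: ${risk_bill:,.2f}")                 # $1,000.00
print(f"airline surplus:            ${AIRLINE_PROFIT - risk_bill:,.2f}") # $19,000.00
# The risk is tiny relative to the gains, so there's room for a cut that
# leaves everyone better off. Scale RISK_PER_FLIGHT up toward
# apocalypse-sized probabilities and the surplus goes sharply negative.
```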
This is the relationship that society "should" have with AI developers (and all technologists that risk the lives and livelihoods of others), according to uncompromising libertarian free-market ideals, as far as I can tell.
With the caveat that the risk is not small, and that the AI developers are risking the lives of everyone to a very significant degree, and that's expensive.
In short: apocalypse insurance differs from liability insurance in that it should be paid out to each and every citizen (that developers put at risk) immediately, seen as a trade in exchange for risking their life and livelihood.
In other words: from a libertarian perspective, it makes really quite a lot of sense (without compromising your libertarian ideals even one iota) to look at the AI developers and say "fucking stop (you are taking far too much risk with everyone else's lives; this is a form of theft until and unless you can pay all the people whose lives you're risking, enough to offset the risk)".
Caveats
In a sane world, the exact calculations required for apocalypse insurance to work seem fairly subtle to me. To name a few considerations:
- An AI company should be able to make some of its payments (to the people whose lives it risks, in exchange for the ability to risk those lives) by way of fractions of the value that their technology manages to capture.
- Except, that's complicated by the fact that anyone doing the job properly shouldn't be leaving their fingerprints on the future. The cosmic endowment is not quite theirs to give (perhaps they should be loaning against their share of it?).
- And it's also complicated by the question of whether we're comfortable letting AI companies loan against all the value their AI could create, versus letting them loan against the sliver of that value that comes counterfactually from them (given that some other group might come along a little later that's a little safer and offer the same gains).
- There are big questions about how to assess the risk (and of course the value of the promised-future-stars depends heavily on the risk).
- There are big questions about whether future people (who won't get to exist if life on earth gets wiped out) are relevant stakeholders here, and how to bill people-who-risk-the-world on their behalf.
And I'm not trying to flesh out a full scheme here. I don't think Earth quite has the sort of logistical capacity to do anything like this.
My point, rather, is something like: These people are risking our lives; there is an externality they have not internalized; attempting to bill them for it is entirely reasonable regardless of your ideology (and in particular, it fits into a libertarian ideology without any asterisks).
Why so statist?
And yet, for all this, I advocate for a global coordinated shutdown of AI, with that shutdown enforced by states, until we can figure out what we're doing and/or upgrade humans to the point that they can do the job properly.
This is not, however, to be confused with preferring government intervention as my ideal outcome.
Nor is it to be confused with expecting it to work, given the ambitious actions required to hit the brakes, and given the many ways such actions might go wrong.
Rather, I spent years doing technical research in part because I don't expect government intervention to work here. That research hasn’t panned out, and little progress has been made by the field at large; so I turn to governments as a last resort, because governments are the tools we have.
I'd prefer a world cognizant enough of the risk to be telling AI companies that they need to either pay their apocalypse insurance or shut down, via some non-coercive coordinated mechanism (e.g. related to some basic background trade agreements that cover "no stealing" and "cover your liabilities", on pain not of violence but of being unable to trade with civilized people). The premiums would go like their risk of destroying the world times the size of the cosmic endowment, and they'd be allowed to loan against their success. Maybe the insurance actuaries and I wouldn't see exactly eye-to-eye, but at least in a world where 93% of the people working on the problem say there's a 10+% chance of it destroying a large fraction of the future’s value, this non-coercive policy would do its job.
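To see the shape of those premiums, here's a deliberately crude sketch; every figure below is an invented assumption, and pricing these quantities well is precisely the hard actuarial problem:

```python
# A deliberately crude sketch of the premium formula above: premium ~
# (risk of destroying the world) x (value of the cosmic endowment),
# with loans allowed against success. All numbers are assumptions.
P_DOOM = 0.10             # assumed risk of the lab destroying the world
ENDOWMENT_VALUE = 1e15    # assumed present dollar value of the future
CAPTURED_FRACTION = 0.01  # assumed share of that value the lab could
                          # credibly loan against if it succeeds

fair_premium = P_DOOM * ENDOWMENT_VALUE
collateral = (1 - P_DOOM) * CAPTURED_FRACTION * ENDOWMENT_VALUE
print(f"fair premium:        ${fair_premium:,.0f}")
print(f"loanable collateral: ${collateral:,.0f}")
print("solvent:", collateral >= fair_premium)
# With these numbers the premium ($100T) dwarfs the loanable collateral
# ($9T): the lab can't cover the risk it imposes, and must either drive
# P_DOOM down until it can, or shut down.
```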
In real life, I doubt we can pull that off (though I endorse steps in that direction!). Earth doesn't have that kind of coordination machinery. It has states. And so I expect we'll need some sort of inter-state alliance, which is the sort of thing that has actually worked on Earth before (e.g. in the case of nukes), and which hooks into Earth's existing coordination machinery.
But it still seems worth saying the principled solution aloud, even if it's not attainable to us.
- ^
A related observation here is that the proper libertarian free-market way to think of your neighbor's punches is not to speak of forcibly stopping him via a private security company, but to think of charging him for the privilege. My neighbors are welcome to punch me, if they're willing to pay my cheerful price for it! Trade can be positive-sum! And if they're not willing to pony up the cash, then punching me is theft, and should be treated with whatever other mechanisms we're imagining that enforce the freedom of the market.
I am sorry to say this on a forum where many people were likely raised in a socio-cultural environment where libertarian ideas are deeply rooted. My voice will sound dissonant here, and I appeal to your open-mindedness.
I think there are strong limitations to the ideas developed in the OP's proposal. Insurance is the mutualization of risk: a statistical approach that relies on the ability to assess a risk. It works for risks that happen frequently and have a clear typology, like car accidents, storms, etc. Even in those cases there is always a coverage ceiling. But exceptional and maximally hazardous risks, like war damage or nuclear accidents, cannot be insured and are systematically subject to contractual exclusions. There is no apocalypse insurance because the risk cannot be assessed by actuaries. Even if you created such an insurance, it would be artificial and not rationally assessed, with a coverage ceiling that makes it useless. There is even the risk that it gives the illusion that everything is fine and acceptable. The insurance mechanism does not encourage responsibility but, a contrario, irresponsibility. On top of that, compensation through money is a legal fiction; in real life, money isn't everything of worth. In the most dramatic cases the real damage is never repaired (loss of your child, loss of your legs, loss of your own life); the compensation is more symbolic, "better than nothing".
As a matter of fact, I have professional knowledge of law and insurance, from the inside, and very practical experience of what I am describing. Libertarianism encourages an approach that is very theoretical and economics-centered, which is honestly interesting, but also somewhat disconnected from reality. One ordinary example among many: a negligent furniture mover destroyed family goods inherited across generations, without a word of apology, because "there are insurances for that". In the end, after many months of procedure and innumerable hours of the victim's time and energy, the mover's insurance paid almost nothing, because of course old family goods have no economic value to the experts. When you see how insurance actually works in real cases, and how it often encourages negligent and irresponsible behavior, it is very difficult to be enthusiastic about the idea that AI existential hazard could be managed by subscribing to an insurance policy.