ChristianKl comments on Risk Contracts: A Crackpot Idea to Save the World - Less Wrong
Comments (35)
For all those reasons Nassim Taleb wrote about, it's a bad idea to treat risk like it can be that precisely measured.
Yes, but to implement risk budgets it's enough to know upper bounds with reasonable certainty. Verifiable upper bounds are possible to implement, especially in technological contexts such as AI.
Why do you think this happens to be the case?
The upper bound is nearly always that there is some black swan reason that makes you destroy the world.
It is my impression that there are at least some examples in which this is done in practice: as far as I know, in rocket design you do in fact calculate failure probabilities for most components, including the software on the on-board computers. This information is used, for example, to decide how much duplication of electronic components the critical systems of the rocket need. I am, however, not an expert on rockets.
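To make the redundancy point concrete, here is a toy sketch (my own illustration, not from any actual rocket program) of the standard arithmetic: if a component fails independently with probability p, running n duplicates in parallel (any one suffices) drives the failure probability down to p**n, and independent critical subsystems in series combine the other way.

```python
def parallel_failure(p: float, n: int) -> float:
    """Probability that all n redundant copies fail, assuming independent failures."""
    return p ** n

def series_failure(ps: list[float]) -> float:
    """Probability that at least one of several critical subsystems fails."""
    survive = 1.0
    for p in ps:
        survive *= 1.0 - p
    return 1.0 - survive

# A single flight computer with a 1% failure chance, triplicated:
triple = parallel_failure(0.01, 3)  # 1e-6

# Three such triplicated subsystems, all needed for the mission:
mission_risk = series_failure([triple] * 3)  # roughly 3e-6
```

The independence assumption is exactly where black swans hide, of course; a common-cause failure (shared power supply, shared software bug) breaks the p**n estimate.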
It seems plausible that at least in some contexts we can build safeguards whose efficiency at reducing our overall risk is known. Even if this is true only sometimes, it would then be useful to have a way to calculate the maximum allowed risk levels for extinction-like events.
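The risk-budget idea above could be sketched as follows. This is a hypothetical illustration under strong assumptions (independent risks, honestly reported verified upper bounds), not a worked-out proposal: each project reports an upper bound on the risk it adds, and the budget holder checks the combined bound against a global cap.

```python
def combined_upper_bound(bounds: list[float]) -> float:
    """Upper bound on total risk from independent sources: 1 - prod(1 - b_i)."""
    survive = 1.0
    for b in bounds:
        survive *= 1.0 - b
    return 1.0 - survive

def within_budget(bounds: list[float], cap: float) -> bool:
    """Do the reported per-project bounds fit under the global risk cap?"""
    return combined_upper_bound(bounds) <= cap

print(within_budget([1e-6, 2e-6, 5e-7], cap=1e-5))  # True
```

Everything interesting is hidden in the inputs: whether the per-project bounds are actually verifiable is the whole debate in this thread.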
Incidentally, I am also of the opinion that having any kind of calculation would work better than treating non-zero extinction risk as taboo, or as not subject to negotiation (which seems to be the case currently).
Of course, I am not claiming my particular idea is so great. I do stand behind my view that we need some such system to make sensible tradeoffs on "emissions" of existential risk.
Ah, I see you added this part.
I generally agree. Still, sometimes you'll want something to guide your design even if you know that there might be some such black swan. You are surely not suggesting that the existence of black swans is enough to make us abandon all effort and do whatever.
Of course it's possible to do risk calculations. At the same time, that doesn't mean you are safe. Long-Term Capital Management exploded despite having low "verified upper bound" risk in the sense you describe.
Calculation of risk often leads to people taking more risk because they believe that the models of the risk they have accurately describe the risk.
But it might be that some of those banks had a blind spot there. If there had been outside parties estimating and carrying part of the risk, things might have looked different. Insurers have reinsurance for that, and I think a risk market might improve on it.
Every model has blind spots. That's the nature of models. If you price risk by a specific model, people take less risk in your model and often take more risk that's not part of the model.
It's a systemic issue; if you want to go deeper into it, read Antifragile or The Black Swan.
If you launch rockets, then it might be okay to assume that your risk model is good enough to optimize for. If, on the other hand, you are talking about risk from UFAI, there's no reason to assume you understand the problem well enough to model it, and there's a good chance that you take less risk inside your model while increasing the chance of the Black Swan event that kills you.
I'm quite aware of Black Swans. My suggestion was that some actors might know about unknown unknowns and be able to make at least some predictions about them. Surely not inside systems that have opposing incentives. But reinsurers, for example, have some need to hedge these, and those principles might be built upon. Maybe markets today already price in black swans to some degree.
By the definition of unknown unknowns, they aren't known.
Long-Term Capital Management did hedge their risk with their "Nobel Prize"-winning formulas.
Math can sometimes, surprisingly, say something about the unknown.
Social effects: Long-Term Capital Management perhaps didn't want to see the limits of its approach.