ChristianKl comments on Risk Contracts: A Crackpot Idea to Save the World - Less Wrong

-2 Post author: SquirrelInHell 30 September 2016 02:36PM


Comment author: ChristianKl 30 September 2016 10:42:42PM 3 points [-]

For all the reasons Nassim Taleb wrote about, it's a bad idea to treat risk as if it could be measured that precisely.

Comment author: SquirrelInHell 01 October 2016 08:24:00PM -1 points [-]

Yes, but to implement risk budgets it's enough to know upper bounds with reasonable certainty. It is possible to implement verifiable upper bounds, especially in tech contexts such as AI.
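To make the "upper bounds are enough" point concrete, here is a minimal sketch of how a risk budget could work. All names and numbers are illustrative assumptions, not anything from the post; the only idea taken from the comment is that you sum conservative per-activity bounds rather than exact probabilities.

```python
def within_budget(upper_bounds, budget):
    """Accept a set of activities iff the sum of their per-activity
    catastrophic-risk upper bounds stays within the total budget.
    Summing the bounds is itself conservative (union bound), so no
    exact risk measurement is needed."""
    return sum(upper_bounds) <= budget

# e.g. three projects, each with a verified risk upper bound (made up)
bounds = [1e-9, 5e-10, 2e-9]
print(within_budget(bounds, budget=1e-8))  # total 3.5e-9, within budget
```

The design choice here mirrors the argument: the check errs on the side of rejecting activities, so it stays valid even when the individual bounds are loose.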

Comment author: ChristianKl 01 October 2016 08:37:03PM *  1 point [-]

It is possible to implement verifiable upper bounds

Why do you think this happens to be the case?

The upper bound is nearly always that there is some black swan scenario that makes you destroy the world.

Comment author: SquirrelInHell 01 October 2016 08:49:36PM *  0 points [-]

It is my impression that there are at least some examples in which this is done in practice: as far as I know, in rocket design you do in fact calculate such upper bounds for most components, including the software used on the on-board computers. This information is used, e.g., to decide on the amount of duplication of electronics in critical systems of the rocket. I am, however, not an expert on rockets.
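The redundancy calculation alluded to above can be sketched very simply: with n independent duplicates of a component that each fail with probability p, the system loses that function only if all n fail. Independence is a strong assumption and real avionics analyses are far more involved; the number below is made up for illustration.

```python
def redundant_failure_prob(p, n):
    """Failure probability of n independent duplicates of a component
    that each fail with probability p: all n must fail together."""
    return p ** n

p = 1e-3  # assumed per-component failure probability (illustrative)
for n in (1, 2, 3):
    print(n, redundant_failure_prob(p, n))
# duplication drives the joint failure probability down geometrically
```

This is why adding even one duplicate of a critical component can buy several orders of magnitude, under the (often violated) independence assumption.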

It seems plausible that at least in some contexts, we can indeed build safeguards with a known efficiency at reducing our overall risk. Even if this is true only sometimes, it would still be useful to have a way to calculate the maximum allowed risk levels for extinction-like events.

Incidentally, I am also of the opinion that having any kind of calculation would work better than making a non-zero extinction risk taboo, or not subject to negotiation (which seems to be the case currently).

However, I am of course not claiming that my particular idea is so great. I stand behind my opinion that we need some such system to make sensible tradeoffs on "emissions" of existential risk.

The upper bound is nearly always that there is some black swan scenario that makes you destroy the world.

Ah, I see you added this part.

I generally agree. Still, sometimes you'll want something to guide your design even if you know there might be some such black swan. You are surely not suggesting that the existence of black swans is enough to make us abandon all effort and do whatever.

Comment author: ChristianKl 01 October 2016 09:01:45PM 2 points [-]

It is my impression that there are at least some examples in which this is done in practice: as far as I know, in rocket design you do in fact calculate such upper bounds for most components, including the software used on the on-board computers. This information is used, e.g., to decide on the amount of duplication of electronics in critical systems of the rocket. I am, however, not an expert on rockets.

Of course it's possible to do risk calculations. At the same time, that doesn't mean that you are safe. Long-Term Capital Management exploded despite having low "verified upper bound" risk in the sense in which you speak about risk.

Incidentally, I am also of the opinion that having any kind of calculation would work better than making a non-zero extinction risk taboo, or not subject to negotiation (which seems to be the case currently).

Calculating risk often leads to people taking more risk, because they come to believe that the models they have accurately describe the risk.

Comment author: Gunnar_Zarncke 02 October 2016 08:24:13AM 0 points [-]

Long-Term Capital Management exploded despite having low "verified upper bound" risk in the sense you speak about risk.

But it might be that some of those banks had a blind spot there. If there had been outside parties carrying part of the risk, it might have looked different. Insurers have reinsurance for exactly that. And I think a risk market might improve on that.

Comment author: ChristianKl 02 October 2016 04:19:03PM *  1 point [-]

But it might be that some of these banks had a blind spot there.

Every model has blind spots. That's the nature of models. If you price risk by a specific model, people take less risk in your model and often take more risk that's not part of the model.

It's a systematic issue, and if you want to get deeper into it, read Antifragile or The Black Swan.

If you launch rockets, then it might be okay to assume that your risk model is good enough to optimize for. If, on the other hand, you are talking about risk from UFAI, there's no reason to assume that you understand the problem well enough to model it, and there's a good chance that you take less risk within your model but increase the chance of the black swan event that kills you.

Comment author: Gunnar_Zarncke 03 October 2016 12:03:31AM 1 point [-]

I'm quite aware of black swans. My suggestion was that some actors might know something about unknown unknowns and be able to make at least some predictions about them. Surely not actors inside systems that have opposing incentives. But reinsurers, for example, have some need to hedge these risks. These principles might be built upon. Maybe markets today already price in black swans to some degree.

Comment author: ChristianKl 03 October 2016 01:52:21PM *  1 point [-]

By the definition of unknown unknowns, they aren't known.

Long-Term Capital Management did hedge their risk with their "Nobel prize"-winning formulas.

Comment author: Gunnar_Zarncke 03 October 2016 02:19:05PM 0 points [-]

Math. It can sometimes, surprisingly, say something about the unknown.

Social effects. Long-Term Capital Management perhaps didn't want to see the limits of their approach.