Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: fortyeridania 15 November 2017 02:18:41AM 0 points

Artificial intelligence (AI) is useful for optimally controlling an existing system, one with clearly understood risks. It excels at pattern matching and control mechanisms. Given enough observations and a strong signal, it can identify deep dynamic structures much more robustly than any human can and is far superior in areas that require the statistical evaluation of large quantities of data. It can do so without human intervention.

We can leave an AI engine in day-to-day charge of such a system, automatically self-correcting, learning from mistakes, and meeting the objectives of its human masters.

This means that risk management and micro-prudential supervision are well suited for AI. The underlying technical issues are clearly defined, as are both the high- and low-level objectives.

However, the very same qualities that make AI so useful for the micro-prudential authorities are also why it could destabilise the financial system and increase systemic risk, as discussed in Danielsson et al. (2017).


Artificial intelligence is useful in preventing historical failures from repeating and will increasingly take over financial supervision and risk management functions. We get more coherent rules and automatic compliance, all with much lower costs than current arrangements. The main obstacle is political and social, not technological.

From the point of view of financial stability, the opposite conclusion holds.

We may miss out on the most dangerous type of risk-taking. Even worse, AI can make it easier to game the system. There may be no solution to this, whatever the future trajectory of technology. The computational burden facing an AI engine will always be far greater than that facing those who seek to undermine it, not least because of endogenous complexity.

Meanwhile, the very formality and efficiency of the risk management/supervisory machine also increases homogeneity in belief and response, further amplifying pro-cyclicality and systemic risk.

The end result of the use of AI for managing financial risk and supervision is likely to be lower volatility but fatter tails; that is, lower day-to-day risk but more systemic risk.
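This trade-off can be illustrated with a toy simulation (not from the comment; the regime parameters below are made-up assumptions purely for illustration). A "managed" return series with damped day-to-day noise but a rare, large common-mode failure shows a lower standard deviation than a plain Gaussian baseline, yet a much higher probability of extreme moves:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # simulated trading days

# Baseline regime: plain Gaussian daily returns.
baseline = rng.normal(0.0, 1.0, n)

# Hypothetical "AI-managed" regime: calm days are damped,
# but a rare systemic event produces large synchronized moves.
crash = rng.random(n) < 0.001                 # ~1-in-1000-day systemic event
managed = np.where(crash,
                   rng.normal(0.0, 10.0, n),  # fat-tailed crisis days
                   rng.normal(0.0, 0.5, n))   # damped calm days

print(f"volatility:  baseline {baseline.std():.2f}  managed {managed.std():.2f}")
print(f"P(|r| > 4):  baseline {(np.abs(baseline) > 4).mean():.5f}  "
      f"managed {(np.abs(managed) > 4).mean():.5f}")
```

The managed series has roughly 60% of the baseline's volatility but an order of magnitude more 4-sigma days: lower day-to-day risk, fatter tails.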

[Link] Artificial intelligence and the stability of markets

1 fortyeridania 15 November 2017 02:17AM
Comment author: IlyaShpitser 24 October 2017 09:51:55PM 2 points

"When it comes to needles to stick my new kiddo with, I'm not really being persuaded to do more than the intersection of vaccinations between similar nations."

You don't know enough to decide this. What counts as "similar" (climate, culture, disease spectrum)? Do you know the history of their immunization laws?

Seems to me you first decided this is an icky procedure, and it hurts your kid, and you feel protective. Then you went looking for reasons not to do it. Immunization has a free-rider aspect, because of herd immunity. So you may well get away with it, in terms of your kid's health, but "people like you" (defectors in PD) are a problem.

If you are an evil pharma-corp, vaccines are a terrible way to be evil.

C/D calculations in public health are real, but this is one of those things where the only way to be effective is not to break the phalanx formation.

Comment author: fortyeridania 25 October 2017 02:16:34AM 0 points

I agree with most of what you've said, but here's a quibble:

"If you are an evil pharma-corp, vaccines are a terrible way to be evil."

Unless you're one of the sellers of vaccines, right?

Comment author: Dagon 17 October 2017 05:30:36PM 0 points

Yup, looks that way. LW 2.0 is running, but seems to have gone further toward the "publish thoughts, get some comments" and away from the conversational feel we had here.

So it goes.

Comment author: fortyeridania 18 October 2017 03:41:48AM 0 points

That's too bad; it probably doesn't have to be that way. If you can articulate what infrastructural features of 1.0 are missing from 2.0, perhaps the folks at 2.0 can accommodate them in some way.

Economics of AI conference from NBER

1 fortyeridania 27 September 2017 01:45AM

The speaker list (including presenters and moderators) includes many prominent names in the economics world, including:

And others with whom you might be more familiar than I.

H/T Marginal Revolution

Comment author: pepe_prime 13 September 2017 01:20:21PM 10 points

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: fortyeridania 18 September 2017 05:37:45AM 9 points


Comment author: fortyeridania 18 September 2017 03:15:14AM 4 points

[Link] Stanislav Petrov has died (2017-05-19)

8 fortyeridania 18 September 2017 03:13AM
Comment author: Erfeyah 15 September 2017 09:24:01PM 2 points

I was wondering if someone can point me to a good LW article/refutation of Searle's Chinese room argument, and of consciousness arguments in general. A search turns up a lot of articles mentioning it, but I assume it is addressed in some form in the Sequences?

Comment author: fortyeridania 18 September 2017 02:22:38AM 1 point

I don't remember if the Sequences cover it. But if you haven't already, you might check out SEP's section on Replies to the Chinese Room Argument.

Comment author: ignoranceprior 11 September 2017 01:50:57AM 2 points

According to this study, the law appears to be inaccurate for academic articles.

Comment author: fortyeridania 15 September 2017 07:23:27AM 0 points
  • Scholarly article

  • Title: "Do scholars follow Betteridge's Law?"

  • The answer is no.

