On the 27th of February, I, like many of us, became fully aware of the danger humanity was facing (let's thank 'Seeing the Smoke') and put my cards on the table with this:

This is partly a test run of how we'd all feel and react during a genuine existential risk. Metaculus currently has it as a 19% chance of spreading to billions of people, a disaster that would certainly result in many millions of deaths, probably tens of millions. Not even a catastrophic risk, of course, but this is what it feels like to be facing down a 1/5 chance of a major global disaster in the next year. It is an opportunity to understand on a gut level that this is possible, yes, real things exist which can do this to the world. And it does happen.
It's worth thinking that specific thought now because this particular epistemic situation, a 1/5 chance of a major catastrophe in the next year, will probably arise again over the coming decades. I can easily imagine staring down a similar probability of dangerously fast AGI takeoff, or a nuclear war, a few months in advance.

Well, now a few months have gone by and much has changed. The natural question to ask is: what general lessons have we learned, compared to that 'particular epistemic situation', now that we're in a substantially different one? What does humanity's response to the coronavirus pandemic so far imply about how we might fare against genuine X-risks?

At a first pass, the answer to that question seems obvious - not very well. The response of most usually well-functioning governments (I’m thinking mainly of Western Europe here) has been slow, held back by an unwillingness to commit all resources to a strategy and accept its trade-offs, and sluggish to respond to changing evidence. Advance preparation was even worse. This post gives a good summary of some of those more obvious lessons for X-risks, focussing specifically on slow AI takeoff.

As to what we ultimately blame for this slowness - Scott Alexander and Toby Ord gave as good an account as anyone (before the pandemic) in blaming a failure to understand expected value and the availability heuristic.

However, many of us predicted in advance the dynamics that would lead countries to put forward a slow and incoherent response to the coronavirus. What I want to explore now is what has changed epistemically since I wrote that comment - what has happened since that has surprised many of us who have internalised the truth of civilisational inadequacy? I am looking for generalised lessons we can take from this pandemic, rather than for specific things we have learnt about the pandemic itself in the last few months. I believe there is one such lesson that is surprising, and I'd like to convince you of it.

Underweighting Strong Reactions

My claim is that in late February/early March, many of us did overlook or underweight the possibility that many countries would eventually react strongly to coronavirus - with measures like lockdowns that successfully drove R under 1 for extended periods, or with individual action that holds R near 1 in the absence of any real government intervention. This meant we placed too much weight on coronavirus going uncontained, and were surprised when in many countries it did not.

Whether the strong reaction is fully or only partially effective remains to be seen, but the fact that this reaction occurred was surprising to many of us, relative to what we believed at the start of all this - I know that it surprised me.

I will first present examples of predictions, some from people on LessWrong or adjacent groups and some from government scientists, which all either foretold worse outcomes than we have seen by now - feebler results from interventions, lower compliance, or interventions never being implemented at all - or predicted bad outcomes that are not yet ruled out but now look much less likely than they did.

I will then put forward an explanation for these mistakes - something I named (in April) the 'Morituri Nolumus Mori' ('We who are about to die don't want to') effect, in reference to the Discworld novel The Last Hero: most governments and individuals have a consistent, short-term aversion to danger which is stronger than many of us suspected, though not sustainable in the absence of an imminent threat. If I am correct that many of us (and also many scientists and policymakers) missed the importance of the MNM effect, it should increase our confidence that, in situations where there is some warning, fairly basic features of our psychology and institutions do get in the way of the very worst outcomes. However, the MNM effect is limited and will not help in any situation that requires advance planning, or responses above and beyond immediate incentives.

I consider the MNM effect to be mostly compatible with Zvi's 'Governments Most Places Are Lying Liars With No Ability To Plan or Physically Reason' (though I do think that claim is too America-centric, and 'no ability to plan/reason' is hyperbole if applied to Europe or even the UK, let alone e.g. Taiwan). The MNM effect, rather than clever planning or reasoning, is what we credit for why things aren't as bad as they could be - the differences between e.g. America and Germany come down to having any level of planning at all, not to better planning.

Noticing Confusion

Things (especially in the US) are sufficiently bad right now that it is difficult to remember that many of us put significant weight on things already being worse by now than they currently are - but, as I will show, that was the case.

Some people's initial predictions were that R would not be driven substantially below 1 for any extended period, anywhere, except by a Wuhan-style lockdown. Robin Hanson seemingly claimed this on March 19: 'So even if you see China policy as a success, you shouldn't have high hopes if your government merely copies a few surface features of China policy.' In that article Hanson was clearly referring to 'most governments' that aren't China as being unlikely to suppress without adopting a deep mimicry of China's policy - including welding people into their flats and forcible case isolation. Yet two months later there are many countries, from New Zealand to Germany, which have copied only some features of the Chinese policy while nonetheless achieving initial suppression.

More recently, Hanson updated to speaking more specifically about the USA, saying (in response to a graphic showing several examples of successful suppression in Europe and Asia): 'Yes, you know that other nations have at times won wars. Even so, you must decide if to choose peace or war.' Going from 'most western countries' to 'America' counts as an optimistic update.

But mitigation measures (which Hanson calls ‘peace’) have also worked out less disastrously than our worst fears suggested because of stronger-than-expected individual action. See e.g. this article about Sweden:

Ultimately, Sweden shows that some of the worst fears about uncontrolled spread may have been overblown, because people will act themselves to stop it. But, equally, it shows that criticisms of lockdowns tend to ignore that the real counterfactual would not be business as usual, nor a rapid attainment of herd immunity, but a slow, brutal, and uncontrolled spread of the disease throughout the population, killing many people. Judging from serological data and deaths so far, it is the speed of deaths that people who warned in favour of lockdowns got wrong, not the scale.

This remark about Sweden is applicable more generally - the worst case scenario for almost every country seems to be R around 1.5 at this point - see this map from Epidemic Forecasting. True explosive spread is very rare across the world, but was being discussed as a real possibility in early March even in Europe. Again, the response is not good enough to outright reverse the unfolding disaster, but it is still strong enough to arrest explosive spread.

Focussing on the UK, which had a badly delayed response and a highly imperfect lockdown, we can see that even there R was driven substantially below 1, and hospital admissions with Covid-19 (the most reliable short-term proxy for infection throughout the overall pandemic) are at 13% of their peak. London did not exceed its ICU capacity, despite predictions from government modellers that it would.

Another way of getting at this disjoint is just to look at the numbers and see if we still expect the same number of people to die. Wei Dai initially (1st March) predicted that 190-760 million people would eventually die from coronavirus, with 50% of the world infected. A more recent top-rated comment by Orthonormal points out that current evidence weighs against that. Good Judgment rates the probability that more than 80 million will die at 1%. A recent paper by Imperial College suggested that the Europe-wide lockdowns have so far saved 3 million lives, without even accounting for the fact that deaths in an unmitigated scenario would have been higher due to a lack of intensive care beds. Regardless of what happens next, would we have predicted that in early March?
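A quick sanity check on that early prediction (my arithmetic, assuming a world population of roughly 7.8 billion):

$$0.5 \times 7.8\,\text{bn} \approx 3.9\,\text{bn infected}, \qquad \frac{190\,\text{M}}{3.9\,\text{bn}} \approx 5\%, \qquad \frac{760\,\text{M}}{3.9\,\text{bn}} \approx 19\%$$

So it amounted to an IFR of roughly 5-19% at 50% infection - far above what the later evidence discussed below supports.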

These mistakes have not been limited to the LessWrong community - one of the reasons for the aforementioned delay before the UK called its lockdown was that UK behavioural scientists advising the government were near certain that stringent lockdown measures would not be obeyed to the necessary degree, and that lockdowns in the rest of Europe had been implemented 'more for solidarity reasons'. In the end, compliance turned out to be 'higher than expected'. The attitude in most of Europe in early March was that full lockdowns were completely infeasible. Then they were implemented.

Another way of getting at this observation is to note the people who have publicly recorded their surprise or shift in belief as these events have unfolded. I have written several comments with earlier versions of this claim, starting two months ago. Wei Dai notably updated in the direction of thinking coronavirus would reach a smaller fraction of the population, after reading this prescient blogpost:

The interventions of enforced social distancing and contact tracing are expensive and inevitably entail a curtailment of personal freedom. However, they are achievable by any sufficiently motivated population. An increase in transmission *will* eventually lead to containment measures being ramped up, because every modern population will take draconian measures rather than allowing a health care meltdown. In this sense COVID-19 infections are not and will probably never be a full-fledged pandemic, with unrestricted infection throughout the world. It is unlikely to be allowed to ever get to high numbers again in China for example. It will always instead be a series of local epidemics.

In a recent podcast, Rob Wiblin and Tara Kirk Sell were discussing what they had recently changed their minds about. They picked out the same thing:

Robert Wiblin: Has the response affected your views on what policies are necessary or should be prioritized for next time?
Tara Kirk Sell: The fact that “Stay-at-home orders” are actually possible in the US and seem to work… I had not really had a lot of faith in that before and I feel like I’ve been surprised. But I don’t want “Stay-at-home orders” to be the way we deal with pandemics in the future. Like great, it worked, but I don’t want to do this again.

Or this from Zvi:

5. Fewer than 3 million US coronavirus deaths: 90%
I held. Again, we saw very good news early, so to get to 3 million now we’d need full system collapse to happen quickly. It’s definitely still possible, but I’m guessing we’re now more like 95% to avoid this than 90%.

Lastly, we have the news from the current hardest-hit places, like Manhattan, which have already hit partial herd immunity and show every sign of being able to contain coronavirus going forward even with imperfect measures.

The Morituri Nolumus Mori effect

Many of these facts (in particular the reason that 100 million plus dead is effectively ruled out) have multiple explanations. For one, the earliest data on coronavirus implied the hospitalization rate was 10-20% for all age groups, and we now know it is substantially lower (see the tweet by an author of the Imperial College paper, which estimated a hospitalization rate of 4.4%). This means that even if hospitals were entirely unable to cope with the number of patients, the IFR would be in the range of 2%, not the 20% initially implied.
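To spell out that arithmetic (my gloss - the untreated fatality fraction below is an illustrative assumption, not a figure from the post): the no-treatment IFR is bounded by the hospitalization rate $h$, since the worst case is roughly that everyone who needs hospital care dies without it:

$$\text{IFR}_{\text{no care}} \approx h \cdot f_{\text{untreated}} \le h$$

With $h \approx 4.4\%$ and, say, half of untreated severe cases dying, this gives $\text{IFR} \approx 2\%$, whereas the early estimates of $h$ at 10-20% allowed an IFR nearer 20%.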

However, the rest of our information about the characteristics of the virus in early March - the estimates of R0 and of the 'standard' IFR - was fairly close to the mark. Our predictions were working off reasonable data about the virus. Any prediction made then about the number of people who would be infected isn't affected by this hospitalization-rate confounder, nor is any prediction about what measures would be implemented. So there must be some other reason for these mistakes - and a common thread among nearly all the inaccurate pessimistic predictions was that they underestimated the forcefulness, though not the level of forethought or planning, behind mitigation or suppression measures. As it is written,

"Brains don't work that way. They don't suddenly supercharge when the stakes go up - or when they do, it's within hard limits. I couldn't calculate the thousandth digit of pi if someone's life depended on it."

The Morituri Nolumus Mori effect, as a reminder, is the thesis that governments and individuals have a consistent, short-term reaction to danger which is stronger than many of us suspected, though not sustainable in the absence of an imminent threat. This effect is just such a hard limit - it can’t do very much except work as a stronger than expected brake. And something like it has been proposed as an explanation, not just by me two months ago but by Will MacAskill and Toby Ord, for why we have already avoided the worst disasters. Here’s Toby’s recent interview:

Learning the right lessons will involve not just identifying and patching our vulnerabilities, but pointing towards strengths we didn’t know we had. The unprecedented measures governments have taken in response to the pandemic, and the public support for doing so, should make us more confident that when the stakes are high we can take decisive action to protect ourselves and our most vulnerable. And when faced with truly global problems, we are able to come together as individuals and nations, in ways we might not have thought possible. This isn’t about being self-congratulatory, or ignoring our mistakes, but in seeing the glimmers of hope in this hardship.

Will MacAskill made reference to the MNM effect in a pre-coronavirus interview, explaining why he puts the probability of X-risks relatively low.

Second then, is just thinking in terms of the rational choice of the main actors. So what’s the willingness to pay from the perspective of the United States to reduce a single percentage point of human extinction whereby that just means the United States has three hundred million people. How much do they want to not die? So assume the United States don’t care about the future. They don’t care about people in other countries at all. Well, it’s still many trillions of dollars is the willingness to pay just to reduce one percentage point of existential risk. And so you’ve got to think that something’s gone wildly wrong, where people are making such incredibly irrational decisions.

Bill Gates also referred to this effect.

I also think that the MNM effect is the main reason why both Metaculus and superforecasters consistently predicted that deaths would stay below 10 million, implying a very slow burn - neither suppression nor full herd immunity - right across most of the world.

The Control System

From Slatestarcodex:

Is there a possibility where R0 is exactly 1? Seems unlikely – one is a pretty specific number. On the other hand, it’s been weirdly close to one in the US, and worldwide, for the past month or two. You could imagine an unfortunate control system, where every time the case count goes down, people stop worrying and go out and have fun, and every time the case count goes up, people freak out and stay indoors, and overall the new case count always hovers at the same rate. I’ve never heard of this happening, but this is a novel situation.

One more speculative consequence of the MNM effect is that a reactive, strong push against uncontrolled pandemic spread is a good explanation for why Rt tends to approach 1 in countries without a coordinated government response, like the United States, while the more coordinated the response, the further below 1 Rt can be pushed. A priori, we might expect some 'minimal default level' of response that decreases Rt from an R0 of 3-4 to some much lower value - but why should it settle at almost exactly 1? It's not a coincidence, as Zvi points out.

Whenever something lands almost exactly on the only inflection point, in this case R0 of one where the rate of cases neither increases nor decreases, the right reaction is suspicion.
In this case, the explanation is that a control system is in play. People are paying tons of attention to when things are ‘getting better’ or ‘getting worse’ and adjusting behaviour, both legally required actions and voluntary actions.

The MNM effect is apparently so predictable that, with short-ish term feedback, it can form a control system. The other end of this control system is all the usual cognitive and institutional biases that prevent us from taking these events seriously and actually planning for them.
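To make the control-system picture concrete, here is a minimal simulation sketch (mine, not from any of the quoted sources; all parameter values are illustrative assumptions): a standard discrete-time SIR model in which people cut their contact rate in proportion to current prevalence. Despite an unmitigated R0 of 4, the effective reproduction number settles near 1:

```python
# Minimal sketch of the behavioural 'control system' (illustrative parameters).
# A discrete-time SIR model where transmission falls as current prevalence rises.
N = 1_000_000                # population size
beta0, gamma = 0.4, 0.1      # unmitigated transmission / recovery rates (R0 = 4)
k = 200                      # assumed strength of the behavioural reaction
S, I = N - 100, 100          # susceptibles and a small seed of infections

rts = []
for day in range(300):
    beta = beta0 / (1 + k * I / N)        # people pull back as prevalence rises
    rts.append((beta / gamma) * (S / N))  # effective reproduction number Rt
    new_infections = beta * S * I / N
    S, I = S - new_infections, I + new_infections - gamma * I

# After the initial transient, Rt hovers near 1 even though R0 = 4.
print(f"mean Rt over days 100-300: {sum(rts[100:]) / len(rts[100:]):.2f}")
```

Adding a lag to the feedback (people reacting to deaths reported weeks after infection, say) would tend to turn this smooth convergence into an oscillation around 1 - the pattern one commenter notes below for the US.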

It is possible this is the first time such a control system has formed to mitigate a widespread disaster. Disasters of this size are rare throughout history. Add to this the fact that such control systems can only form when the threat unfolds and changes over several months, giving people time to veer between incaution and caution. Meanwhile, the short-term feedback which governments and people can access about the progress of the epidemic is relatively new - better data collection and mass media make modern populations much more sensitive to the current level of threat than those throughout history. Remembering that no one knows exactly where or when the Spanish Flu began highlights that good real-time monitoring of a pandemic is an extremely new thing.

In our current situation of equilibrium created by a control system, the remaining uncertainties are: can we do better than the equilibrium position? (a sociological and political question) and how bad is the equilibrium position? (mainly a matter of the disease dynamics). It seems to me that the equilibrium probably ends in partial herd immunity (nowhere near the 75% of 'full herd immunity', because of MNM), with healthcare systems struggling to cope to some extent along the way. The US is essentially bound for equilibrium - but what that entails is not clear. I could imagine the equilibrium holding Rt near 1 even in the absence of any government foresight or planning, but that doesn't seem very likely, as some commenters pointed out; more likely it ends with partial herd immunity.

However, there is still a push away from this equilibrium in Europe (e.g. attempts to use national-level tracing and testing programs). This push is not that strong and depends on individuals sticking to social distancing rules. European lockdowns brought Rt down to between 0.6 and 0.8, noticeably below 1, indicating that they beat the equilibrium to some degree for a while. Rt got down to 0.4 in Wuhan, suggesting great success in beating the equilibrium.

That is the other lesson - any level of government foresight or planning adds to the already existing MNM effect: witness how foot traffic dramatically declined before lockdowns were instituted, or even where they were never instituted, right across the world. The effects are additive. So if the default holds Rt near 1, then a few extra actions by a government able to look some degree into the future can make all the difference.

Conclusions

I consider that the number of predictions that have already been falsified or rendered unlikely is sufficient to establish that the MNM effect exists, or at least is stronger than many of us thought early on (I don't imagine there were many people who denied the MNM effect exists at all, i.e. expected us to just walk willingly to our deaths). 'Dumb reopening', as is happening in the US, as a successor to lockdowns that pushed R to almost exactly 1, is consistent with what I have claimed - that our reliable and predictable short-term reactivity (governmental and individual) and desire not to die, the Morituri Nolumus Mori effect, serve as a brake against the very worst outcomes. What next?

Conceivably, the control system could keep running, and R could stay near 1 perpetually even with no effective planning or well-enforced lockdowns, or there could be a slow grind as the virus spreads up to a partial herd immunity threshold - either way, the MNM effect is there, screening off some outcomes that looked likely in early March, such as a single sharp peak. Similarly, the MNM effect gives a helping hand to attempts at real strategy. Some governments that are competent in the face of massive threats but slow to react (such as Germany) did better than expected because of the caution of citizens who started restricting their movements before lockdown and who now aren’t taking full advantage of reopened public spaces.

From the perspective of predicting future X-risks, the overall outcome of this pandemic is less interesting than the fact that there has been a consistent, unanticipated push from reactive actions against the spread of the virus. There is then the further, related question of whether countries can beat the equilibrium (of R being held near or just above 1) and do better than the MNM effect mandates. So far, Europe spent a while beating the equilibrium (with R during lockdown at 0.6-0.8) and China drove R down even further.

The first remaining uncertainty is: can a specific country, or the world as a whole, do better than this equilibrium position? We have some pertinent evidence here in the form of the superforecaster predictions and, though it is confounded by the next uncertainty, disease modelling. The insights of disease modelling should shed light on the second question: how bad is this equilibrium position? If we knew this we would have a better sense of what the reasonable worst case scenario is for coronavirus, but that is not important from an X-risk perspective.

This makes it clear what kinds of evidence are worth looking out for. We should look at the performance of areas of the world where there is little advance planning, but nevertheless the people are informed about the level of day-to-day danger and leaders don’t actively oppose individual efforts at safety. Parts of the United States fit the bill. Seeing the eventual outcomes in these areas, when compared to some initial predictions about just how bad things could get, will give us an idea of the extra help provided by the MNM effect. Then, with that as our baseline, we can see how many countries do better to judge the further help provided by planning or an actual strategy.

Implications for X-risks

The most basic lesson to be learned from this disaster is, of course, that for the moment we are inadequate - unable to coordinate as long as there is any uncertainty about what to do, and unable to meaningfully plan in advance for plausible near-term threats like pandemics. We should of course remember that not enough focus is put on long-term risks, and that our institutions are flawed in dealing with them.

Covid-19 shows that there can still be a strong reaction once it is clear that disaster is coming. We already have some idea of just how strong this reaction is; we have less idea how effective it will ultimately be. In February and March, we often observed a kind of pluralistic ignorance, where even experts raising the alarm did so in a way that was muted and seemingly aimed at 'not causing panic'.

Robert Wiblin: I think part of what was going on was perhaps people wanted to promote this idea of “Don’t panic” because they were worried that the public would panic and they felt that the way to do that was really to talk down the risk a lot and then it kind of got a bit out of control, but I’m not sure how big the risk of… It seems like what’s ended up happening is much worse than the public panicking in January. Or maybe I just haven’t seen what happens when the public really panics. I guess people panicked later and it wasn’t that bad.

Suppose this dynamic applies in a future disaster. We might expect to see a sudden phase change from indifference to panic, even though trouble was already looming and no new information had appeared.

If there is enough forewarning before the disaster occurs that a phase shift in attitudes can take place, we will react hard. Suppose the R0 of coronavirus had been 1.5-2 and the rest of our response had been otherwise the same - suppression measures taken in the US and elsewhere would have worked perfectly, even though we were sleepwalking towards disaster as recently as three weeks before. The only reason this didn't happen is contingent facts about this particular virus. On the other hand, there are magnitudes of disaster for which the MNM effect is clearly inadequate - suppose the R0 had been 8.
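To put rough numbers on those hypotheticals (standard textbook approximations, not figures from the post): both the classic herd immunity threshold and the minimum fraction by which transmission must be cut for suppression are given by $1 - 1/R_0$:

$$1 - \frac{1}{1.5} \approx 33\%, \qquad 1 - \frac{1}{3.5} \approx 71\%, \qquad 1 - \frac{1}{8} \approx 88\%$$

A reaction strong enough to cut transmission by about a third would have fully contained an R0 of 1.5, while nothing short of a near-total shutdown of contact could contain an R0 of 8.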

Perhaps the MNM effect is stronger for a disaster, like a pandemic, for which there is some degree of historical memory, and for which evolved emotions and intuitions around things like purity and disgust can take over and influence our risk-mitigation behaviour. Maybe technological disasters that don't have the same deep evolutionary roots, like nuclear war, or X-risks like unaligned AGI that have literally never happened before, would not evoke the same strong, consistent reaction, because the threat is even less comprehensible.

Nevertheless, one could imagine a slow AI takeoff scenario with a lot of the same characteristics as coronavirus, where the MNM effect steps in at the last moment:

It takes place over a couple of years. Every day there are slight increases in some relevant warning sign. A group of safety people raise the alarm but are mostly ignored. There are smaller-scale disasters in the run-up, but people don't learn their lesson (analogous to SARS-1 and MERS). Major news organisations and governments announce there is nothing to worry about (analogous to initial statements about masks and travel bans). Then there is a sudden change in attitudes for no obvious reason. At some point everyone freaks out - bans and restrictions on AI development, right before the crisis hits. Or, possibly, right when it is already too late.

The lesson to be learned is that there may be a phase shift in the level of danger posed by certain X-risks - if the amount of advance warning, or the time over which the disaster unfolds, is above some minimal threshold, even if that threshold would seem like far too little time to do anything given our previous inadequacy, then there is still a chance for the MNM effect to take over and avert the worst outcome. In other words, AI takeoff with a small amount of forewarning might go a lot better than a scenario with no forewarning at all, even if past performance suggests we would do nothing useful with that forewarning.

More speculatively, I think we can see the MNM effect's influence in other settings where we have consistently avoided the very worst outcomes despite systematic inadequacy - Anders Sandberg referenced something like it when discussing the probability of nuclear war. There have been many near misses when nuclear war could have started, implying that we can't simply have been lucky over and over. Instead, there has been a strong skew towards interventions that halt disaster at the last moment, compared to not-the-last-moment:

Robert Wiblin: So just to be clear, you’re saying there’s a lot of near misses, but that hasn’t updated you very much in favor of thinking that the risk is very high. That’s the reverse of what I expected.
Anders Sandberg: Yeah.
Robert Wiblin: Explain the reasoning there.
Anders Sandberg: So imagine a world that has a lot of nuclear warheads. So if there is a nuclear war, it's guaranteed to wipe out humanity, and then you compare that to a world where there are a few warheads. So if there's a nuclear war, the risk is relatively small. Now in the first dangerous world, you would have a very strong deflection. Even getting close to the state of nuclear war would be strongly disfavored because most histories close to nuclear war end up with no observers left at all.
In the second one, you get the much weaker effect, and now over time you can plot when the near misses happen and the number of nuclear warheads, and you actually see that they don’t behave as strongly as you would think. If there was a very strong anthropic effect you would expect very few near misses during the height of the Cold War, and in fact you see roughly the opposite. So this is weirdly reassuring. In some sense the Petrov incident implies that we are slightly safer about nuclear war.

On the other hand, the MNM effect requires leaders and individuals to have access to information about the state of the world right now (i.e. how dangerous things are at the moment). Even in countries with a reasonably free flow of information, this is not a given. If you accept Eliezer Yudkowsky's thesis that clickbait has impaired our ability to understand a persistent, objective external world, then you might be more pessimistic about the MNM effect going forward. Perhaps for this reason, we should expect countries with higher social trust - and therefore more ability for individuals to agree on a consensus reality and understand the level of danger posed - to perform better. Japan and Northern European countries like Denmark and Sweden come to mind, and all of them have performed better than the mitigation measures employed by their governments would suggest.

The principle that I've called the Morituri Nolumus Mori effect is defined in terms of the map, not the territory - a place where our predictions diverged from reality in an easily and consistently describable way: the short-term reaction from many governments and individuals was stronger than we expected, whilst advance planning and reasoning were as weak as we expected. The MNM effect may also be a feature of the territory. It may already have a name in the field of social psychology, or several names. It may be a contingent artefact of many local facts about our coronavirus response alone, though I don't think that's plausible for the reasons given above. Either way, I believe it was an important missing piece - probably the biggest missing piece - in our early predictions, and it needs to be considered further if we want to refine our analysis of X-risks going forward. One of the few upsides to this catastrophe is that it has provided us with a small-scale test run of some dynamics that might play out during a genuine catastrophic or existential risk, and we should be sure to exploit that for all it's worth.

Comments

Thank you for writing this post and tracking down everyone's stated beliefs and updates!

I fear MNM only operated in this case because the prosocial intervention of isolating yourself also happened to be a very selfishly effective intervention. In my view, what this community failed to predict is simply that other people would, with some delay, come to the same conclusions and act as this community did, i.e. go into some degree of isolation to protect themselves. It's a pretty embarrassing failure! I distinctly recall expecting that aggregate behavior wouldn't change much until the local epidemic was visibly out of control, filling up hospitals and so on, whereas I of course was going to wisely ride all this out from my apartment.

This would explain why there was no MNM operating in most governments in Jan-Feb. It would also mean we can't rely on MNM helping out with future risks that have a different structure from pandemics.

Nice post! I admit I myself underestimated the ferocity of the public lockdowns in March, and totally didn't predict the R0=1 control system phenomenon. So I'm convinced.

I'd love to see more thought about how the MNM effect might look in an AI scenario. Like you said, maybe denials and assurances followed by freakouts and bans. But maybe we could predict what sorts of events would trigger the shift?

There's a theory which I endorse which goes something like "Change only happens in a crisis. The leaders and the people flail around and grab whatever policy solutions happen to be lying around in prestigious places, and implement them. So, doing academic policy work can be surprisingly impactful; even if no one listens to you now, they might when it really matters."

dxu:

I'd love to see more thought about how the MNM effect might look in an AI scenario. Like you said, maybe denials and assurances followed by freakouts and bans. But maybe we could predict what sorts of events would trigger the shift?

I take it you're presuming slow takeoff in this paragraph, right?

Well, if the takeoff is sufficiently fast, by the time people freak out it will be too late. The question is, how slow does the takeoff need to be, for the MNM effect to kick in at some not-useless point? And what other factors does it depend on, besides speed? It would be great to have a better understanding of this.

Some factors that seem important for whether or not you get the MNM effect: the rate of increase of the danger (sudden, not gradual), intuitive understanding of the danger, the level of social trust and agreement over facts, historical memory of the disaster, how certain the threat is, coordination problems, how dangerous the threat is, and how tractable the problem seems.

I agree people often underestimate policy and behavioural responses to disaster. I called this "sleepwalk bias" - the tacit assumption that people will sleepwalk into disaster to a greater extent than is plausible.

Jon Elster talks about "the younger sibling syndrome":

A French philosopher, Maurice Merleau-Ponty, said that our spontaneous tendency is to view other people as ‘‘younger siblings.’’ We do not easily impute to others the same capacity for deliberation and reflection that introspection tells us that we possess ourselves, nor for that matter our inner turmoil, doubts, and anguishes. The idea of viewing others as being just as strategic and calculating as we are ourselves does not seem to come naturally.

From reading your post, sleepwalk bias does seem to be the mirror image of the Morituri Nolumus Mori effect: we tend to systematically underweight strong, late reactions. One difference is that I was thinking of both individual and policy responses whilst your post focusses on policy, but that's in large part because most of the low-frequency, high-damage risks we commonly talk[ed] about are X-risks that can be dealt with only at the level of policy. I also note that I got at a few of the same factors as you that might affect the strength of such a reaction:

The catastrophe is arriving too fast for actors to react.
It is unclear whether the catastrophe will in fact occur, or it is at least not very observable for the relevant actors (the financial crisis, possibly AGI).
The possible disaster, though observable in some sense, is not sufficiently salient (especially to voters) to override more immediate concerns (climate change).
There are conflicts (World War I) and/or free-riding problems (climate change) which are hard to overcome.
The problem is technically harder than initially thought.

I discussed the speed issue in the conclusions, and I obliquely referred to the salience issue in talking about the 'ability to understand consensus reality' and our pre-existing instincts around purity and disgust that would help a response to something like a pandemic. I didn't discuss the presence of free-rider problems. I did mention how speed and the level of difficulty interact with the response - in the hypotheticals where R0 was 2 or 8, for example.

Those differences aside, it seems like we got at the same phenomenon independently.

I'm curious about whether you made any advance predictions about likely outcomes based on your understanding of the 'sleepwalk bias'. I made a light suggestion that things might go better than expected in mid-March, but I can't really call it a prediction. The first time I explicitly said 'we were wrong' was when a lot of evidence had already come in - in April.

An economist friend said in a discussion about sleepwalk bias on 9 March:

In the case of COVID, this led me to think that there will not be that much mortality in most rich countries, but only due to drastic measures.

The rest of the discussion may also be of interest; e.g. note his comment that "in economics, I think we often err on the other side -- people fully incorporate the future in many models."

So, two months have gone by. My main conclusions look mostly unchanged, except that I wasn't expecting such a monotonically stable control system effect in the US. Vaccine news looks better than I expected, and superforecasters are optimistic. The major issue in countries with moderate to good state capacity is preventing a winter second wave and managing small infection spikes. Rob Wiblin seems to buy into the MNM effect.

Whatever happened to the Hospitalization Rate?

Many of these facts (in particular the reason that 100 million plus dead is effectively ruled out) have multiple explanations. For one, the earliest data on coronavirus implied the hospitalization rate was 10-20% for all age groups, and we now know it is substantially lower (see the tweet by an author of the Imperial College paper, which estimated a hospitalization rate of 4.4%). This means that even if hospitals were entirely unable to cope with the number of patients, the IFR would be in the range of 2%, not the 20% initially implied.

Back in a previous Age of The Earth, also known as early March 2020, the most important thing in the world was to figure out the coronavirus hospitalization rate, and we overestimated it. See e.g.

Suppose 50% of the UK (33 million people) get the virus of which 5% (~ 1.8 million people) will need serious hospitalization [conservative estimate].

It's mostly of academic interest now, since (at least in Europe) genuine exponential spread is looking more and more like the exception rather than the rule, but considering how much time we spent discussing this issue I'd like to know the final answer for completeness' sake. It looks like even 'conservative' estimates of the hospitalization rate were too high by a factor of at least 2, just as claimed by the author of that Imperial paper.

Here's a crude estimate: the latest UK serology survey says 6.2% of people were infected by July 26th. Another says 7.1% were infected by July 30. The level of infection is so low in the UK right now that you'll only get movement by a few tenths of a percentage point over the couple of weeks between then and now.

The false negative rate is unclear, but I've heard claims as high as a third, so the real number may be as high as 9.3% based on the overall infection survey. Covid19pro estimated that on July 26th, 8.6% (5.1-13.3%) had been infected. That 8.6% number seems to correspond to a reasonable false negative rate on the antibody tests (28% if you believe the first study, ~17% if you believe the second survey).

In other words, the median estimates from covid19pro look reasonably consistent with the antibody tests, implying a false negative rate of about 15-30%, so I'm just going to assume they're roughly accurate.

We know from the ONS that the total number of patients ever admitted to hospital with coronavirus by July 22nd was 131,412. That number is probably pretty close to accurate - even during the worst of the epidemic, the UK was testing more or less every hospital patient with coronavirus symptoms. The estimated number of people ever infected by July 22nd, according to covid19pro, was 5,751,036.

So, 131,412 / 5,751,036 ≈ 2.3% hospitalization rate.
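For transparency, the back-of-envelope numbers above can be checked in a few lines (inputs are just the figures quoted in this comment, taken at face value):

```python
# Reproducing the crude estimate above from the figures quoted in this comment.
sero_1, sero_2 = 0.062, 0.071  # serology surveys: share ever infected, late July
c19pro = 0.086                 # covid19pro central estimate of share ever infected

# Implied antibody-test false negative rates if the covid19pro estimate is right:
print(f"implied false negative rates: {1 - sero_1 / c19pro:.0%}, {1 - sero_2 / c19pro:.0%}")

ever_admitted = 131_412    # ONS: ever admitted to hospital with Covid-19 by July 22nd
ever_infected = 5_751_036  # covid19pro: estimated ever infected by July 22nd
print(f"hospitalization rate: {ever_admitted / ever_infected:.1%}")
# -> implied false negative rates: 28%, 17%
# -> hospitalization rate: 2.3%
```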

Update after 5 weeks: The Rt graph for the US displays a clear oscillation around Rt = 1, with the current value reaching 1 for the third time and declining, suggesting one complete cycle of the control system.

The big issue for the MNM effect isn't that it won't exist or won't cause a strong response; rather, the problem that plagues pause-type policies applies 100-fold to the MNM effect:

Algorithmic advances could make such a policy useless quite fast, and by the time the MNM effect kicks in there will plausibly be only 1 or 2 OOMs left to superintelligent AI through scaling alone.

People have debated on LW about the value of algorithmic advances and whether they are necessary for AGI and ASI, but that debate is somewhat irrelevant for pause/ban purposes: there has definitely been useful algorithmic progress before, and under a pause scenario there will be strong incentives to get more use out of algorithmic progress like AI search. So I expect the policies generated by the MNM effect to last several years at most. We will also almost certainly have super-persuasive AIs that blunt the strength of any laws passed on AI. So I don't expect the MNM effect to be helpful at all, and plausibly see it as a very harmful effect for AI safety.

Link below:

https://yellow-apartment-148.notion.site/AI-Search-The-Bitter-er-Lesson-44c11acd27294f4495c3de778cd09c8d