For instance, any competent epidemiologist at the CDC or WHO can give you fairly precise odds of when the next global pandemic will occur with a mortality of 30% to 50% of the population. No expert in this area voices any doubt that such an outbreak will occur. It is not a question of if, but of when.
... what
ETA: Wikipedia yields nothing except an equivocal "there is concern". The only panic I found was about H5N1, and that turned out to be way overblown.
"What" indeed!
I'm an epidemiologist (and hopefully a competent one), and I agree that we are not anywhere close to adequately prepared for a bad pandemic (where H1N1 was a not-so-bad pandemic, and H5N1 would probably be a bad pandemic). However, I've participated in several foresight and pandemic preparedness exercises that tried to put odds on pandemics with various profiles (mortality, infectiousness, etc.), and I have never observed a consensus anywhere close to this strong.
If anyone could direct me to a publication, report, group, or anything that supports the claims quoted in the parent and/or give the reasoning that led to it, then I am in need of a massive update and would like to know immediately!
No expert in this area voices any doubt that such an outbreak will occur.
This is not only false, but epistemically absurd.
Could you elaborate on the distribution of opinions you observe at these exercises? Do the opinions get written up? Is MD's opinion within the 90th percentile of pessimism?
From our internal report (IM me for additional details)
Risk of a severe natural pandemic:
[1] There were several things I thought were sub-optimal about this process, not least of which was the confusion between frequentist and Bayesian probabilities; the latter were what was really being elicited, but expressed as the former. This led to confusion when we were asked to estimate the probability of events with no historical precedent.
Thanks.
If you don't believe that the world is rapidly changing, then 1/55 years seems fairly summarized by "when it happens."
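To make the "1/55 years" figure a bit more concrete, here is a minimal Python sketch that treats it as a constant annual probability (that constancy is my assumption, not the report's) and asks how likely at least one qualifying pandemic becomes over various horizons:

```python
# Treat "1 in 55 years" as a constant annual probability (my assumption,
# not the report's) and compute the chance of at least one qualifying
# pandemic over different time horizons.
p_annual = 1 / 55

for years in (10, 30, 80):
    p_at_least_one = 1 - (1 - p_annual) ** years
    print(f"P(at least one in {years} years) ≈ {p_at_least_one:.0%}")
```

Over a lifetime-scale horizon the cumulative probability gets large, which is presumably why "when it happens" feels like a fair summary of even a modest annual rate.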
"10 000 domestic fatalities" and "severe disruption of social services" seem like a weird pair. The latter sounds much more severe than the former. Of course, a disease that infects many but kills few could accomplish both.
If this is the most severe thing you put odds on, it's quite far from the 1% fatality rate, let alone the 20-30% MD talks about; he is probably way beyond the 90th percentile of pessimism. Is it possible that he is mixing up infection with mortality?
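For scale, a quick back-of-the-envelope comparison (the population figure is my assumption, roughly the US around the time of the report; the percentages are the ones discussed above):

```python
# Compare the report's 10,000-fatality threshold with the fatality
# rates discussed above. The US population figure is an assumption.
us_population = 310_000_000  # approx. US population, early 2010s
threshold = 10_000

print(f"10,000 deaths = {threshold / us_population:.4%} of the population")
for rate in (0.01, 0.20, 0.30):
    print(f"{rate:.0%} fatality rate = {rate * us_population:,.0f} deaths")
```

That puts the report's threshold several orders of magnitude below the figures under discussion.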
"10 000 domestic fatalities" and "severe disruption of social services" seem like a weird pair. The latter sounds much more severe than the former.
See “9/11, immediate fatalities” and “9/11, consequences” for comparison. I can easily see huge indirect impact due to panic and the like.
[edited to add:] Also: I wouldn’t be surprised to see a graph for the distribution of “disease fatalities” that has most of the mass around 10k, perhaps with a long, low tail, but the graph of “social disruption vs. fatalities” rising very sharply before 10k, but then growing only slowly.
Yes, in a catastrophe localized to a city 10k fatalities pairs sensibly with "severe disruption of social services," but we're talking about a pandemic.
10k fatalities, 100k gravely ill but not dying, the media confuses the two, politicians try to push lower numbers, the real numbers are discovered and as a result much higher numbers are extrapolated, folk without the disease but with similar or imagined symptoms overwhelm the hospitals, large-scale quarantines (appropriate or not) tie up qualified personnel, an actual or imagined paucity of vaccines causes a few riots...
There’s lots of stuff that can get out of proportion. And anyway, “severe disruption of social services” is kind of vague. I mean, it sounds bad, but that might be misleading. For instance, the phrase as given does not say “country-wide”.
My sense from a lot of epidemiologists is that this does not seem inevitable, particularly sans bioterrorism or biowarfare, before technology renders it impossible. The claim is that there will be a plague killing a higher percentage than the Black Death in Europe, despite modern nutrition, sanitation, etc, and an order of magnitude worse than the 1918-1919 flu. H5N1 flu has had case-mortality rates in diagnosed cases that match those numbers, but more people were found with antibodies than were diagnosed, suggesting that the real case-fatality rate is quite a bit lower, and not everyone gets infected in a pandemic.
ETA: Also, fatalities in the 1918-1919 flu were worse in the poor parts of the world, and cryonics facilities are located in prosperous countries. There are also generic reasons to think that there are virulence-infectiousness tradeoffs that would shape the evolution of the virus. However, the recently reported lab-modified H5N1 experiments count as evidence against that (they were justified, despite the danger of revealing a bioterrorism method, as a source of evidence that an H5N1 pandemic would be highly virulent).
ETA2: And the flu experiments actually demonstrated that breeding the virus for airborne transmission reduced its lethality.
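A toy illustration of the antibody point above, with entirely hypothetical numbers (these are not H5N1 data): dividing deaths by diagnosed cases gives a much higher rate than dividing by all infections implied by serology.

```python
# Hypothetical numbers only, to show why seroprevalence surveys push
# the estimated fatality rate down: deaths are divided by all
# infections, not just the (mostly severe) diagnosed cases.
deaths = 60
diagnosed_cases = 100      # hypothetical: severe cases that got tested
total_infections = 2_000   # hypothetical: implied by antibody surveys

print(f"Case-fatality rate (diagnosed only): {deaths / diagnosed_cases:.0%}")
print(f"Infection-fatality rate (all infections): {deaths / total_infections:.1%}")
```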
Additionally, the link in the OP is wrong. I followed it in hopes that Luke would provide a citation where I could see these estimates.
This was the quote I was referring to:
So, I often have a nagging worry that what I’m working on only seems like it’s reducing existential risk after the best analysis I can do right now, but actually it’s increasing existential risk. That’s not a pleasant feeling, but it’s the kind of uncertainty you have to live with when working on these kinds of problems. All you can do is try really hard, and then try harder.
I was referencing how it is difficult to effectively lead an organization that is so focused on the distant future and which must make so many difficult decisions.
I should have been clearer.
Oh! Well I feel stupid indeed. I thought that all the text after the sidenote was a quotation from Luke (which I would find at the link in said sidenote), rather than a continuation of Mike Darwin's statement. I don't know why I didn't even consider the latter.
On a related note, Carl Shulman has said that more widespread cryonics would encourage more long-term thinking, specifically about existential risk.
I suspect the sign is positive. I don't think pushing cryonics is anywhere near the efficiency frontier for disinterested altruism aimed at existential risk. If focused on current people, I think GiveWell donations would save more (save a kid from malaria, and they have a nontrivial chance of living to see a positive singularity/radical life extension to get there, plus GiveWell builds the effective altruism community and its capabilities). There are many, many things with positive expected impact relative to doing nothing, but that doesn't mean they are anywhere near as good as the best things.
A more plausible moralized case for it would be something like "I want cryopreservation to work for myself, which requires public good contribution of various kinds, so I will cooperate in this many-player iterated Prisoner's Dilemma." Instead of stretching to argue that pushing cryonics is really at the frontier, better to admit you want to do it for non-existential risk reasons, and buy it separately.
Thank you for the clarification of your stance. The best counterargument seems to be that brain preservation has the potential to save many more lives than are lost due to malaria, if properly implemented, and yet receives little if any funding. For example, malaria research received $1.5 billion in funding in 2007, whereas one of the only studies explicitly designed as relevant to cryonics is still struggling to reach its modest goal of $3000 as I write this.
they have a nontrivial chance of living to see a positive singularity/radical life extension to get there
How do you define and estimate this probability?
plus GiveWell builds the effective altruism community and its capabilities
True. But donations to cryonics organizations build the effective brain preservation community and its capabilities, and once again we are back to the question of which has the higher marginal expected utility.
Instead of stretching to argue that pushing cryonics is really at the frontier, better to admit you want to do it for non-existential risk reasons
Fair enough, I'll defer to your expertise on existential risk.
they have a nontrivial chance of living to see a positive singularity/radical life extension to get there
How do you define and estimate this probability?
Life expectancy figures for young children (with some expectation of further health gains in Africa and other places with malaria victims in coming decades, plus emigration), combined with my own personal estimates of the probability of human-level AI/WBE by different times. I think such development is more likely than not this century, and used that estimate in evaluating cryonics (although as we demand that cryonics organizations survive for longer and longer, the likelihood of success goes down). We can largely factor out the risk of such development going badly, since it is needed both for the vast lifespans of the malaria victims and for successful cryonics revivification.
We can save many malaria victims for each cryonics patient at current cryonics prices, which are actually below cost (due to charitable subsidies). Marginal costs could go down with scale, but there is a lot of evidence that it is difficult to scale up, and costs would need to fall a lot.
People saved from malaria can actively take care of themselves and preserve their own lives (and use life extension medicine if it becomes available and they are able to migrate to rich countries or benefit from local development), while cryonics patients have a substantial risk of not coming through due to organizational failure, flawed cryonics, conceptual error, cryonics bans, religious interference, etc.
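One way to see the structure of this comparison is a crude expected-value sketch. Every input below (dollars per malaria life saved, cryopreservation cost, both probabilities) is a placeholder assumption of mine, chosen only to show how the pieces combine, not a figure anyone in this thread has endorsed:

```python
# Crude expected-value sketch of "malaria charity vs. cryonics" for a
# fixed budget. All inputs are hypothetical placeholders; substitute
# your own estimates.
budget = 100_000.0  # dollars

cost_per_malaria_life = 2_500.0          # hypothetical $/life saved
p_reaches_radical_life_extension = 0.3   # hypothetical

cost_per_cryopreservation = 30_000.0     # hypothetical, pre-subsidy
p_successful_revival = 0.05              # hypothetical: org survival, tech, etc.

ev_malaria = (budget / cost_per_malaria_life) * p_reaches_radical_life_extension
ev_cryonics = (budget / cost_per_cryopreservation) * p_successful_revival

print(f"Expected very-long lives via malaria route:  {ev_malaria:.1f}")
print(f"Expected very-long lives via cryonics route: {ev_cryonics:.2f}")
```

With these particular placeholders the malaria route dominates, but the only real point is that the conclusion turns on the cost ratio and the two probabilities, which is where the disagreement in this thread actually lies.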
Marginal costs could go down with scale, but there is a lot of evidence that it is difficult to scale up, and costs would need to fall a lot.
Would you mind going into details?
Plastination is a route that has been discussed but, to my understanding, has had zero research devoted to finding out whether it would work in humans for preserving personal identity. Ken Hayworth says that it could cost just a few thousand dollars.
Right now it may seem like there is no cheap route for effective brain preservation, but it is also clear that we as a species have not tried very hard to find out. Do you weigh that uncertainty in your calculations?
People saved from malaria can actively take care of themselves and preserve their own lives
One counter to this, which I do not necessarily endorse, is that people saved from malaria may also contribute to the world in negative ways, whereas preserved people are only likely to be revived if future society has good reason to believe that they will be a net positive.
ETA: Then again, I suppose there could also be strife and violence about the status of the preserved individuals, which actually might be worse in EV.
Would you mind going into details?
I was referring to the difficulty cryonics organizations have had in recruiting customers, and their slow growth. I was contrasting this to the rapid growth of cost-effectiveness oriented efforts in private charity in aid, and the sophistication and money moved of groups like GiveWell (with increasing billionaire support), Giving What We Can, Life You Can Save, etc.
Right now it may seem like there is no cheap route for effective brain preservation, but it is also clear that we as a species have not tried very hard to find out. Do you weigh that uncertainty in your calculations?
Yes, that's responsible for much of the EV in my mind.
One counter to this, which I do not necessarily endorse
I first talked about cryonics not being at the frontier of existential risk reduction, and then separately said that I thought GiveWell type donations would do better for preserving current people than cryonics. I don't think that marginal malaria cures have very large effects on existential risks, and was not making any claim about the sign (I am not very confident either way). I was trying to illustrate that for a variety of disinterested objective functions I was skeptical about cryonics promotion coming out on top, except in terms of the welfare of cryonicists (a motive I can strongly sympathize with).
ETA: I did not include plastination under the banner of cryonics (since it isn't cryonics, in terms of temperature, organizational structure, or technology). It looks more promising from a state of relative ignorance.
Plastination is a route that has been discussed but, to my understanding, has had zero research devoted to finding out whether it would work in humans for preserving personal identity. Ken Hayworth says that it could cost just a few thousand dollars.
For those interested, my notes on plastination: http://www.gwern.net/plastination
(Also, Darwin has provided me a lot of material on plastination I have lazily failed to get around to incorporating.)
This post convinced me that Darwin is a crank. If it becomes clear by 2016 that the 2011 economy was "fucked", then I will withdraw this comment and declare Darwin a prophet. But if 2016 comes and we're still bartering with dollars instead of canned food, then it will be safe to say that Darwin is a paranoid fool.
Does that problem significantly inform you about the accuracy of Darwin's position on the state and prospects of cryonics? If actual evidence doesn't get warped too much, pessimism might be a good (not great) attitude for successfully picking holes in convenient illusions, and evidence screens off occasional craziness. So the relevant query is about the actual summary of Darwin's case.
As I've pointed out, I think Darwin has a good track record when it comes to medicine predictions and cryonics forecasts in particular. (And with the former, his errors were more those of optimism than pessimism.)
As for his economics claims? I dunno. It seems pretty clear to me that marginal returns are shrinking in science & tech (consistent with his claims that we are not in a long-run sustainable situation), but this is an observation whose implications are very easy to overstate - the data is about humans; that doesn't rule out advances in things like nanotech, much less any regime shifts like uploads or AGIs, or give us any clear deadlines like '2 centuries' (diminishing returns seem clear in the Roman empire, too, centuries and maybe millennia before the final fall of Constantinople).
This may be a case where we could say with Napoleon, "the real truths of history are hard to discover. Happily, for the most part, they are rather matters of curiosity than of real importance."
On a related note, Carl Shulman has said that more widespread cryonics would encourage more long-term thinking, specifically about existential risk. Is it a consensus view that this would be the case?
I've heard similar claims made about reproduction — with one political form being the argument that non-parents should not be permitted to vote, since parents would have more concern for the long term.
This strikes me as highly pessimistic, but given the planning fallacy, possibly a nevertheless prudent viewpoint.
I’m not being trite when I say that people do NOT like reality.... Life is scary and hard, in fact, it is absolutely terrifying if looked at objectively[...]
Is it me or did Mike Darwin say the exactly wrong thing here? (Considering LW's stance on harsh truths.)
I don't know why you would think this, unless you know of studies on the topic? Given what I've read about Cognitive Behavioral Therapy I'd feel at least mildly surprised by this result. Perhaps you exaggerated for effect?
Sorry, I have overreacted.
What I should have written is: "Just because someone recites the Litany of Gendlin, it does not mean they like reality." Some people only say it because it is cool, and even those who take it seriously probably still dislike reality enough that whatever is written in the article applies to them too.
Every cause wants to be a cult: simply claiming to want to face reality as it is is rather weak evidence of actually working at it, or even of actually wanting to. Many, if not most, people claim to want to face reality as it is, even when they actually don't. "This group doesn't actually want to face reality" is the overwhelming default expectation for any group, which requires much stronger evidence than claims to the contrary to be seriously doubted.
That's the outside view for LW. As for the inside view, well, I might be overgeneralizing from one example too much, but from introspection and noting how easily I myself still often flinch from reality, I think most LWers still aren't ready to truly face reality either.
He has resumed posting at his blog Chronopause and he is essential reading for those interested in cryonics and, more generally, rational decision-making in an uncertain world.
In response to a comment by a LW user named Alexander, he writes:
(Sidenote: This reminds me of what Luke considers his most difficult day-to-day tasks.)
On a related note, Carl Shulman has said that more widespread cryonics would encourage more long-term thinking, specifically about existential risk. Is it a consensus view that this would be the case?
Every now and then people ask LW what sort of career they should pursue if they want to have a large impact improving the world. If we agree that cryonics would encourage long-term thinking, and that this would be beneficial, then it seems to me that we should push some of these people towards the research and practice of brain preservation. For example, perhaps http://80000hours.org/search?q=cryonics should have some results.