Having paid a moderate amount of attention to threats to the human species for over a decade, I've run across an unusually good thinker, with expertise unusually well suited to mitigating many of those threats, whom I didn't know about until quite recently.

I think he warrants more attention from people thinking seriously about X-risks.

David C. Denkenberger's CV is online and presumably lists all of his X-risk-relevant material, mixed into a larger career that seems to have focused on energy engineering.

He has two technical patents (one for a microchannel heat exchanger and another for a compound parabolic concentrator) and interests that appear to span the gamut of energy technologies and uses.

Since about 2013 he has been working seriously on the problem of food production after a sun-obscuring disaster, and he is basically in LessWrong's orbit right now.

This article is about opportunities for intellectual cross-pollination!

Appearances On Or Near LessWrong In The Past

On 2016-05-10 RyanCarey posted Improving long-run civilisational robustness and mentioned Denkenberger as one of the main authors in the literature on shelter construction after really major disasters, with a special interest in the food production that would happen in such facilities.

On 2017-08-24 ChristianKl posted [Link] Nasa's ambitious plan to save earth from a supervolcano, in whose comments turchin mentioned Denkenberger as a relevant expert.

Slightly farther afield, but very recent and still nearby: on 2017-09-14 Robin Hanson posted Prepare for Nuclear Winter, a very abstract and formal exhortation to care about global food production in the event of a sun-obscuring disaster, until the final sentence, where he called attention to "ALLFED", whose site is dense with citations to papers on the subject, and basically every paper has Denkenberger as a co-author.

In the last few hours I've seen Denkenberger working his way through the comments on Hanson's couple-day-old posts, correcting factual mistakes in people's comments with links to papers that contain the correct information.

There is probably an intellectual opportunity here to get Denkenberger's attention and help LessWrong get smarter about an important subarea of existential risk mitigation.

A Generic Solution To Many Classes Of Risk

One of the long-term deep insights on X-risks that is somewhat unique to LessWrong is the idea that specific disaster scenarios often seem more plausible and more likely to people when additional details are added, even though, logically speaking, each burdensome detail actually makes the scenario LESS likely.

Also, once a detailed scenario is accepted as worryingly plausible by an audience, the natural tendency is to find solutions that address that single scenario...

"We will solve the asteroid problem by shooting lasers at the asteroid before it hits!"

"We will solve the AI problem by making the AI intrinsically motivationally safe!"

"We will solve the nanotech problem by building a planetary immune system out of better nanotech!"

Once you really feel the problem of burdensome details in your bones, it becomes clear that the cost-effective solution to many such problems is plausibly the construction of a SINGLE safety measure that addresses all or almost all of them in a single move.

(Then perhaps build a second such solution that is orthogonal to the first. And so on, with a stack of redundant and highly orthogonal highly generic solutions, any one of which might be the only thing that works in any given disaster, and which does the job all by itself.)
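As a toy illustration of why a stack of genuinely orthogonal layers is so valuable (a minimal sketch; the probabilities below are made-up placeholders, not estimates from anyone's papers):

```python
# Toy model: residual risk with several independent ("orthogonal") safety layers.
# All numbers are made-up placeholders.

p_disaster = 0.10                      # chance some sun-obscuring disaster occurs this century
layer_failure_probs = [0.5, 0.5, 0.5]  # chance each generic layer fails, given the disaster

p_all_layers_fail = 1.0
for p_fail in layer_failure_probs:
    p_all_layers_fail *= p_fail        # only valid if the layers fail independently

print(p_disaster * p_all_layers_fail)  # 0.0125 with these placeholders
```

The multiplication step is only legitimate to the extent the layers really are orthogonal; shared dependencies (the same supply chains, the same institutions) reintroduce correlated failures and break it.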

One obvious candidate for such a generic cost-effective safety intervention is a small but fully autonomous city on Mars, or Antarctica, or the Moon, or under the ocean (or perhaps four such cities, just in case) that could produce food independently of the food production system traditionally used on the easily habitable parts of Earth.

The more buffered and self-sufficient such a city was, the better it would be from a generic safety perspective.

It appears to me that Denkenberger's work is highly relevant to such a project, and for this reason deserves our attention.

Followups

I'm thinking it might be interesting to start a bunch of comment threads below, one for each of Denkenberger's papers that can be tracked down, so each can be discussed and voted on independently.

Also, if Denkenberger himself is interested in having me correct errors in this article, or in placing a prominent message written by him somewhere here at the top or the bottom, I'm open to that.

Another thought would be to try to schedule an AMA for some day in the future, and link to that from here?


Comments

I volunteer to contact him and invite him to do an AMA if there is interest in the idea. Vote or comment to register interest. 10 upvotes will do it. If you see more than 10 upvotes and I haven't acted, comment in case I have missed it.

I am happy to do an AMA.

Thank you, Jennifer, for the introduction. Some more background on me: I have read the sequences and the foom debate. In 2011, I tried to do cost-effectiveness scoping for all causes inspired by Yudkowsky's scope and neglectedness framework (the scope, neglectedness, and tractability framework had not yet been invented). I am concerned about AI risk, and have been working with Alexey Turchin. I am primarily motivated by existential risk reduction. If we lose anthropological civilization (defined by cooperation outside the clan), we may not recover for the following reasons:

• Easily accessible fossil fuels and minerals exhausted

• Don’t have the stable climate of the last 10,000 years

• Lose trust or IQ permanently

• Endemic disease prevents high population density

• Permanent loss of grains precludes high population density

Not recovering is a form of existential risk (not realizing our potential), and we might actually go extinct because of a supervolcano or asteroid after losing civilization. Because getting prepared (research and development of non-sunlight-dependent foods such as mushrooms and natural-gas-digesting bacteria, and planning) is so cost-effective for the present generation, I think it will be a very cost-effective way of reducing existential risk.

Why should there be a permanent loss of grains? It seems to me that reserve seeds are stored in many different places, with some of those places getting forgotten in the time of a catastrophe and people rediscovering the contents later.

Grains are all from the same family: grasses. It is conceivable that a malicious actor could design a pathogen (or pathogens) that kills all grains. Or maybe it would become an endemic disease that would permanently decrease the vigor of the plants. I'm not arguing that any of these non-recovery scenarios are too likely. However, if together they represent a 10% probability, and if there is a 10% probability of the sun being blocked this century, and a 10% probability of civilization collapsing if the sun is blocked, this would be a one in 1000 chance of an existential catastrophe from agricultural catastrophes this century. This is worth some effort to reduce.
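Spelling that arithmetic out, with the factors read as P(sun blocked this century) × P(civilization collapses | sun blocked) × P(no recovery | collapse), using the three 10% figures given just above:

$$P(\text{agricultural x-catastrophe this century}) \approx 0.10 \times 0.10 \times 0.10 = 0.001 = \tfrac{1}{1000}$$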

Thanks for posting!

I haven't read your book yet but I find your work pretty interesting. I hope you won't mind a naive question... you've mentioned non-sunlight-dependent foods like mushrooms and leaf tea. Is it actually possible for a human to survive on foods like this? Has anybody self-experimented with it?

By my calculation, a person who needs 1800 kcal/day would have to eat about 5 kg of mushrooms. Tea (the normal kind, anyway) doesn't look any better.
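A back-of-envelope version of that calculation (the ~35 kcal per 100 g figure for fresh mushrooms is an illustrative assumption, not a number from the comment above):

$$\frac{1800\ \text{kcal/day}}{\sim 35\ \text{kcal per}\ 100\ \text{g}} \approx 5100\ \text{g} \approx 5\ \text{kg of mushrooms per day}$$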

Bacteria fed on natural gas seem like a very promising food source, and one that might even be viable outside of catastrophe scenarios. Apparently it's being used for fish feed already.

Here is an analysis of the nutrition of a variety of alternate foods. Leaf protein concentrate is actually more promising than leaf tea. No one has tried a diet of only alternate foods; that would be a good experiment to run. With a variety, the weight is not too high. Yes, we are hoping that some of these ideas will be viable in the present day, because then we can get early investment.

What's the advantage of alternative foods? In the context of an agricultural catastrophe, presumably you'd want to maximize the calories per whatever resource is the bottleneck (might be arable land, might be energy, might be something else). I can see mushrooms being useful, but leaves are not likely to be particularly efficient in this respect, are they?

In the case of the sun being blocked by comet impact, supervolcanic eruption, or full-scale nuclear war with the burning of cities, there would be local devastation, but the majority of global industry would function. Most of our energy is not dependent on the sun. So it turns out the biggest problem is food, and arable land would not be valuable. Extracting human edible calories from leaves would only work for those leaves that were green when the catastrophe happened. They could provide about half a year of food for everyone, or more realistically 10% of food for five years.

I also work on the catastrophes that could disrupt electricity globally, such as an extreme solar storm, multiple high-altitude detonations of nuclear weapons around the world creating electromagnetic pulses (EMPs), and a super computer virus. Since nearly everything is dependent on electricity, this means we lose fossil fuel production and industry. In this case, energy is critical, but there are ways of dealing with it. So the food problem still turns out to be quite important (the sun is still shining, but we don't have fossil fuel based tractors, fertilizers and pesticides), though there are solutions for that.

sun being blocked by comments impact

Extracting human edible calories from leaves would only work for those leaves that were green when the catastrophe happened. They could provide about half a year of food for everyone

What kind of industrial base do you expect will continue to function in the catastrophe's aftermath and be able to collect and process these green leaves while they are still green (on a time scale of weeks, I assume)?

And what is the advantage over having large stores of non-perishables?

Also, it's my impression that the biggest problem with avoiding famines is not food production, but rather logistics: storage, transportation, and distribution. Right now the world has more than enough food for everyone, but food shortages in the third world, notably Africa, are common.

In the catastrophe scenario you have to assume political unrest, breakdown of transportation networks, etc.

To me it seems politically infeasible to pay for the creation of a multi-year store of non-perishable food.

Governments do it all the time; see e.g. this. Also, in this context feasibility is relative: how politically feasible is it to construct emergency-use-only machinery to gather and process leaves from a forest?

I'm also uncertain about the gathering-leaves plan.

On the other hand, I could imagine solutions that are easily scalable. If you had, for example, an edible fungus that you could feed with lumber, that might be very valuable, and you wouldn't need to spend billions.

Sorry for my voice recognition software error; I have now fixed it. It turns out that if you wanted to store enough food to feed 7 billion people for five years, it would cost tens of trillions of dollars. What I am proposing is spending tens of millions of dollars on targeted research, development, and planning.

The idea is that we would not have to spend a lot of money on emergency-use-only machinery. I use the example of the United States before World War II: it hardly produced any airplanes, but once it entered the war, it retrofitted car manufacturing plants to produce airplanes very quickly. I am targeting food sources that could be ramped up very quickly with not very much preparation (in months; see the graph here). The easiest killed leaves (for human food) to collect would be agricultural residues, using existing farm equipment. For leaves shed naturally (leaf litter), we could release cows into forests.

I also analyze logistics in the book, and it would be technically feasible. Note that these catastrophes would only destroy regional infrastructure. However, the big assumption is that there would still be international cooperation. Without these alternative food sources, most people would die, so it would likely be in the best interest of many countries to initiate conflicts. However, if countries knew that they could actually benefit by cooperating and trading, and ideally feed everyone, cooperation is more likely (though of course not guaranteed). So you could think of this as a peace project.
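For a sense of where a "tens of trillions" figure for multi-year global food storage can come from (a rough sketch; the per-person-day cost is an assumed placeholder, not a number from the book):

$$7 \times 10^{9}\ \text{people} \times 5\ \text{years} \times 365\ \text{days/year} \times \$1.50/\text{person-day} \approx \$19\ \text{trillion}$$

Against that, targeted research, development, and planning in the tens of millions of dollars is roughly six orders of magnitude cheaper.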

I met Denkenberger at the same ALLFED workshop that Hanson participated in (as a part of the GoCAS research program on existential risk); I also thought his work was quite impressive and important.

I thought you were a negative utilitarian, in which case disaster recovery seems plausibly net-negative. Am I wrong about your values?

I've had periods when I described myself as pretty close to pure-NU, but currently I view myself as a moral parliamentarian: my values are made up of a combination of different moral systems, of which something like NU is just one. My current (subject to change) position is to call myself "NU-leaning prioritarian": I would like us to survive to colonize the universe eventually, just as long as we cure suffering first.

(Also it's not clear to me that this kind of an operation would be a net negative even on pure NU grounds; possibly quite non-effective, sure, but making it negative hinges on various assumptions that may or may not be true.)

(Then perhaps build a second such solution that is orthogonal to the first. And so on, with a stack of redundant and highly orthogonal highly generic solutions, any one of which might be the only thing that works in any given disaster, and which does the job all by itself.)

This is excellent! Can this reasoning be improved by attempting to map the overlaps between x-risks more explicitly? The closest I can think of is some of turchin's work.

My pretty limited understanding is that this is a fairly standard safety engineering approach.

If you were going to try to make it just a bit more explicit, a spreadsheet might be enough. If you want to put serious elbow grease into formal modeling work, I think a good keyword to get into the literature might be "fault trees". The technique came out of Bell Labs in the 1960s, but I think it really came into its own when it was used to model nuclear safety issues in the 1980s? There's old Nuclear Regulatory Commission work that got pretty deep here, I think.
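To make the "fault trees" suggestion concrete, here is a minimal sketch of the kind of computation a fault tree encodes, assuming independent basic events; the event names and probabilities are made-up placeholders, not estimates from any of the linked analyses.

```python
# Minimal fault-tree sketch: top event = civilization-level food collapse.
# Basic event probabilities are made-up placeholders, not estimates.

def or_gate(*ps):
    """Probability that at least one independent input event occurs."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(*ps):
    """Probability that all independent input events occur."""
    q = 1.0
    for p in ps:
        q *= p
    return q

# Basic events (per century, placeholders):
p_nuclear_winter = 0.05
p_supervolcano   = 0.01
p_impact_winter  = 0.005

p_sun_blocked = or_gate(p_nuclear_winter, p_supervolcano, p_impact_winter)
p_no_alt_food = 0.5   # placeholder: alternate-food effort fails or is absent
p_collapse    = and_gate(p_sun_blocked, p_no_alt_food)

print(f"P(sun blocked)          ~ {p_sun_blocked:.3f}")
print(f"P(food-driven collapse) ~ {p_collapse:.3f}")
```

Real fault tree work layers common-cause failure terms and sensitivity analysis on top of this skeleton.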

Yes, here is a fault tree analysis of nuclear war. And here is one for AI.

The Future of Humanity Institute recently hosted a workshop on ALLFED, the focus of Dr. Denkenberger's research.

At our LessWrong community camp the keynote was given by Josh Hall, who talked about why we don't have flying cars. He made a convincing case that the problem is this: in the 1950s, when people predicted flying cars, energy costs had been getting cheaper every year; since the 1970s they haven't, and so the energy required for flying cars is too expensive.

He then went on to say that the same goes for underwater cities.

If we had cheap energy, we would have no problem growing food indoors with LEDs. Currently that only makes economic sense for marijuana and some algae that produce high-quality nutrients. Indoor growing has the advantage that you need fewer pesticides when you can control the environment better.

It seems to me that next-generation nuclear, which has the potential to produce more energy at a lower price, would help make us independent of the sun.

This meshes well with Peter Thiel, Bill Gates, and Sam Altman all having invested money in nuclear solutions.

This could potentially help many decades in the future. But it would take an order-of-magnitude or greater reduction in energy costs for this to produce a lot of food. And I am particularly concerned about one of these catastrophes happening in the next decade.
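A rough sketch of why the energy bill dominates (every efficiency and price figure here is an illustrative assumption, not a number from the thread): a person needs about 2000 kcal/day, roughly 2.3 kWh of food energy. If LEDs convert electricity to light at ~50% efficiency and plants convert delivered light into edible calories at ~1.5%, then

$$\frac{2.3\ \text{kWh}}{0.5 \times 0.015} \approx 300\ \text{kWh of electricity per person per day},$$

which at \$0.10/kWh is on the order of \$30 per person per day for lighting alone. Hence the need for much cheaper energy before LED-grown staple calories become competitive.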

"One obvious candidate for such a generic cost effective safety intervention is a small but fully autonomous city on mars, or antarctica, or the moon, or under the ocean (or perhaps four such cities, just in case) that could produce food independently of the food production system traditionally used on the easily habitable parts of Earth."

That sort of thing might improve the odds for the human race, but it doesn't sound like it would do much for the average person who already exists.

Correct. Once you're to the point of planning for these kinds of contingencies you're mostly talking about the preservation of the spark of human sentience at all in what might otherwise turn out to be a cold and insentient galaxy.

I have done some work on refuges. However, I am most interested in saving nearly everyone and preventing the loss of civilization. This turns out to be cost-effective even if one only cares about the present generation. I am currently working on the cost-effectiveness from a far-future perspective.