denkenberger

Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 134 publications (>4,400 citations, >50,000 downloads, h-index = 34, second most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 300 articles in over 25 countries, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German public radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including talks on food at Harvard University, MIT, Princeton University, the University of Cambridge, the University of Oxford, Cornell University, the University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.


Comments

This is a tricky thing to define, because by some definitions we are already in the five-year countdown of a slow takeoff.

Some people advocate for using GDP, so the countdown would begin when you can see the AI signal in the GDP noise (which we can't yet).

Nuclear triad aside, there's the fact that the Arctic is more than 1000 miles away from the nearest US land (about 1700 miles from Montana, 3000 miles from Texas), and that Siberia is already roughly as close.

Well, there’s Alaska, but yes, part of Russia is only ~55 miles away from Alaska, so the overall point stands that Russia having a greater presence in the Arctic doesn't change things very much.

And of course, there's the fact that the Arctic is made of, well, ice, which melts more and more as the climate warms, and is thus not the best place to build a missile base.

That's not what is being proposed; the proposal is to build more bases in ports on land where the water doesn't freeze as much because of climate change.

If negative effects are worse than expected, it can't be reversed.

I agree that MCB (marine cloud brightening) can be reversed faster, but still being able to reverse in a few years is pretty responsive. There are strong interactions with other global catastrophic risks. For instance, here's a paper arguing that if a catastrophe such as an extreme pandemic disrupts our ability to do solar radiation management (SRM), we could face a double catastrophe of rapid warming plus the pandemic. That would push towards more long-term SRM, such as space-based systems. However, there are also interactions with abrupt sunlight reduction scenarios such as nuclear winter, in which case we would want to be able to turn off the cooling quickly. Having SRM that can be turned off quickly in the case of nuclear winter could make us more resilient to nuclear winter than just reducing CO2 emissions would.

Nice summary! My subjective experience participating as an expert was that I was able to convince quite a few people to update towards greater risk by giving them considerations they had not thought of (and also by clearing up misinterpretations of the questions). But I guess in the scheme of things, it did not add up to much overall change.

What I wanted was a way to quantify what fraction of human cognition has been superseded by the most general-purpose AI at any given time. My impression is that it has risen from under 1% a decade ago to somewhere around 10% in 2022, with a growth rate that looks faster than linear. I've failed so far at translating those impressions into solid evidence.

This is similar to my question of what percent of tasks AI is superhuman at. If we had some idea what percent of tasks AI will become superhuman at in the next generation (e.g., GPT-5), and what percent of tasks the AI would need to be superhuman at in order to take over the world, we might be able to get some estimate of the risk from the next generation.
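To illustrate the kind of estimate I mean, here is a deliberately naive sketch that fits an exponential to the two rough figures in the parent comment and projects forward; the fit, the projection years, and the exponential form are all my assumptions, not evidence:

```python
import math

# Naive illustration only: fit an exponential to the two rough data points
# from the parent comment (<1% of cognition superseded in 2012, ~10% in 2022)
# and project forward. Every number here is an assumption.
f_2012, f_2022 = 0.01, 0.10
growth = math.log(f_2022 / f_2012) / 10   # ~0.23/yr, i.e. ~10x per decade

for year in (2025, 2030, 2032):
    f = f_2022 * math.exp(growth * (year - 2022))
    print(f"{year}: ~{min(f, 1.0):.0%} of tasks (naive exponential extrapolation)")
```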

I agree that indoor combustion producing small particles that penetrate deep into the lungs is a major problem, and there should be prevention/mitigation. But on the dust specifically, I was hoping to see a cost-benefit analysis. Since most household dust is composed of relatively large particles, it typically does not penetrate beyond the nose and throat, and so is more of an annoyance than a threat to your life. So, absent particular risk factors such as peeling lead paint or allergies, I am skeptical that measures such as regular dusting (how frequently are you recommending?), not wearing shoes in the house, or giving up carpet for hardwood floors (losing benefits such as sound absorption) would be cost-effective once you value people's time.

Recall that GPT2030 could do 1.8 million years of work[8] across parallel copies, where each copy is run at 5x human speed. This means we could simulate 1.8 million agents working for a year each in 2.4 months.

You point out that human intervention might be required every few hours, but with different time zones we could have the GPT working at least twice as many hours a week as humans, which would cut the 2.4 months above to roughly one month. As for the current speed, you say it thinks about the same speed to three times as fast as a human. You point out that it also does writing, but verbosely. However, for solving problems like that coding interview, it already appears to be an order of magnitude faster (and that matches my experience solving physical engineering problems).
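As a quick sanity check on that timing, here is a minimal sketch; the 5x speedup and one-year workload are from the quoted post, while the 2x hours-per-week factor is my assumption:

```python
# Back-of-the-envelope: wall-clock time to simulate one year of work per agent.
# From the quoted GPT2030 post: each copy runs at 5x human speed.
# The 2x hours/week factor (round-the-clock shifts via time zones) is my assumption.

human_year_months = 12
speedup = 5                      # each copy thinks 5x faster than a human
months_at_5x = human_year_months / speedup          # 2.4 months
hours_factor = 2                 # GPT works ~2x as many hours per week as a human
months_with_shifts = months_at_5x / hours_factor    # ~1.2 months

print(f"{months_at_5x:.1f} months at 5x speed; "
      f"~{months_with_shifts:.1f} months if also working {hours_factor}x the hours")
```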

AI having scope-sensitive preferences for which not killing humans is a meaningful cost

Could you say more about what you mean? If the AI has no discount rate, leaving Earth to the humans may only require kindness within a few orders of magnitude of one part in a trillion. However, if the AI does have a significant discount rate, then delays could be costly to it. Still, the AI could make much faster progress building a Dyson swarm from the Moon/Mercury/asteroids, whose lower gravity and lack of atmosphere would let it launch material very quickly. My very rough estimate indicates that sparing Earth might only delay the AI a month in taking over the universe. Even that could require a lot of kindness if the AI has a very high discount rate, so maybe training should emphasize the superiority of low discount rates?
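To make the discount-rate tradeoff concrete, here is a minimal sketch under exponential discounting; the one-month delay is my rough estimate above, and the discount rates are purely illustrative:

```python
import math

# Fraction of total discounted future value lost by delaying all payoffs by
# `delay_years`, under continuous exponential discounting at rate r per year.
# Every future payoff shrinks by exp(-r * delay), so the lost fraction is:
def lost_fraction(r: float, delay_years: float) -> float:
    return 1.0 - math.exp(-r * delay_years)

delay = 1 / 12  # sparing Earth delays the takeover by ~1 month (rough estimate)
for r in (0.0, 0.001, 0.01, 0.1, 1.0):  # illustrative discount rates per year
    print(f"r = {r:5.3f}/yr -> gives up {lost_fraction(r, delay):.4%} of future value")
```

With no discounting the delay costs essentially nothing, while at a discount rate of 1/yr the same month costs ~8% of all future value, which is why the required "kindness" depends so strongly on the discount rate.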

I think "50% you die" is more motivating to people than "90% you die" because in the former, people are likely to be able to increase the absolute chance of survival more, because at 90%, extinction is overdetermined.

When asked on Lex's podcast to give advice to high school students, Eliezer responded, "don't expect to live long."

Not to belittle the perceived risk if one believes in a 90% chance of doom in the next decade, but even if one has only a 1% chance of an indefinite lifespan, the expected lifespan of teenagers now is much higher than that of previous generations.
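As a toy expected-value calculation (all numbers are my illustrative assumptions, including capping "indefinite" at 10,000 years to keep the expectation finite):

```python
# Toy expected-lifespan comparison; every number here is an illustrative assumption.
p_long = 0.01            # 1% chance of an indefinite lifespan
long_years = 10_000      # stand-in cap for "indefinite", to keep the sum finite
baseline_years = 80      # rough historical lifespan scale

expected = (1 - p_long) * baseline_years + p_long * long_years
print(f"Expected lifespan: {expected:.0f} years vs ~{baseline_years} historically")
# ~179 years even with the cap, i.e. well above previous generations.
```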
