Rapid AI progress is the greatest driver of existential risk in the world today. But — if handled correctly — it could also empower humanity to face these challenges.
Executive summary
1. Some AI applications will be powerful tools for navigating existential risks
Three clusters of applications are especially promising: epistemic applications, coordination-enabling applications, and risk-targeted applications.
2. We can accelerate these tools instead of waiting for them to emerge
Implications
These opportunities seem undervalued in existential risk work. We think a lot more people should work on this — and the broader “differential AI development” space. Our recommendations:
Some AI applications will help navigate existential risks
Epistemic applications
People are more likely to handle novel challenges well if they can see them coming clearly, and have good ideas about what could be done about them.
Examples of promising epistemic applications[1]
Coordination-enabling applications
Local incentives sometimes prevent groups from achieving outcomes that would benefit everyone. This may make navigating key challenges — for example, coordinating to go slow enough with AI development that we can be justifiably confident it is safe — extremely difficult. Some AI applications could help people to coordinate and avoid such failures.
Better coordination tools also have the potential to cause harm. Notably, some tools could empower small cliques to gain and maintain power at the expense of the rest of society. And commitment tools in particular are potentially dangerous, if they lead to a race to extort opposition by credibly threatening harm; or if humanity “locks in” certain choices before we are really wise enough to choose correctly.[2]
Risk-targeted applications
If these risk-targeted areas (such as AI safety research) are automated early enough relative to the automation of research into AI capabilities, safety techniques might keep up with increasingly complex systems. This could make the difference in whether we lose control of the world to misaligned power-seeking AI systems.[3]
Other applications?
Applications outside of these three categories might still meaningfully help. For instance, if food insecurity increases the risk of war, and war increases the risk of existential catastrophe, then AI applications that boost crop production might indirectly lower existential risk.
But we guess that the highest priority applications will fall into the categories listed above,[4] each of which focuses on a crucial step for navigating looming risks and opportunities:
We can accelerate helpful AI tools
There’s meaningful room to accelerate some applications
To some extent, market forces will ensure that valuable AI applications are developed not too long after they become viable. It’s hard to imagine counterfactually moving a key application forward by decades.
But the market has gaps, and needs time to work. AI is a growth industry: money and talent are flowing in because the available opportunities exceed what is currently being captured. So we should expect there to be some room to counterfactually accelerate any given application by shifting undersupplied capital and labour towards it.
In some cases this room might be only a few months or weeks. This is especially likely for the most obviously economically valuable applications, or those which are “in vogue”. Other applications may be less incentivized, harder to envision, or blocked by other constraints. It may be possible to accelerate these by many months or even years.
Moreover, minor differences in timing could be significant. Even if the speed-up we achieve is relatively small or the period during which the effects of our speed-up persist is short, the effects could matter a lot.
This is because, at a time of rapid progress in AI:
Achieving risk-reducing capabilities before[5] the risk-generating capabilities they correspond to could have a big impact on outcomes
There are promising strategies for accelerating specific AI applications
We have promising strategies that focus on almost all[6] of the major inputs in the development of an AI application.
1. Invest in the data pipeline (including task-evaluation)
High-quality task-specific data is crucial for training AI models and improving their performance on specific tasks (e.g. via fine-tuning), and it’s hard to get high-quality data (or other training signals) in some areas. So it could be very useful to:
In particular, it may be high-leverage to develop evaluation schemes for performance on tasks we care about.[7]
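To make this concrete, here is a minimal sketch of what such an evaluation scheme could look like. Everything in it (the Task structure, the evaluate function, and the toy grading rules) is an illustrative assumption rather than any existing tool's API:

```python
# Minimal sketch of a task-specific evaluation harness (illustrative only).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    prompt: str                    # input handed to the model
    grade: Callable[[str], float]  # maps a model output to a score in [0, 1]

def evaluate(model_fn: Callable[[str], str], tasks: list[Task]) -> float:
    """Run the model on each task and return the mean score."""
    scores = [task.grade(model_fn(task.prompt)) for task in tasks]
    return sum(scores) / len(scores)

# Toy tasks with rule-based grading; real suites would use richer rubrics.
tasks = [
    Task("Give the probability (as a decimal) that a fair coin lands heads.",
         grade=lambda out: 1.0 if "0.5" in out else 0.0),
    Task("Answer YES or NO: is 17 a prime number?",
         grade=lambda out: 1.0 if "YES" in out.upper() else 0.0),
]

if __name__ == "__main__":
    # Stand-in "model" so the sketch runs end to end without any API access.
    dummy_model = lambda prompt: "0.5" if "coin" in prompt else "YES"
    print(f"Mean score: {evaluate(dummy_model, tasks):.2f}")
```

The design choice that matters here is that grading is attached to the task rather than to any particular model, so the same suite can track progress across model generations and scaffolds.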
2. Work on scaffolding and post-training enhancements
Techniques like scaffolding can significantly boost pre-trained models’ performance on specific tasks. And even if the resulting improvement is destined to be made obsolete by the next generation of models, the investment could be worth it if the boost falls during a critical period or creates compounding benefits (e.g. via enabling faster production of high-quality task-relevant data).
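As a rough illustration, here is a minimal sketch of one common scaffolding pattern: decompose the task, attempt each step, and retry a step when a simple self-check fails. The model_fn interface, the prompts, and the retry logic are assumptions made for illustration, not a description of any particular system:

```python
# Minimal sketch of a decompose / attempt / self-check scaffolding loop.
from typing import Callable

def scaffolded_answer(model_fn: Callable[[str], str],
                      question: str,
                      max_retries: int = 2) -> str:
    # Ask the model to break the question into steps (one step per line).
    plan = model_fn(f"List the steps needed to answer:\n{question}")
    steps = [s.strip() for s in plan.splitlines() if s.strip()]

    notes = []
    for step in steps:
        draft = ""
        for _ in range(max_retries + 1):
            draft = model_fn(f"Question: {question}\nStep: {step}\n"
                             f"Earlier notes: {notes}\nAnswer this step concisely.")
            verdict = model_fn(f"Does this text answer the step '{step}'? "
                               f"Reply YES or NO.\n{draft}")
            if verdict.strip().upper().startswith("YES"):
                break
        notes.append(draft)  # keep the best effort even if the check kept failing

    # Synthesise a final answer from the per-step notes.
    return model_fn(f"Question: {question}\n"
                    f"Using these notes, give a final answer:\n" + "\n".join(notes))
```

A wrapper like this can also log the intermediate steps and verdicts, which is one way scaffolding can feed back into producing task-relevant data.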
3. Shape the allocation of compute
As R&D is automated, choices about where compute is spent will increasingly determine the rate of progress on different applications.[8] Indeed, under inference-scaling paradigms this is true beyond R&D: spending more compute at inference time may directly improve an application’s performance. This means it could be very valuable to get AI company leadership, governments, or other influential actors on board with investing in key applications.
4. Address non-AI barriers
For some applications, the main bottleneck to adoption won’t be related to underlying AI technologies. Instead of focusing on AI systems, it might make sense to:
Different situations will call for different strategies. The best approach will be determined by:
The most effective implementation of one of these strategies won’t always be the most direct one. For instance, if high-quality data is the key bottleneck, setting up a prize for better benchmarks might be more valuable than directly collecting the data. But sometimes the best approach for accelerating an application further down the line will involve simply building or improving near-term versions of the application, to encourage more investment.
These methods can generally be pursued unilaterally. In contrast, delaying an application that you think is harmful might more frequently require building consensus. (We discuss this in more detail in an appendix.)
Implications for work on existential risk reduction
Five years ago, working on accelerating AI applications in a targeted way would have seemed like a stretch. Today, it seems like a realistic and viable option. For the systems of tomorrow, we suspect it will seem obvious — and we’ll wish that we’d started sooner.
The existential risk community has started recognizing this shift, but we don’t think it’s been properly priced in.
This is an important opportunity — as argued above, some AI applications will help navigate existential risks and can be meaningfully accelerated — and it seems more tractable than much other work. Moreover, as AI capabilities rise, AI systems will be responsible for increasing fractions of important work — likely at some point a clear majority. Shaping those systems to do more useful work seems like a valuable (and increasingly valuable) opportunity that we should begin preparing for now.
We think many people focused on existential risk reduction should move into this area. Compared to direct technical interventions, we think this will often be higher leverage because of the opportunity to help direct much larger quantities of cognitive labour, and because it is under-explored relative to its importance. Compared to more political interventions, it seems easier for many people to contribute productively in this area, since they can work in parallel rather than jostling for position around a small number of important levers.[9] By the time these applications are a big deal, we think it could easily make sense for more than half of the people focused on existential risk to be working on related projects. And given how quickly capabilities seem to be advancing, and the benefits of being in a field early, we think a significant fraction — perhaps around 30% — of people in the existential risk field should be making this a focus today.
What might this mean, in practice?
1. Shift towards accelerating important AI tools
Speed up AI for current existential security projects
If you're tackling an important problem[10], consider how future AI applications could transform your work. There might already be some benefits to using AI[11], and adopting AI applications earlier than seems immediately worthwhile could help you learn how to automate the work more quickly. You could also take direct steps to speed up automation in your area, by:
Work on new AI projects for existential security
You might also accelerate important AI applications by:
2. Plan for a world with abundant cognition
As AI automates more cognitive tasks, strategies that were once impractically labour-intensive may become viable. We should look for approaches that scale with more cognitive power, or use its abundance to bypass other bottlenecks.
Newly-viable strategies might include, e.g.:
The other side of this coin is that some current work is likely to soon be obsolete. When it’s a realistic option to just wait and have it done cheaply later, that could let us focus on other things in the short term.
3. Get ready to help with automation
Our readiness-to-automate isn’t a fixed variable. If automation is important — and getting more so — then helping to ensure that the ecosystem as a whole is prepared for it is a high priority.
This could include:
Further context
In appendices, we discuss:
Acknowledgements
We’re grateful to Max Dalton, Will MacAskill, Raymond Douglas, Lukas Finnveden, Tom Davidson, Joe Carlsmith, Vishal Maini, Adam Bales, Andreas Stuhlmüller, Fin Moorhouse, Davidad, Rose Hadshar, Nate Thomas, Toby Ord, Ryan Greenblatt, Eric Drexler, and many others for comments on earlier drafts and conversations that led to this work. Owen's work was supported by the Future of Life Foundation.
See e.g. Lukas Finnveden’s post on AI for epistemics for further discussion of this area.
There is a bit more discussion of potential downsides in section 5 of this paper: here.
This strategy has been discussed in many places, e.g. here, here, here, and here.
Not that there is anything definitive about this categorization; we’d encourage people to think about what’s crucial from a variety of different angles.
And note it’s not just about the ordering of the capabilities, but about whether we have them in a timely fashion so that systems that need to be built on top of them actually get built.
The main exceptions are learning algorithms and in most cases architectures, which are typically too general to differentially accelerate specific applications.
In some cases, optimizing for a task metric may result in spillover capabilities on other tasks. The ideal metric from a differential acceleration perspective is one which has less of this property; although some spillover doesn’t preclude getting differential benefits at the targeted task.
AI companies are already spending compute on things like generating datasets to train or fine-tune models with desired properties and RL for improving performance in specific areas. As more of AI R&D is automated (and changing research priorities becomes as easy as shifting compute spending), key decision-makers will have more influence and fine-grained control on the direction of AI progress.
This work may also be more promising than policy-oriented work if progress in AI capabilities outpaces governments’ ability to respond.
Although you should also be conscious that “which problems are important” may be changing fairly rapidly!
As we write this (in March 2025) we suspect that a lot of work is on the cusp of automation — not that there are obvious huge returns to automation, but that there are some, and they’re getting bigger over time.