It is often asserted that time will be more valuable and higher leverage in "crunch time," the period shortly before critical models are deployed.

This brief post provides a starting point for considering three questions:

  1. What is more time (before dangerous AI) good for?
  2. What is different near the end? What makes actions higher (or lower) leverage, or makes adding time more (or less) valuable?
  3. How does buying time now trade off with buying time later?

What is more time good for?

  • Technical AI safety research
  • AI strategy and governance research
  • Governance happening (governments and other actors developing and implementing AI policy)
    • Governments responding to AI is not obviously good but seems positive in expectation, both absolutely and on the margin
  • Paying the alignment tax (if the critical model that needs to be aligned exists)
  • Relevant actors' attitudes become closer to the truth, and their actions become closer to their beliefs
    • Relevant actors here include labs, governments, and the ML community
    • Related: more time to deal with emerging capabilities
  • Field-building, community-building, and resource/influence-gaining

What is different later?

  • AI models are more powerful, boosting technical AI safety research
  • There is more strategic clarity, especially insofar as you recognize that you're near the end
    • There is more clarity about what critical models will look like, helping prioritization in technical AI safety research
    • Maybe prioritization and interventions in strategy and governance improve
  • Attitudes are different; in particular, actors are more aware of AI and AI risk and are trying to take action
    • E.g., governments are trying to respond to AI
    • The space is more crowded; more total influence is exerted on AI
  • Labs can actually pay the alignment tax
  • The AI safety field is larger (and maybe more influential)
  • Maybe the world looks much crazier and changes much faster
  • Maybe new windows of opportunity open for actors to act

Many differences depend on whether actors realize that we're near the end.

Roughly, crunch time is more valuable and higher leverage than the time before it, but this depends on the particular goal at issue, and the exact difference is unclear even for particular goals.

How does buying time now trade off with buying time later?

Largely it doesn't. For example, if policy regimes that slow dangerous AI existed now, that would mostly make similar policy regimes more likely to exist in the future.

In some cases, in your work and in how you exert influence, you can choose to prioritize slowing AI now or preparing to slow it later.

Insofar as leading labs can choose to burn their lead now or later, burning it now prevents them from burning it later.

Slowing leading labs is roughly necessary and sufficient for slowing AI. Some possible ways of slowing leading labs would not slow other labs, or would slow them less.[1] Preserving lead time among labs (or equivalently, avoiding increasing multipolarity) is good because it helps leading labs slow down later, makes coordination among leading labs easier, and may decrease racing (but how "racing" works is unclear). So if you're slowing AI now, try to preserve lead time, to avoid trading off against slowing AI later.[2]


This post expands on "Time is more valuable near the end" in "Slowing AI: Foundations."

Thanks to Olivia Jimenez for discussion.

  1. For example, temporarily banning training runs larger than 10^25 FLOP would let non-leading labs catch up, so it would differentially slow leading labs. On the other hand, increasing the cost of compute would slow all labs roughly similarly, and decreasing the diffusion of ideas would differentially slow non-leading labs.

  2. Sidenote: for policy proposals like "mandatory pause on large training runs for some time," it's not obvious how much this would slow dangerous AI progress, nor how much it would differentially slow the leading labs (burning their lead time).

Comments

    Roughly, crunch time is more valuable and higher leverage than the time before it, but this depends on the particular goal at issue, and the exact difference is unclear even for particular goals.

This would potentially imply that [finding ways to just have more crunch time] would also be worth researching ahead of time. It's a difficult kind of future to forecast, but it could be incredibly valuable if someone, right now, successfully thinks of a reasonable way to keep crunch time in a stable state for a very long time.