AI Strategy Nearcasting

Author: @HoldenKarnofsky

This is a series of pieces taking a stab at a conundrum:

  • I believe this could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.
  • But when it comes to what actions we can take to help such a development go well instead of poorly, it’s hard to say much (with a few exceptions). This is because many actions that would be helpful under one theory of how things will play out would be harmful under another (for example, see my discussion of the “caution” frame vs. the “competition” frame).

It seems to me that in order to take actions more productively (including making more grants), we need to get more clarity on some crucial questions, such as “How serious is the threat of a world run by misaligned AI?” But it’s hard to answer questions like these when we’re talking about a development (transformative AI) that may take place some indeterminate number of decades from now.

This piece introduces one possible framework for dealing with this conundrum. The framework is AI strategy nearcasting: trying to answer key strategic questions about transformative AI, under the assumption that key events (e.g., the development of transformative AI) will happen in a world that is otherwise relatively similar to today's. One (but not the only) version of this assumption would be “Transformative AI will be developed soon, using methods like what AI labs focus on today.”

The term is inspired by nowcasting. For example, the FiveThirtyEight Now-Cast projects “who would win the election if it were held today,” which is easier than projecting who will win the election when it is actually held. I think imagining transformative AI being developed today is a bit much, but “in a world otherwise relatively similar to today’s” seems worth grappling with.