Summary/Introduction

Aschenbrenner’s ‘Situational Awareness’ (Aschenbrenner, 2024) promotes a dangerous narrative of national securitisation. Despite what Aschenbrenner suggests, this narrative is not descriptive but performative: it constructs a particular notion of security that makes the dangerous world he describes more likely to come about.

This piece draws on the work of Nathan A. Sears (2023), who argues that the failure to adequately address plausible existential threats throughout the 20th century stems from a ‘national securitisation’ narrative winning out over a ‘humanity macrosecuritization’ narrative. National securitisation privileges extraordinary measures to defend the nation, often centred on military force and logics of deterrence, balance of power and defence. Humanity macrosecuritization holds that the object of security is to defend all of humanity, not just the nation, and often invokes logics of collaboration, mutual restraint and constraints on sovereignty. Sears uses a number of examples to show that when issues are constructed as matters of national security, macrosecuritization failure tends to occur, and the actions taken often worsen, rather than help, the issue.

This piece argues that Aschenbrenner does exactly this. Firstly, I explain (briefly and very crudely) what securitisation theory is and how it explains the constructed nature of security. Then, I explain Sears’ (2023) main thesis on why Great Powers fail to combat existential threats. This is followed by an explanation of how Aschenbrenner’s construction of security closely resembles the most dangerous narratives examined by Sears (2023), massively favouring national security. Given I view his narrative as dangerous, I then discuss why we should care about Aschenbrenner’s project, as people similar to him have been influential in previous securitisations. Finally, I briefly discuss some further reasons why I think Aschenbrenner’s project is insufficiently justified: especially his failure to adequately consider a pause, and the fact that he is overly pessimistic about international collaboration whilst simultaneously overly optimistic that AGI wouldn’t lead to nuclear war.

There is a lot I could say in response to Aschenbrenner, and I will likely be doing more work on similar topics. I wanted to get this piece out fairly quickly, and it is already very long. This means some of the ideas are a little crudely expressed, without all of the nuances thought through; this is an issue I hope future work will address. This is perhaps most evident in Section 1, where I try to explain and justify securitisation theory very quickly; if you want a more nuanced, in-depth and accurate description of securitisation theory, Buzan, Wæver and de Wilde (1998) is probably the best source. Moreover, much of the piece relies on other sources, so not every idea is justified by the evidence presented in this short post; I am happy to refer you to where in the sources the justifications can be found.

Section 1 - What is securitisation?

Everything we care about is mortal: ourselves, our families, our states, our societies and our entire species. Each of these can face threats to its survival. In response, we allow, and often expect, extraordinary measures to be taken to combat them. This takes different forms for different issues, with both the measures taken and the audience to which they must be legitimised varying from case to case. With COVID, these measures involved locking us in our homes for months. With Islamic terrorism, they involved mass surveillance and detention without trial. With the threat of communism in Vietnam, they involved going to war. In each of these cases, and countless others, the issue can be considered to have been ‘securitised’: it entered a realm where extraordinary measures can be justified in order to ensure survival against a perceived existential threat.

In each of these examples, however, this was never inherent. Many diseases have failed to be securitised, and life has carried on as normal; indeed, many would reject that the sacrifices we made for COVID were even worth it. White nationalist terrorism in the USA never provoked the same level of surveillance and police response as Islamic terrorism. We might dispute that these threats were ever existential to the referent object; indeed, Vietnam turned communist, and within three decades America had won the Cold War. Nonetheless, in each of these examples, and more, from the US invasion of Iraq to the Chechen wars to the treatment of illegal migration, issues have been elevated to the status of ‘existential threat’. This allows them to gain precedence over other issues, and extraordinary measures that are rarely justified become suddenly acceptable, or perhaps even seen as entirely necessary; the normal rules of politics get broken.

These lists of examples have hopefully highlighted how diverse ‘security’ issues can be, and the fact that what counts as a matter of security is constructed, rather than objective. A toy example may further help make this point. Imagine country A builds a coal power plant near the border of country B that, on average, kills 10 of country B’s citizens yearly from air pollution. The idea that country B would be expected to bomb the power plant in response would be considered insane. However, if country A fired a missile into country B and killed 5 citizens, bombing the facility that launched the missile appears to be a live possibility. This highlights that we cannot simply take for granted what counts as a matter of security: in this case, something that kills 5 citizens can be considered more of a security threat than something that kills 10. Rather, we must explain how issues are constructed as matters of security, and what the impact of that construction may be.

Securitisation theory tries to describe and explain this process. An issue becomes securitised when it is declared an existential threat to a referent object by a securitising actor. The referent object is in many cases the state (as in the routinely securitised military sector), but it can be more diverse than that, including, most relevantly here, cases of macrosecuritization: when political units at a higher level than the state are the referent object of security. This could be the defence of a particular civilisation (e.g. the defence of the West, of socialism, of Islam), or even, in the case we will discuss, the defence of all humanity. For more on macrosecuritization, see Buzan and Wæver (2009). The securitising actor is an actor with the authority (in the eyes of the audience) to carry out a successful securitising speech act. The existence of the existential threat then justifies or demands extraordinary measures, beyond the normal rules of politics, that provide a ‘way out’ from the threat. For this move to succeed, however, it must be accepted by the relevant audience to whom the justification is owed; securitisation is thus intersubjective. If relevant actors come to perceive something as a matter of security through this speech act, that thing is securitised, and extraordinary measures that may previously have been impossible to legitimate become legitimate, and perhaps even demanded. I don’t wish to deny that the material world plays a role in how easily an issue can be securitised, but ultimately the social construction of the issue as one of security is what is decisive, and this is accomplished by the securitising speech act.

Section 2 - Sears (2023): The macrosecuritization of existential threats to humanity

Sears (2023) assesses previous examples of dealing with threats that may be perceived as ‘existential’ to humanity as a whole. After all, given that security focuses on survival, it is odd how neglected existential threats to all of humanity have been within securitisation theory, and how neglected securitisation theory has been in XRisk studies. Sears therefore examines the empirical record to understand how, if at all, plausible existential threats to humanity have been securitised, and how this links to whether effective action has been taken. The examples Sears uses are: the international control of atomic energy, the proliferation of nuclear weapons, biological weapons, the ozone hole, nuclear winter, global warming, the prohibition of nuclear weapons, artificial intelligence, the climate emergency, and biodiversity loss. In each of these cases, Sears looks at attempts to carry out ‘macrosecuritization’, where these issues are constructed as existential threats to the whole of humanity which require extraordinary measures to defend all of humanity. However, as discussed, securitization does not always, or even mostly, succeed, and Sears in particular focuses on why the international community often fails to take these threats as seriously as we might think it ought to.

Sears sees each case of ‘macrosecuritization failure’ as occurring where the Great Powers fail to reach consensus that there exists an existential threat to all of humanity that demands extraordinary measures to defend humanity, and that this is a genuine issue of security that takes precedence over other concerns. Running through each empirical example of significant macrosecuritization failure is a failure of the ‘humanity macrosecuritization’ logic to win out over the ‘national securitisation’ logic. The idea is that in each of these cases of existential threats to humanity, two securitisation narratives were at play. The first emphasised the potential for the technology or issue to pose a threat to all of humanity; humanity, not the nation, was the referent object, and thus measures needed to be taken globally to protect humanity as a whole. The second narrative was one of national securitisation, emphasising the survival of the nation, such that extraordinary measures were needed to compete in an international power competition and a fight for supremacy. So, for example, the great powers failed to reach consensus that control over atomic energy should be internationalised and nuclear weapons decommissioned and not built, instead seeing the perceived existential threat of losing out in a possible (and self-fulfilling) arms race as more important than reducing the perceived existential threat to humanity posed by nuclear weapons.

Over various issues, and at various times, different aspects of these narratives gained prominence within the great powers (who serve as the chief securitising actors and audiences for macrosecuritization). The empirical record clearly leaves open the possibility of humanity macrosecuritization ‘winning out’, both because of specific examples of (semi-)successful macrosecuritization (the ozone hole, nuclear winter, nuclear non-proliferation, biological weapons), and because of times where initially promising humanity macrosecuritization occurred, even though it eventually failed (the initial proposals for international control of atomic energy).

National securitisation and humanity securitisation narratives are normally competitive and contrasting. This is because the two modes of securitisation have different referent objects, and so they are always competing over whose interests ought to take priority. Often, these interests diverge. This is because what have typically been considered the best modes of protection against threats to national security are not the security practices that best defend against existential threats to humanity. Typically, national securitisation involves a concern with the balance of power, military force and defence. These are very different from the strategies of mutual restraint and sensible risk assessment needed to combat the risk from AGI (although, for example, the risk of nuclear war may help provide motivation for the latter). Thus, national securitisation shuts down most of the best options available to us (a moratorium, a single international project, possibly even responsible scaling), whilst delivering very little of use in return. It makes a quest for supremacy, rather than a quest for safety, the utmost priority. The space for open questioning and reflection is massively limited, something that may be essential if we are to have AGI that is beneficial.

Definitionally, macrosecuritization failure is, according to Sears (2023), “the process whereby an actor with a reasonable claim to speak with legitimacy on an issue frames it as an existential threat to humanity and offers hope for survival by taking extraordinary action, but fails to catalyze a response by states that is sufficient to reduce or neutralize the danger and ensure the security of humankind.” Thus, if national securitization narratives winning out leads to macrosecuritization failure as Sears (2023) seems to show, definitionally it is dangerous for our ability to deal with the threat. Macrosecuritization success, by contrast, provides the basis for states to engage in modes of protection appropriate to combating existential threats to humanity, namely logics of mutual restraint.

It is also important to note that there exist options that lie beyond securitisation, although national securitization reduces the possibility of these being effective. Much greater discussion is needed as to whether humanity securitisation shuts these options down or not, although the effect would generally be much weaker than for national securitization. This is due to the differences in modes of protection, although a full discussion of this is beyond the scope of this piece.

Much of the existing work in the AI Safety community has focused on approaches that are ‘depoliticised’, ‘normal politics’ or even ‘riskified’, which the logic of securitisation stands in contrast to. The differences between these categories are not central to the argument, but securitised decision-making generally changes the sorts of decisions that can be made. Much of the work the AGI Safety community has done, from technical standards and evals to international reports and various legal obligations, falls into these non-securitised categories. Most technologies are governed in a depoliticised or politicised way: their governance does not gain precedence over other issues, is not considered essential to survival, is not primarily handled by the security establishment, and is considered open for debate by people with contrasting values, as the normal rules of politics and expert decision-making are still ‘in play’. Simple solutions that often centralise power, focused on emergency measures to quickly end the existential threat, stand in contrast to this slower-paced, more prosaic approach based on a plurality of values and end states and balanced interests. For most technologies we can carry out normal cost-benefit trade-offs, rather than singularly focusing on (national) survival. This is why most technologies don’t lead to extraordinary measures like an international pause or a ‘Manhattan Project’. Without any securitization, a lot of the important ideas in AGI Safety, like RSPs, could be carried out, whilst something like ‘the Project’ probably couldn’t be.

National securitization would threaten these safety measures, as they would be seen as a distraction from the perceived need to protect the state’s national security by ensuring supremacy through accelerating AI. This has often been pointed out in discussions of a ‘race’, but even without one existing ‘in reality’, once AGI supremacy is seen as essential to state survival, a ‘race’ will be on even if there is no real competitor. The possibility of slowing down later runs contrary to how securitised issues normally function. Thus, unless there is a perception that the AGI itself is the threat (which lends itself to more humanity macrosecuritization narratives), national securitisation will lead to acceleration and threaten the viability of the most promising strategies to reduce risks from AGI. Betting on national securitisation, therefore, seems like a very dangerous bet. I should note that macrosecuritisation seems to me, if successful, probably safer in the long term than these alternative forms of decision-making. More discussion of securitisation and other logics, and how these intersect with existing actions and theories of victory, may be useful; here I just wanted to point out that the priority securitisation demands may directly reduce the probability that other actions can be successful.

Section 3 - How does this relate to Aschenbrenner’s ‘Situational Awareness’?

Aschenbrenner pursues an aggressively national securitising narrative. His article mentions ‘national security’ 31 times; it mentions ‘humanity’ 6 times. Even within those 6 mentions, he fails to convincingly construct humanity as the referent object of security. The closest he gets is when he says in the conclusion, “It will be our duty to the free world…and all of humanity”. Even in that phrase, a more closed-off, exclusionary, national macrosecuritisation (“the free world”) is given priority over “humanity”, which is added on as an afterthought.

Throughout the article Aschenbrenner makes a much stronger attempt to construct the referent object of security as the United States. For example, he states that “superintelligence is a matter of national security, and the United States must win”, which is as unambiguous a statement of national securitisation as you could construct. Similarly, his so-called “AGI Realism” has three components: “Superintelligence is a matter of national security”, “America must lead” and “We must not screw it up”. Only the last of these makes any reference to a humanity securitisation narrative; the first two are utterly focused on the national security of the United States.

Aschenbrenner also constructs a threatening ‘Other’ that poses an existential threat to the referent object: China. This is in contrast to the more typical construction used by those attempting a humanity securitisation, who posit that superintelligence is itself the threatening ‘Other’. Of the 7 uses of the term ‘existential’ in the text, only 1 unambiguously refers to the existential risk posed to humanity by AGI. 3 refer to the ‘existential race’ with China, clearly indicative of seeing China as the existential threat. This is even clearer when Aschenbrenner states, “The single scenario that most keeps me up at night is if China, or another adversary, is able to steal the automated-AI-researcher-model-weights on the cusp of the intelligence explosion”. This highlights exactly where Aschenbrenner sees the threat coming from, and the prominence he gives it. The existential threat is not constructed as the intelligence explosion itself; it is simply “China, or another adversary”.

It is true that Aschenbrenner doesn’t always see himself as protecting America alone, but the free world as a whole, and probably, on his own views, this means he is protecting the whole world. He isn’t, seemingly, motivated by pure nationalism, but rather by a belief that American values must ‘win’ the future. His position might therefore be framed as a form of “inclusive universalism”: an ideological belief that seeks to improve the world for everyone, in the way liberalism, communism, Christianity or Islam do. However, inclusive universalism rarely concerns itself with the survival of humanity, and fails to genuinely ‘macrosecuritise humanity’; in practice it looks very similar to national securitisation. Indeed, some of the key examples of this, such as the Cold War, highlight how easily it overlaps with, and comes to look identical to, national securitisation. So whilst Aschenbrenner may not be chiefly concerned with the nation, his ideology will cash out in practice as national securitisation, and indeed it is those for whom the nation is the chief concern that he hopes to influence.

Section 4 - Why Aschenbrenner's narrative is dangerous and the role of expert communities

It is clear, then, that Aschenbrenner pursues exactly the narratives that Sears argues lead to macrosecuritization failure, and therefore to a failure to adequately deal with existential threats. But my claim goes further: not only is Aschenbrenner wrong to support a national securitisation narrative, but ‘Situational Awareness’ is a dangerous piece of writing. Is this viewpoint justifiable? After all, Aschenbrenner is essentially a 23-year-old with an investment firm. However, I think such confidence that he doesn’t matter would be misplaced, and I think his intellectual and political project could be, at least to an extent, impactful. Aschenbrenner chose the dangerous path with little track record of positive outcomes (national securitisation) over the harder, by no means guaranteed, but safer pathway with at least some track record of success (humanity macrosecuritisation); the impacts of this could be profound if it gains more momentum.

Firstly, Aschenbrenner’s project is to influence the form securitisation takes, so either you think this is important (and therefore dangerous), or you think Aschenbrenner’s work is irrelevant. Given the historical securitisation of AI as a technology for national supremacy (Sears, 2023), I do think promoting the national securitisation of superintelligence may be easier than promoting humanity securitisation. So it may be, in the words of one (anonymous) scholar I spoke to about this, that “He had a choice between making a large impact and making a positive one. He chose to make a large impact.”

Secondly, it’s clear that epistemic expert communities, of which the AI Safety community is clearly one, have played a significant role in contesting securitisations in the past. In the cases of climate change, nuclear winter and the ozone hole, for example, macrosecuritisation by expert communities has been to various degrees successful. Moreover, such communities were significant actors in the attempted macrosecuritisation of atomic energy in the 1940s, although they were utterly outmanoeuvred by those who favoured national securitisation. It is notable that Aschenbrenner compares his peer group to “Szilard and Oppenheimer and Teller”, all of whom had significant influence over the existence of the bomb in the first place. Szilard and Oppenheimer ultimately failed in their later attempts to ensure safety, and were racked by guilt. Teller is an even more interesting example for Aschenbrenner to look up to; he fervently favoured national securitisation, and wanted to develop nuclear bombs considered so destructive that Congress blocked his proposals. Moreover, during the debates around ‘Star Wars’, he played a clear role in getting untrue narratives about Soviet capabilities accepted within the US national security and political apparatus: a failure of ‘situational awareness’ driven by his own hawkishness, which drove escalation and (likely) decreased existential safety (Oreskes and Conway, 2011). Perhaps Teller is an analogy that may be instructive for Aschenbrenner. It is clear that Aschenbrenner sees the potential influence of expert communities on securitisation, but he strangely decides he wishes to be in the company of the men whose actions around securitisation arguably ushered in the ‘time of perils’ we find ourselves in.

We have further reason to think the AI Safety community could be very impactful with regards to securitisation. Those affiliated with the community already hold positions of influence, from Jason Matheny as CEO of RAND, to Paul Christiano at the US AISI, to members of the UK civil service and those writing for prominent media outlets. These are the potential vehicles of securitisation, and it therefore seems genuinely plausible that humanity securitisation (or indeed national securitisation) narratives could be successfully propagated by the AI Safety community to (at least some) major or great powers. Moreover, the AGI Safety community seems uniquely well financed and focused compared to the expert communities involved in many previous examples. The nuclear experts were predominantly physicists, who only engaged in their humanity securitisation activism after the end of the Manhattan Project, once they had lost much of their influence and access; others did it alongside their physics work, which limited the time they had, compared to a vastly better-resourced opposition. Climate scientists, and bodies such as the IPCC, have also been important securitising actors, although their success has been mixed. Climate scientists were, especially in the early days, vastly outspent by the ‘merchants of doubt’ (Oreskes and Conway, 2011) who acted to desecuritise climate change. Whilst this risk could very much exist in the case of AGI, and early indications suggest many companies will try, I am sceptical that macrosecuritisation will be as difficult as in the climate case. Firstly, there are many features of climate change that make securitisation especially hard (Corry, 2012): the long time horizons, the lack of a simple ‘way out’, the lack of immediate measures that can be taken, and the need for long-term structural transition. These don’t seem to apply to AGI governance (especially if a moratorium strategy is taken). Furthermore, given the narratives that many leading AGI companies have already adopted (i.e. very publicly acknowledging the possibility of an existential risk to humanity from their products) (Statement on AI risk, 2023), it is harder for them to desecuritise AI than it was for the oil industry to desecuritise climate change (although oil companies’ own scientists did identify climate change early, their acknowledgement of it was far weaker than the statements put out by the AGI companies). Finally, the extraordinary measures needed to limit AGI, such as a moratorium on development, impact a much smaller number of people than any extraordinary measures needed in the case of climate change.

So it seems that Aschenbrenner, whilst perhaps having a greater chance of success by supporting national securitisation, has taken an action that could plausibly have very dangerous consequences, and has turned down the opportunity to have a (probably smaller) positive impact by embracing humanity securitisation. However, it is important to note that most of Aschenbrenner’s impact will depend on how his ideas are legitimised, supported or opposed by the AI Safety community. The role of communities as a whole has tended to be more significant than the role of particular individuals (Sears, 2023), and outside the AI Safety community it’s not clear how seriously Aschenbrenner is taken; for example, an Economist article about ‘Situational Awareness’ failed to even mention his name. Thus, the type of macrosecuritisation (or whether there is any at all) is far from out of our hands yet, but it is an issue we must take seriously, and one I hope future work will explore.

Shutting the issue entirely behind closed doors within the national security establishment, as national securitisation via ‘the Project’ (as Aschenbrenner calls it) would do, makes a good future much harder to achieve. In other examples, such as the control of atomic energy, the initial push of the issue into the national security establishment meant that the scientists who wanted safety got sidelined (memorably depicted in the film Oppenheimer). If we nationally securitise AGI, we risk losing the ability to take many protective measures, and risk losing the influence of safety considerations over AI. If discussions around AGI become about national survival, the chances that we all lose massively increase. The public, in many ways, seems to take the risks from AGI more seriously than governments have, so taking strategy ‘behind closed doors’ seems dangerous. We too quickly close down the options available to us, increase the power of those who pose the largest danger to us at present (i.e. the AGI companies and developers), and reduce the ability to hold them to account. This doesn’t mean that some measures (e.g. restricting the proliferation of model weights, better cyber-security) aren’t useful, but these could easily be carried out as part of ‘normal politics’, or even ‘depoliticised’ decision-making, rather than as part of nationally securitised decision-making.

One may claim, as Aschenbrenner does, that with enough of a lead the USA would have time to take these other options. However, taking these other options may be very hard if AI is considered a matter of national security, where even with a lead, logics of survival and supremacy will dominate. As seen with the ‘missile gap’ during the Cold War, or the continuation of the Manhattan Project after the failure of the Nazi bomb project, it is very easy for the national security establishment to perceive itself as being in a race when it is not in fact in one (Belfield and Ruhl, 2022). So for the advantages of a healthy lead to be reaped, de-(national)securitization would then need to happen; but for the healthy lead to come about through ‘the Project’, significant national securitisation is needed in the first instance. Moreover, if AI supremacy, or at least parity, is considered essential for survival, a truly multilateral international project (like the MAGIC proposal) seems infeasible. States, having established AGI as a race, would lack the trust to collaborate with each other. The failure of the Baruch Plan, (partially) for these exact reasons, provides good evidence that national securitisation cannot be the basis for existential safety through collaboration, which rules out many theories of victory as feasible. Humanity securitisation leaves all of these options open.

Section 5 - The possibility of a moratorium, military conflict and collaboration

I cannot discuss every point, but there are a number of aspects core to Aschenbrenner’s thesis that national securitisation is the way forward which are worth rebutting.

Firstly, he barely considers the option of pausing or slowing AI development. In ‘Situational Awareness’ this is dismissed with a simple “they are clearly not the way”. He also uses his national securitisation as an argument against pausing (“this is why we cannot simply pause”), but then uses his so-called AGI realism, which is itself generated by the claim that pausing is “not the way”, to support his national securitisation. Those who wish to argue comprehensively for as dangerous a strategy as Aschenbrenner’s (i.e. proceeding quickly to build a technology we can’t pause) must at least provide substantive justification for why a pause, a moratorium or a single international project isn’t possible. Aschenbrenner entirely fails to do this.

In fact, on his own model of how the future plays out, pausing may be easier than some assume. Given the very high costs involved in making AGI, it seems likely that only a very small number of actors can pursue it, and that heavy national securitisation of AGI is required to do so; this is the point of ‘Situational Awareness’. If we avoid such extreme national securitisation, a moratorium may be much easier, and this wouldn’t even require strong ‘humanity macrosecuritisation’ of the issue. If he is one or two orders of magnitude out on the costs of AGI, it only becomes possible with an extremely large government-funded project; getting there would be so costly that the only way to do it is to successfully securitise AGI such that the Project takes priority over other political and economic considerations. One may therefore think that without the strong securitisation Aschenbrenner proposes, AGI timelines are simply much longer; he essentially wants to burn up much of the existing timeline by securitising the issue. Moreover, without successful national securitisation, the huge costs of AGI may make pausing seem a much lower cost than many have imagined, and therefore make pausing much more plausible: all states have to do is forego a very large cost, and a danger, that they may not have wanted to invest in in the first place.

Secondly, Aschenbrenner under-appreciates the potential risks of military conflict arising from national securitisation of AGI, and how this affects the possibility of collaboration. Aschenbrenner argues that superintelligence would give a ‘decisive strategic advantage’; more importantly, he seems to suggest that this is ‘decisive even against nuclear deterrents’. If multiple nuclear powers appreciate the gravity of the situation, which Aschenbrenner suggests is exactly what will happen (certainly he thinks China will), then the potential for military, and even nuclear, conflict in the months or years leading up to the intelligence explosion massively increases. The lack of great power conflict post-WW2 has been maintained, at least in part, by nuclear deterrence; if an adversary were seen as able to break deterrence, a preemptive strike, using either conventional or nuclear capabilities, may be seen as justified or necessary to prevent this. For such a military conflict to be ruled out, one would suspect that the USA would have to be able to break deterrence before its adversaries knew it was anywhere near doing so. Given the very large costs and infrastructure involved in ‘the Project’, this seems unlikely unless China had no ‘situational awareness’; but if China has no ‘situational awareness’, many of Aschenbrenner’s other arguments about the necessity of racing are not viable. According to many forms of realism, on which Aschenbrenner’s arguments seem to be (crudely) based, the chances of a first strike to prevent deterrence being broken massively increase. A war also seems to be the default outcome in the game ‘Intelligence Rising’, largely due to similar dynamics.

This also suggests to me that Aschenbrenner underestimates the possibility of collaboration. States have an interest in not breaking deterrence, given the potential consequences, and once the danger becomes clear, collaboration seems more plausible. States may come to see the development of AGI as a race that can never be won, because of other states’ responses, and as solely a tool of destabilisation rather than something over which an advantage can be gained. The pressures driving development would therefore be reduced, and this may be possible even if awareness of the dangers of rogue superintelligence were not widespread. The fact that this technology that breaks the military balance of power does not yet exist may also make negotiations easier; for example, one of the key reasons the Baruch Plan failed was that the US already had the bomb, and the Soviets were not willing to give up building the bomb until the US gave up the bomb and the balance of power was restored. Given superintelligence does not yet exist, and neither side can be sure it would win a race, it may be in both sides’ best interests to forego development to maintain a balance of power, and thus peace. This would also suggest that, as long as both sides’ surveillance of the other were good enough, they may be able to reasonably guard against a secret ‘Project’, allowing for a more durable agreement underpinned by an implicit threat of force. Notably, these ideas seem to follow roughly from my interpretation of Aschenbrenner’s (underexplained) model of international politics.
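
To make the logic of the previous paragraph concrete, here is a minimal toy sketch (my own illustration, not anything drawn from Aschenbrenner or Sears): a two-player ‘restrain vs. race’ game with purely hypothetical ordinal payoffs, in which racing is assumed to be unwinnable (because the other side responds) and destabilising. Under those assumptions, mutual restraint is the only pure-strategy Nash equilibrium; change the payoffs (say, if one side believes it really can win outright) and the conclusion changes. The point is only that which equilibrium obtains depends on how the situation is constructed, which is precisely what securitisation narratives shape.

```python
# A toy 2x2 "race vs. restrain" game with hypothetical payoffs (illustrative only,
# not empirical): it shows that if racing is seen as unwinnable and destabilising,
# mutual restraint can be a (here, the only) pure-strategy Nash equilibrium.

from itertools import product

# Hypothetical ordinal payoffs (row player, column player). Assumptions baked in:
#  - (restrain, restrain): balance of power preserved, no destabilisation risk.
#  - (race, restrain) / (restrain, race): the racer gains little in expectation,
#    since the other side is assumed to respond, while destabilisation risk rises.
#  - (race, race): both bear the costs and risks of an unwinnable race.
payoffs = {
    ("restrain", "restrain"): (3, 3),
    ("restrain", "race"):     (1, 2),
    ("race",     "restrain"): (2, 1),
    ("race",     "race"):     (0, 0),
}

strategies = ["restrain", "race"]

def best_responses(player, other_strategy):
    """Return the strategies maximising `player`'s payoff, holding the other fixed."""
    def payoff(s):
        profile = (s, other_strategy) if player == 0 else (other_strategy, s)
        return payoffs[profile][player]
    best = max(payoff(s) for s in strategies)
    return {s for s in strategies if payoff(s) == best}

# A profile is a pure-strategy Nash equilibrium if each player's strategy is a
# best response to the other's.
equilibria = [
    (r, c) for r, c in product(strategies, strategies)
    if r in best_responses(0, c) and c in best_responses(1, r)
]

print("Pure-strategy Nash equilibria:", equilibria)
# With these hypothetical payoffs: [('restrain', 'restrain')]
```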

Finally, understanding security constellations may show how durable shifts away from competitive dynamics and towards an enduring moratorium may be possible. Studies of regional securitisations nested under higher-level securitisations, such as during the Cold War, have shown how the most powerful macrosecuritisations can impose a hierarchy on the lower-level securitisations that compose them. These rivalries were often limited by the macrosecuritisation imposed over the top of them. If the threat from AGI becomes clear to governments - if they gain ‘situational awareness’ - then a macrosecuritisation that structures national rivalries beneath it seems at least possible, allowing for a moratorium and collaboration. However, this requires the macrosecuritisation to be of a shared threat, and strong enough to overcome the lower-level rivalries (as with the original proposals for international nuclear control), rather than a shared construction of other states as the existential threat (as during the Cold War). Whilst the rebuilding of the international order to protect against the threat of nuclear weapons never did occur, it certainly wasn’t impossible - yet Aschenbrenner, despite accepting that states will see AGI as such a big deal that they will divert significant percentages of their GDP to it, never considers that this is possible for AGI.

One objection here may be that the time we have is simply too short for this. Under Aschenbrenner’s timelines, I have some sympathy for this objection. However, it should be noted that the formulation and negotiation of the Baruch Plan took only a year. Moreover, an initial, more temporary pause or slow-down would itself buy time for exactly this to happen. Burning up whatever safety cushion we have through national securitisation reduces the chances of this coming about.

Conclusion

My analysis has been far from comprehensive: it is not a full defence of the plausibility of humanity macrosecuritization, nor a full defence of slowing AI development.

Nonetheless, I have argued a number of points. Aschenbrenner pursues an aggressively national securitising narrative, undermining humanity macrosecuritization. This fulfils the very criteria that Sears (2023) finds are most conducive to macrosecuritization failure, and thus to a failure to combat existential threats effectively. Moreover, Aschenbrenner’s narrative, if it gains acceptance, makes existing efforts to combat XRisk much less likely to succeed as well. Thus, Aschenbrenner shuts down the options available to us to combat AGI XRisk, whilst offering a narrative that is likely to make the problem worse.

Aschenbrenner fails to consider that the narrative of national securitisation is far from inevitable; it is shaped by the great powers, and by the actors, including expert communities, who communicate with them, their publics, and their political and security establishments. Without national securitisation, ‘the Project’ seems unlikely to happen, so Aschenbrenner is in effect actively agitating for it to come about. This means that Aschenbrenner, far from undertaking a purely descriptive project, is helping dangerous scenarios come about. Indeed, there seems to be some (implicit) awareness of this in the piece: the reference class Aschenbrenner uses for himself and his peers is “Szilard and Oppenheimer and Teller”, men who are, at least to a degree, responsible for the ‘time of perils’ we are in today.

Aschenbrenner largely fails to consider alternatives, or the consequences of a nationally securitised race. He fails to adequately consider the possibility of a moratorium, to explain why it wouldn’t work, or to examine how it could ensure long-term safety. He fails to consider how the risk of superintelligence breaking nuclear deterrence could increase the chances of military conflict if both sides nationally securitise their ‘Projects’. He also fails to see how the possibility of this happening might increase the chances of collaboration, if states don’t see the development of AGI as inevitable.

As a community, we need to stay focused on ensuring existential safety for all of humanity. Extreme hawkishness on national security has a very poor track record of increasing existential safety.

Aschenbrenner, L. (2024) Situational Awareness: The Decade Ahead. Available at: https://situational-awareness.ai/ (Accessed: 12 July 2024).

Belfield, H. and Ruhl, C. (2022) Why policy makers should beware claims of new ‘arms races’, Bulletin of the Atomic Scientists. Available at: https://thebulletin.org/2022/07/why-policy-makers-should-beware-claims-of-new-arms-races/ (Accessed: 12 July 2024).

Buzan, B. and Wæver, O. (2009) ‘Macrosecuritisation and security constellations: reconsidering scale in securitisation theory’, Review of International Studies, 35(2), pp. 253–276.

Buzan, B., Wæver, O. and de Wilde, J. (1998) Security: A New Framework for Analysis. Lynne Rienner Publishers.

Corry, O. (2012) ‘Securitisation and “riskification”: Second-order security and the politics of climate change’, Millennium: Journal of International Studies, 40(2), pp. 235–258.

Oreskes, N. and Conway, E.M. (2011) Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury Press.

Sears, N.A. (2023) Great Power Rivalry and Macrosecuritization Failure: Why States Fail to ‘Securitize’ Existential Threats to Humanity. Edited by S. Bernstein. PhD. University of Toronto.

Statement on AI risk (2023) Center for AI Safety. Available at: https://www.safe.ai/work/statement-on-ai-risk (Accessed: 12 July 2024).

Comments

(crossposted to EA forum)

I agree with many of Leopold's empirical claims, timelines, and analysis. I'm acting on them myself in my planning as something like a mainline scenario.

Nonetheless, the piece exhibited some patterns that gave me a pretty strong allergic reaction. It made or implied claims like:

  • a small circle of the smartest people believe this
  • i will give you a view into this small elite group who are the only who are situationally aware
  • the inner circle longed tsmc way before you
  • if you believe me; you can get 100x richer -- there's still alpha, you can still be early
  • This geopolitical outcome is "inevitable" (sic!)
  • in the future the coolest and most elite group will work on The Project. "see you in the desert" (sic)
  • Etc.

Combined with a lot of retweets, with praise, on launch day, that were clearly coordinated behind the scenes; it gives me the feeling of being deliberately written to meme a narrative into existence via self-fulfilling prophecy; rather than inferring a forecast via analysis.

As a sidenote, this felt to me like an indication of how different the AI safety adjacent community is now to when I joined it about a decade ago. In the early days of this space, I expect a piece like this would have been something like "epistemically cancelled": fairly strongly decried as violating important norms around reasoning and cooperation. I actually expect that had someone written this publicly in 2016, they would've plausibly been uninvited as a speaker to any EAGs in 2017.

I don't particularly want to debate whether these epistemic boundaries were correct --- I'd just like to claim that, empirically, I think they de facto would have been enforced. Though, if others who have been around have a different impression of how this would've played out, I'd be curious to hear.

(crossposted to the EA Forum)

Nonetheless, the piece exhibited some patterns that gave me a pretty strong allergic reaction. It made or implied claims like:
* a small circle of the smartest people believe this
* i will give you a view into this small elite group who are the only who are situationally aware
* the inner circle longed tsmc way before you
* if you believe me; you can get 100x richer -- there's still alpha, you can still be early
* This geopolitical outcome is "inevitable" (sic!)
* in the future the coolest and most elite group will work on The Project. "see you in the desert" (sic)
* Etc.

These are not just vibes - they are all empirical claims (except the last maybe). If you think they are wrong, you should say so and explain why. It's not epistemically poor to say these things if they're actually true.

It's not epistemically poor to say these things if they're actually true.

Invalid. 

Compare: 

A: "So I had some questions about your finances, it seems your trading desk and exchange operate sort of closely together? There were some things that confused me..."

B: "our team is 20 insanely smart engineers" 

A: "right, but i had a concern that i thought perhaps ---"

B: "if you join us and succeed you'll be a multi millionaire"  

A: "...okay, but what if there's a sudden downturn ---" 

B: "bull market is inevitable right now"

 

Maybe not false. But epistemically poor form. 

(crossposted to the EA Forum)

(😭 there has to be a better way of doing this, lol)

kave:

(I don't understand your usage of "sic" here. My guess from the first was that you meant it to mean "he really said this obviously wrong thing", but that doesn't quite make sense with the second one).

I mean that in both cases he used literally those words.

Ruby:

Sic is short for the Latin phrase sic erat scriptum, which means thus it was written. As this suggests, people use sic to show that a quote has been reproduced exactly from the source – including any spelling and grammatical errors and non-standard spellings.


I was only familiar with sic to mean "error in original" (I assume kave also), but this alternative use makes sense too.

FWIW I was also confused by this usage of sic, bc I've only ever seen it as indicating the error was in the original quote. Quotes seem sufficient to indicate you're quoting the original piece. I use single quotes when I'm not quoting a specific person, but introducing a hypothetical perspective.  

tbf I never realized "sic" was mostly meant to point out errors, specifically. I thought it was used to mean "this might sound extreme --- but I am in fact quoting literally"

I would broadly support a norm of ‘double quotation marks means you’re quoting someone and single quotes means you are not’.

The sole reason I don’t do this already is because often I have an abbreviated word, like I did with ‘you’re’ above, and I feel like it’s visually confusing to have an apostrophe inside of the pair of single quotes.

Maybe it’s worth just working with it anyway? Or perhaps people have a solution I haven’t thought of? Or perhaps I should start using backticks?

For my taste, the apostrophe in "you're" is not confusing because quotations can usually only end on word boundaries.

I think (though not confidently) that any attempt to introduce specific semantics to double vs. single quotes is doomed, though. Such conventions probably won't reach enough adoption that you'll be able to depend on people adhering to or understanding them.

(My convention is that double quotes and single quotes mean the same thing, and you should generally make separately clear if you're not literally quoting someone. I mostly only use single quotes for nesting inside double quotes, although the thing I said above about quote marks only occurring on word boundaries make this a redundant clarification.)

Phib:

(Cross comment from EAF)
Thank you for making the effort to write this post. 

Reading Situational Awareness, I updated pretty hardcore into national security as the probable most successful future path, and now find myself a little chastened by your piece, haha [and just went around looking at other responses too, but yours was first and I think it's the most lit/evidence-based]. I think I bought into the "Other" argument for China and authoritarianism, and the ideal scenario of being ahead in a short timeline world so that you don't have to even concern yourself with difficult coordination, or even war, if it happens fast enough. 

I appreciated learning about macrosecuritization and Sears' thesis, if I'm a good scholar I should also look into Sears' historical case studies of national securitization being inferior to macrosecuritization. 

Other notes for me from your article included: Leopold's pretty bad handwaviness around pausing as simply, "not the way", his unwillingness to engage with alternative paths, the danger (and his benefit) of his narrative dominating, and national security actually being more at risk in the scenario where someone is threatening to escape mutually assured destruction. I appreciated the note that safety researchers were pushed out of/disincentivized in the Manhattan Project early and later disempowered further, and that a national security program would probably perpetuate itself even with a lead.

 

FWIW I think Leopold also comes to the table with a different background and set of assumptions, and I'm confused about this but charitably: I think he does genuinely believe China is the bigger threat versus the intelligence explosion, I don't think he intentionally frames the Other as China to diminish macrosecuritization in the face of AI risk. See next note for more, but yes, again, I agree his piece doesn't have good epistemics when it comes to exploring alternatives, like a pause, and he seems to be doing his darnedest narratively to say the path he describes is The Way (even capitalizing words like this), but...

One additional aspect of Leopold's beliefs that I don't believe is present in your current version of this piece, is that Leopold makes a pretty explicit claim that alignment is solvable and furthermore believes that it could be solved in a matter of months, from p. 101 of Situational Awareness:

Moreover, even if the US squeaks out ahead in the end, the difference between a 1-2 year and 1-2 month lead will really matter for navigating the perils of superintelligence. A 1-2 year lead means at least a reasonable margin to get safety right, and to navigate the extremely volatile period around the intelligence explosion and post-superintelligence.77 [NOTE] 77 E.g., space to take an extra 6 months during the intelligence explosion for alignment research to make sure superintelligence doesn’t go awry, time to stabilize the situation after the invention of some novel WMDs by directing these systems to focus on defensive applications, or simply time for human decision-makers to make the right decisions given an extraordinarily rapid pace of technological change with the advent of superintelligence.

I think this is genuinely a crux he has with the 'doomers', and to a lesser extent the AI safety community in general. He seems highly confident that AI risk is solvable (and will benefit from gov coordination), contingent on there being enough of a lead (which requires us to go faster to produce that lead) and good security (again, increase the lead).

Finally, I'm sympathetic to Leopold writing about the government as better than corporations to be in charge here (and I think the current rate of AI scaling makes this at some point likely (hit proto-natsec level capability before x-risk capability, maybe this plays out on the model gen release schedule)) and his emphasis on security itself seems pretty robustly good (I can thank him for introducing me to the idea of North Korea walking away with AGI weights). Also just the writing is pretty excellent.

Thank you @GideonF for taking the time to post this! This deserved to be said and you said it well. 

Excellent work.

To summarize one central argument in briefest form:

Aschenbrenner's conclusion in Situational Awareness is wrong in overstating the claim.

He claims that treating AGI as a national security issue is the obvious and inevitable conclusion for those that understand the enormous potential of AGI development in the next few years. But Aschenbrenner doesn't adequately consider the possibility of treating AGI primarily as a threat to humanity instead of a threat to the nation or to a political ideal (the free world). If we considered it primarily a threat to humanity, we might be able to cooperate with China and other actors to safeguard humanity.

I think this argument is straightforwardly true. Aschenbrenner does not adequately consider alternative strategies, and thus his claim of the conclusion being the inevitable consensus is false.

But the opposite isn't an inevitable conclusion, either.

I currently think Aschenbrenner is more likely correct about the best course of action. But I am highly uncertain. I have thought hard about this issue for many hours both before and after Aschenbrenner's piece sparked some public discussion. But my analysis, and the public debate thus far, are very far from conclusive on this complex issue.

This question deserves much more thought. It has a strong claim to being the second most pressing issue in the world at this moment, just behind technical AGI alignment.

Some bits of this felt a bit too much like an appeal to authority to me.

I am also wary about attempts that try to abstract the situation too much. In many ways, AI is an unusual technology. Maybe it's my own reading comprehension, but I came away from this without really understanding why you think humanity securitisation is feasible for AI.

Thus, if national securitization narratives winning out leads to macrosecuritization failure as Sears (2023) seems to show, definitionally it is dangerous for our ability to deal with the threat.

Definitionally? You can't prove facts about the world by definition.

For what it's worth, I'm increasingly coming over to Leopold's side. There seems to be far too much political opposition to any sensible action and I don't know how we can overcome this without conducting more outreach to the national security folks. One reason I've shifted more in this direction is that Trump is looking likely to win the nomination and JD Vance is his running mate. JD Vance seems to think that AI safety is all about wokeness. The national security folks might have enough pull with the Republicans to make a difference here.

So I think I disagree that this is an appeal to authority. My constant reliance on Sears (2023) is not because Sears is an authority, but because I think it's a good piece of work. I've tried summarising it here, but the reason I don't lay out the entire argument is that it is based on 3 detailed case studies and a number of less detailed case studies; if I'd tried to lay them out here, the post would have been far too long. I hope people read the underlying literature that I base my argument on - I buy the arguments because they are compelling, not because of authority.

I think looking for analogies through history, and what strategies have led to success and failure, is a very useful, albeit limited, approach. Aschenbrenner also seems to as well. I don't fully argue why I think we could get humanity securitisation, but my various arguments can be summarised with the two following points:

  • National securitisation, whilst often the 'default' is by no means inevitable. Humanity securitisation can win out, and also it is possible AGI is never truly securitised (both of these are probably safer paths)
  • The reason national securitisation wins out is due to it 'winning' in a struggle of narratives. Expert epistemic communities, like the AGI Safety community, can play a role in this, as could other forms of political work as well. 

 

The 'Definitionally' point is sort of poor writing from me. The definition of macrosecuritisation failure includes within it 'failure to combat the existential threat'. So if I can prove it leads to macrosecuritisation failure, that means our ability to deal with the threat is reduced; if our ability to deal with the threat is not reduced, then it would not be macrosecuritisation failure. So the point one would contest is whether national securitisation winning out causes macrosecuritisation failure, as that incorporates the dangerous outcomes in its definition - however, I do agree, I worded this poorly. I actually do think this definition is quite slippery, and I can think of scenarios in which macrosecuritisation fails but you still get effective action, but this is somewhat beside the point.

I am also somewhat pessimistic about a Trump II administration for macrosecuritisation and pausing. But this doesn't mean that I think Aschenbrenner's viewpoint is the 'best of the rest' - it's amongst the worst. As I argue in the piece, national securitisation has some real significant dangers that have played out many times, and would undermine the non-securitised governance efforts so far, so it's not clear to me why we ought to support it if macrosecuritisation won't work. Aschenbrenner's model of AI governance seems more dangerous than these other strategies, and there are other things Republicans care about beyond national security, so it's not obvious to me why this is where they should go. The track record of national security has been (as shown) very poor, so I don't know why pessimism around macrosecuritisation should make you endorse it.

The definition of macrosecuritisation failure includes within it 'failure to combat the existential threat'.


This seems like a poorly chosen definition that's simply going to confuse any discussion of the issue.

The track record of national security has been (as shown) very poor, so I don't know why pessimism around macrosecuritisation should make you endorse it.

If neither macrosecuritisation or a pause a likely to occur, what's the alternative if not Aschenbrenner?

(To clarify, I'm suggesting outreach to the national security folks, not necessarily an AI Manhattan project, but I'm expecting the former to more or less inevitably lead to the latter).