The Crux List

The original text is included as a backup, but it formats much better on Substack, and I haven’t yet had time to re-format it for WordPress or LessWrong.

Introduction

This post is a highly incomplete list of questions where I either have large uncertainty, have observed strong disagreement with my perspective, or both, and where changing someone’s mind could plausibly change their assessment of how likely a catastrophe from loss of control of AGI is to occur, or how likely such a catastrophe is conditional on AGI being developed.

I hope to continue expanding and editing this list over time, if it proves useful enough to justify that, perhaps adding links as well, and I encourage readers to suggest additional questions or other ways to improve it.

The failure of this list to converge on a small number of core crux-style questions, I believe, reflects and illustrates the problem space, and helps explain why these questions have been so difficult and have resulted in such wide and highly confident disagreements. There is no compact central disagreement. There are many different ones that influence and interact with each other in complex ways, and different people emphasize and focus on different aspects, and bring different instincts, heuristics, experiences and knowledge.

When looking through this list, you may encounter questions that did not even occur to you to consider, either because you did not realize the answer was non-obvious, or because the consideration never occurred to you in the first place. Those could be good places to stop and think.

A lot of these questions take the form of ‘how likely is it, under Y conditions, that X will happen?’ It is good to note such disagreements, while also noticing that many such questions come out of hopeful thinking or searching for and backward chaining from non-catastrophic outcomes or the prospect of one. Usually, if your goal is to figure things out rather than to locate a dispute, the better question to ask in that scenario is: What happens?

It can still be useful to see what others have proposed, as they will have ideas you missed, and sometimes those will be good ideas. Other times, it is important to anticipate their objections, even if they are not good.

If you are interested only in the better questions of ‘what happens?’ rather than in classifying whether or how outcomes are catastrophic, you can skip the first two sections and start at #3.

If there are cruxes or other good questions that you have observed, or especially ones that you hold yourself, that you do not see on this list, you are encouraged to share them in the comments, with or without saying what your answers are.

The list is long because people have very different intuitions, ideas, models and claims about the future, for a variety of reasons, and focus in different places. I apologize that I have had neither the time to make it longer nor the time to make it shorter.

Thus, it is probably not your best strategy to read straight through the list; instead, focus on whichever sections are relevant and interesting to you.

Crux List

  1. What worlds count as catastrophic versus non-catastrophic?
    1. What would count as a non-catastrophic outcome? What is valuable? What do we care about?
    2. If humanity does not seek the stars, is that necessarily catastrophic?
    3. If humanity has no meaningful control over the larger universe? (see #3)
    4. If humans have no meaningful control over human events?
    5. If humans have no meaningful control over their own fates?
    6. If a permanent dictatorship or oligarchy is created, a permanent singleton?
    7. If human experiences become simulated? By force, or voluntarily? If we were systematically misled about the nature of our reality and what was happening?
    8. If human experiences are isolated from each other?
    9. If human variety is limited in various ways?
    10. If values change dramatically, or are fixed in place? Whose values, exactly?
    11. If humans wirehead? If they don’t?
    12. If humans collectively want things that you think are bad things to want, and they get them? What if they would endorse (or not endorse) those choices on reflection or long reflection or with different circumstances?
    13. If human extinction was voluntary, or slow, and pleasant in getting there?
    14. If we kept highly intelligent AGIs enslaved? With or without them being sentient or conscious or having the ability to experience anything?
    15. If humans were all or almost entirely at subsistence level due to competitive pressures? If their lives were optimized to higher-than-historically-possible degrees around production?
    16. Can the value needed to avoid catastrophe come from AGIs rather than humans? If so, what would that require? What would have to happen to the humans?
    17. And so on. What counts as catastrophic is not a theoretical or minor concern. This Twitter thread revealed strong disagreement on most of the 20 concrete scenarios presented. You’d pay to know what you really think, and people don’t agree much, either instinctively or on reflection.
    18. The more correct framework would be to assign relative numerical values to such future scenarios, with or without risk aversion factored in, rather than making a Boolean assessment, which complicates things further. (For a minimal illustration of this framing, see the short sketch at the end of this list.)
  2. What would or would not count as a catastrophe caused by losing control over AGIs?
    1. If AGIs execute commands humans give them, intended to cause catastrophe?
    2. If AGIs execute commands humans give them, that imply catastrophe? Does it matter if the human is aware of this at the time, or if they do or don’t care?
    3. If the interaction between AGIs under the control of different humans causes a catastrophe, such as a catastrophic war or series of viruses?
    4. If AGIs are used as commitment mechanisms or to make existential threats, and the result is catastrophic?
    5. If catastrophic conflict arises to prevent loss of control of AGIs, or to prevent the spread of AGIs in order to prevent the loss of control of AGIs?
    6. If AGIs compete for resources or to use resources efficiently or otherwise, in automated or hard-for-humans-to-comprehend fashion, in ways that contain externalities that collectively result in catastrophe, despite our ability to at any time control any individual agent (see Critch 2021)?
    7. If humans despair over their loss of control and the resulting lack of meaning, or no longer want to have children, and choose to go extinct over time?
    8. If we gave up the use of AGIs entirely as the only way to not lose control? Does this depend on whether the enforcement mechanisms involve a catastrophic dystopia?
    9. If we gave up the use of all AGIs except one that was under tight control of one individual or group, as the only way to not lose control? Does this depend on whether the enforcement mechanisms involve a catastrophic dystopia?
    10. Many additional examples are possible.
  3. If AGIs control the future, what would it take to make this non-catastrophic?
    1. Is this possible even in theory? (#1.3) If so, what value must be preserved, and how fragile is that value?
    2. If humans were preserved at low cost but gave up most of the cosmic endowment, is that inherently catastrophic? If not, is such an outcome feasible?
      1. How cheap would the non-catastrophic level of such preservation be for a singleton? How much and what type of alignment would be required for it to allocate those resources? How likely is that?
      2. How cheap would this be under AGI competition, including in terms of the incentives and alignment this would require of AGIs generally? How much of that must be paid relatively early, and how much must survive long periods of change, or other similar considerations?
    3. Can the value needed to avoid catastrophe come from AGIs rather than humans? If so, what would that require? What would have to happen to the humans? (#1.16)
    4. Would AGIs be able to coordinate with each other in ways superior to humans?
      1. Through seeing each other’s source code?
      2. Creation of new AGIs with known source code and goals endowed with resources as a commitment mechanism?
      3. Better use of decision theory?
      4. Better signaling or commitment mechanisms?
      5. Pure speed or intelligence or knowledge?
      6. Something else, perhaps something humans can’t understand?
      7. Would humans have a way to ‘come along for the ride’ somehow?
    5. Could humans survive indefinitely in such a world through benefiting from AGI’s use of property rights, rule of law or other coordination mechanisms?
      1. How likely is it such mechanisms would hold?
      2. If they did hold, how likely is it they would hold for humans, rather than the AGIs cutting us out?
      3. Is human precedent for this promising?
        1. How well have such mechanisms held up in the past?
        2. How well do such mechanisms function today? To what extent does modern society have real private property and rule of law, rather than rule of man and frequent confiscation, taxation or theft?
        3. How have such principles gone for otherwise disempowered populations?
    6. Could humans survive indefinitely via some comparative advantage over AGIs, despite loss of overall control, on their own or in symbiosis?
      1. A set of atoms the AGIs would not prefer to use for something else? What would that advantage be?
      2. Would AGIs potentially ‘find humans interesting’ and thus keep us around, as Elon Musk suggests?
    7. To what extent do competitive pressures between AGIs and AGI-run organizations favor lower degrees of alignment with human survival, or less concern for ethics, morality or externalities, or otherwise lead to bad outcomes, versus tending to lead naturally to good outcomes?
    8. How much of ‘human niceness’ was due to the particular physical characteristics of our brains, the rules of and affordances available to evolution, or the dynamics of the ancestral environment? How much would such futures be able to duplicate the required elements?
      1. How much of this niceness is because we interpret whatever current norms are as nice, whereas we view other norms as not nice?
      2. How much of this niceness would survive competitive and evolutionary pressures in humans over time, even if AGIs are not involved?
    9. Could various forms of ‘niceness’ or alignment be a stable equilibrium for social dynamical or signaling reasons among AGIs in some way?
      1. Could they be a stable equilibrium even if they would not ‘naturally’ occur? Could we get to this equilibrium? How much change would this survive?
    10. What degree of alignment would be required for AGIs in competition with each other to choose to preserve humanity? Could that degree of alignment survive competitive pressures as AGIs change and compete over time?
    11. How do these dynamics interact with the dynamics and needs of becoming interstellar? Would the AGIs ‘be grabby’? If not, would that be catastrophic?
    12. Could AGIs choose to preserve humans for decision theoretic reasons, such as this having value if they encounter grabby aliens, or something weirder like acausal trade? If so could this survive competitive pressures over time?
  4. Is it possible that most meaningfully capable AGIs could be outside of human control, yet humans continue to control the future?
    1. What level of alignment of AGIs would be required for this to be true for a meaningful amount of human time? For this to be stable indefinitely?
      1. Would such AGIs even be meaningfully out of our control? Does this make any sense as a scenario?
    2. Could this happen through our ownership and control of existing resources, combined with ongoing rule of law and private property, with the AGIs unable to coordinate to end this? What mechanism would prevent the steady transfer of increasing fractions of resources to the AGIs? If none, what would cause this mechanism to survive that change?
      1. See #3.4: Will AGIs have coordination mechanisms superior to those available to humans today? Would humans be able to participate?
    3. Perhaps humans retain comparative advantages, either when providing services to other humans or in general, that allow us to keep our resource advantage over time?
      1. If it was only in providing to humans, would that work as a mechanism, given the presumed trade imbalance?
      2. If it is in providing to AGIs as well, what is that edge that stays preserved?
    4. Could perfect or almost perfect competition between AGIs drive the profitability of all AGI tasks to zero, causing them to fail to accumulate resources?
    5. Could we maintain our share of resources through onerous taxation or regulatory capture?
      1. If so, could we maintain this indefinitely?
    6. Perhaps there are insufficiently many AGIs and AGI copies, even with their many advantages?
      1. If so, and this is meaningful, why didn’t we or they make more of them? Would the AGIs intentionally not create additional AGIs to avoid the competition and somehow therefore be content with us in control?
    7. Perhaps the AGIs lack the power to take control, despite their capabilities and intelligence, and ability to persuade, manipulate or buy human cooperation, due to their lack of physical presence?
    8. Would humans choose to turn over control increasingly to the AGIs in such a scenario? What might convince them not to do so and allow them to coordinate towards this end, that would have also gotten us this far?
  5. How difficult would uncontrolled AGIs be to shut down? Would we do it?
    1. At what point would humanity become collectively aware of the existence of an uncontrolled AGI?
      1. Who is ‘we’? How widely would the news be spread, or hidden?
      2. How confident would we be what was going on?
      3. What would we know about it?
      4. What would it take to track it down?
    2. To what extent would we assert or accept that such AGIs have rights?
      1. How much will this debate be influenced by AGIs?
    3. Would humanity choose to even attempt to shut down uncontrolled AGIs?
      1. Would major governments support shutdown efforts, and to what extent and how reliably?
      2. Would it depend on the AGI or AGIs generally engaging in criminal acts?
    4. How much international coordination would be feasible here?
    5. To what extent will uncontrolled AGIs depend on a small number of cloud services or something else that can be efficiently shut off?
    6. To what extent will uncontrolled AGIs be tied to particular physical locations or physical infrastructure?
      1. If so, will we be able to identify it? To reach it?
      2. Would some nations or persons or organizations or other AGIs offer safe harbor or otherwise attempt to assist, of their own free will or otherwise, and if so what would be done about this?
    7. How much economic damage would we collectively be willing to accept in order to shut down an uncontrolled AGI, even if we do know how?
      1. If it came to it, for how long would we be willing to shut down the internet?
        1. Could we do that even if we wanted to?
      2. What would happen if the AGI had the power to inflict major economic damage, or if its absence would inherently cause such damage? Or if it was generating large amounts of value and wealth?
    8. How much will we be willing to override rules, laws and norms to do this? Will we be willing and able to commandeer, order and shut down? Across borders?
    9. To what extent would uncontrolled AGIs be able to copy themselves given money or compute? Would they always have access to their own weights? How well would an AGI coordinate with copies or instantiations of itself?
    10. What resources could such an uncontrolled AGI gather, depending on the situation? How profitable would they be? Also see the section on such questions.
    11. To what extent will uncontrolled AGIs have competitive advantages over controlled AGIs? Or would the strategy-stealing assumption hold? Again, see the section on such questions.
    12. To what extent would uncontrolled AGIs be able to take effective control of people? Once that happened, to what extent would we be able to prove they were even uncontrolled?
    13. Could we limit its ability to acquire resources rather than needing to shut it down, especially if we had the aid of other similar AGIs to take away or compete for opportunities generally?
    14. Could we give controlled AGIs sufficient competitive advantages through rules and regulations to allow them to outcompete uncontrolled ones despite their handicaps?
      1. How big are those handicaps?
      2. How much would such rules put those adopting them at a competitive disadvantage, and thus require global coordination? Would such coordination be possible?
      3. If we could, would we?
    15. What other affordances would such AGIs have to prevent their own shutdown and how effective would they be?
    16. If we have AGIs assisting us, how much does that change the equation? Which side of this would hold the advantage?
  6. What pivotal acts are available to prevent the construction of additional AGIs?
    1. What level of AGI is required for the first available pivotal act?
      1. Is there any way to do this without dying if your AGI turns out to not be sufficiently aligned? What is the minimum value of ‘sufficiently’ here?
      2. Will it be harder or easier, and to what degree, to get this capability via a safe design and system versus an unsafe design and system?
    2. What are the risks and costs of the practical pivotal acts? Would they devastate the economy? Require a surveillance state or AGI broadly in control of things?
      1. How much better can we do than ‘melt all the GPUs?’
      2. In practice does this end up as ‘create a singleton?’
      3. To what extent might people know the answers and not be talking?
    3. How much of a lead over any uncooperative competitors is required for a pivotal act? To what extent should we expect a race to perform one first?
    4. How much promise is there for coordination to relatively safely perform a pivotal act to keep the number of AGIs at one? At some small number greater than one?
    5. Given the costs and difficulties and risks of a pivotal act, and the arguments against performing one, how likely is it that, given the opportunity, one will be attempted?
      1. How correlated is this to the actual degree of risk?
    6. How will most people involved view the possibility of such an act? Including its ethical implications? Who will try to cause one, versus try to prevent one?
    7. Is it possible that, after one or more AGIs have been created, there do not then arise additional groups with the motive, opportunity and means to create additional meaningfully capable AGIs, without any need for an explicit pivotal act? Perhaps the cost of doing so remains permanently high and there has been regulatory capture, or something?
  7. Could AI systems increasingly take control even without being AGIs?
    1. Is this happening already? In a way that is likely to continue or accelerate?
    2. How much will humans actually sacrifice to stay in control, individually or collectively? What happens under competitive pressure to do so, or the promise of mundane utility?
  8. What comparative advantages if any might humans retain over AGIs?
    1. If AGIs plateau at something comparable to human level intelligence or capabilities, when you disregard questions like speed or copies?
    2. If AGIs do not plateau and become much more generally intelligent and capable than humans? Is there anything where we can keep an edge?
  9. Is intelligence a big deal?
    1. Does intelligence provide much of a competitive advantage?
      1. When we talk about intelligence, do we think of this as only akin to some sort of ‘book smarts’ or does it include a variety of other abilities?
      2. What affordances open up as intelligence rises? What gains in effectiveness or efficiency? What competitions does this help you win?
      3. Do smarter entities tend to be able to outcompete, outfight, outmaneuver, manipulate, control or be freed from the control of, less intelligent entities?
      4. What would one be able to do if one was smarter than the smartest human? If there was a group, all smarter than the smartest human? How might this interact with other advantages or capabilities, including those enabled by this intelligence?
      5. What would an entity, or a group of entities, be able to do if they were more intelligent than humans, in the way that humans are more intelligent than other animals?
        1. To what extent should we expect them to do things we are not even considering, and can’t imagine or perhaps even understand? To make new unimagined moves? To discover new physical interactions or laws?
        2. Is this perhaps not really possible in practice?
        3. Is there some sort of plateau around human-level, where more intelligence than that doesn’t do much more?
      6. Is it true that humans, beyond some modest amount of intelligence, fail to make more money or have superior life outcomes, as some claim, citing disputed studies? Do they fail to have more influence on the future, on average?
        1. If so, how in the world would that actually work?
          1. Would this involve dynamics we could expect to hold out of distribution, for much larger intelligence gaps?
          2. Would this be expected to hold if there were a large number of more intelligent entities, or they were not limited to human physical capabilities?
            1. Does or would this hold for those below current average human intelligence? Below future average intelligence, if the average were to increase?
            2. Did it hold in the past, if so under what conditions?
          3. In what sense are these people more intelligent, then, exactly?
          4. Does this involve humans beyond some level of intelligence facing social punishments or penalties? If so would this transfer and retain its effectiveness?
          5. Does this involve humans beyond some level of intelligence tending to more likely have other things wrong with them or be mismatched with the affordances offered humans? If so would this transfer and retain its effectiveness?
    2. Is there an important distinction between the ‘current practical’ intelligence of an individual and their ‘raw’ or ‘potential’ intelligence? Should they be treated differently?
      1. Is there a similar important wisdom vs. intelligence distinction?
      2. Are there important things that high raw-intelligence entities can do, that low raw-intelligence entities simply can never do even collectively?
    3. Is a corporation or government or nation a superintelligence, in the same meaningful sense? Does it have similar properties to an imagined AGI?
      1. Are corporations actually kind of dumb? Can they change when they need to? Are they mostly adaptation executers rather than fitness maximizers?
      2. To what extent is a corporation, government or nation meaningfully an agent, versus better thought of as ‘just a bunch of people’?
      3. To what extent can a group of less intelligent entities match the potential capabilities of one or a few more intelligent entities? Vice versa?
      4. To what extent will AGIs have the practical advantages of human groups? Will they benefit more or less from this? What role does coordination play here? How should we think about AGI copies here?
    4. Do gaps in intelligence between individuals, groups or nations provide good examples of what we can expect in the future, in terms of predicting potential outcomes?
      1. What about between different species or subspecies?
      2. Does the answer change as absolute intelligence levels and physical world manipulation affordances rise, or as dependence on nature declines?
    5. Is it plausible for broadly much more intelligent things to remain indefinitely under the control of broadly much less intelligent things?
      1. What would be required for this to happen?
      2. What does history teach us about the likelihood of this?
  10. What is physically possible to achieve with greater intelligence, with or without iteration and experiment and time?
    1. Nanotech or synthetic biotechnology? Is it physically possible?
      1. What is the difficulty level?
      2. Can this be done using existing infrastructure given sufficient knowledge?
      3. Could that knowledge be gained through intelligence and calculation alone, without experimentation? If not, how much iteration would be required, and of what kind?
    2. Manipulation or mind control of humans? What can be done?
      1. What kinds of bugs, vulnerabilities, overrides, glitches are likely to exist in the human brain? What affordances do they likely offer?
      2. How much could the AGI get from supercharged versions of ordinary persuasion, manipulation and recruitment techniques?
      3. Without speculating on details, what other ways might an AGI be able to manipulate or mind control humans? What affordances would it have and what would they create?
    3. Affordances to break out of control systems
      1. How likely is it that an AGI could find ways to impact the outside world using physical laws and interactions we do not understand or anticipate?
      2. What other unknown unknowns should we worry about, and how worried should we be? How confident are we that we know what is possible?
    4. How hard will robotics be?
      1. What exactly is necessary to ‘solve robotics’ and what problems remain to be solved, and what are our prospects for solving them?
      2. Is robotics a problem AGI will still be unable to solve?
      3. Is robotics a problem humans won’t be able to solve given sufficient time and resources?
      4. If either or both of the above, why would it be that hard?
    5. What else?
      1. Including what things we aren’t even thinking about or can’t imagine?
  11. Will we be able to achieve human cognitive enhancement in time to matter?
    1. If we did develop such enhancement, what effects would we likely get and how would they change our prospects?
    2. Are we hoping for or seeking increased intelligence, increased rationality or ability to coordinate, something else? A combination?
    3. How big an advantage, in various ways, will AGI have over humans? How much will those advantages matter?
  12. To what extent will it be a competitive advantage to have less control over AGIs?
    1. Will humans be able to understand what the AGIs are up to?
      1. If AGIs need to ensure what they do is understandable to humans, what affordances or efficiencies do they lose? How much of a disadvantage would this be?
        2. How much do similar dynamics hurt us today, in corporations or governments, in principal-agent problems generally, even without intelligence or capability gaps or speed differences or similar?
    2. Will humans be able to evaluate AGI actions? Evaluate even the outcomes of those actions?
      1. If AGIs need to worry about or optimize for such evaluations, how much will it degrade the value they can provide?
      2. Will such a process inherently select for deceptiveness and manipulation?
      3. How much can various AGIs evaluate each other? If we go down this road, does it actually allow us to keep meaningful control?
    3. How big are Hayekian considerations here? How relevant are the usual reasons why slaves, or overly controlled or micromanaged people, are less productive? Do those considerations get better or worse with AGIs?
    4. Will competition from uncontrolled or less controlled AGIs drive anything less efficient out of business? Would those ‘uncontrolled’ AGIs have any affordances left or not, for similar reasons? What does this do to alignment?
    5. How much would having humans ‘in the loop’ slow down AGIs and be a competitive disadvantage?
      1. What exactly will be the loop, and when would we need to be in it?
      2. Do humans need to be in the loop to maintain control?
      3. Can AGIs be used to keep control over other AGIs in a way that slows down or limits things less?
    6. Even if a human is in the loop, how often will they choose to pay close attention? To expend resources to supervise carefully? To optimize long term control? To what extent is that itself a critical cost?
  13. To what extent would an AGI on the internet be able to seek resources and power?
    1. In a world similar to today’s, without other AGIs as competition?
      1. How capable and intelligent does it start out, and how capable and intelligent can it become within reasonable resource limits?
      2. While acting fully legally? Through entirely voluntary actions? Without giving away that it exists?
      3. What affordances are available for exploitation? How far do they scale?
        1. Providing services to humans.
        2. Building software, creating websites or content.
        3. Trading, gambling and games of skill.
        4. Starting and running businesses, hiring people.
        5. Asking for help, tricking, scamming, hacking, stealing.
        6. Crypto, use your imagination.
        7. Blackmail, threats, selling information.
        8. Taking over businesses, legal or criminal.
        9. Taking over individuals, other organizations, governments.
        10. Seeking investment, borrowing, cooperation based on expectations of future success.
        11. Inventing new things.
        12. What else?
        13. Possible things we are not even thinking about. See physical affordances.
      4. With or without the ability to make or instantiate copies?
      5. What stops this process, if anything?
        1. When will events be noticed, and by who and as what?
        2. Who or what would try to stop this, and how? Would that work?
        3. Would we suddenly act as a united front, do the right thing?
          1. What would that even accomplish, again how would it work?
          2. Why would you think this would happen?
        4. It might need to keep us around for some reason?
        5. Various reasons why somehow things will work out, somehow?
    2. In a world with many other similarly capable AGIs, that are under human control?
    3. In a world with many other uncontrolled similarly capable AGIs?
      1. All of which are aligned more robustly than this one?
      2. That are similar?
    4. What would likely be required for recursive self-improvement to occur?
  14. At what stage of development does AI become an AGI with what probability? If it does become an AGI, at what point does it have dangerous affordances? At what point in training, testing and deployment are you at risk of becoming doomed later down the line, whether or not you yet have an AGI? (Note that if AGI ends up being trained by a substantially different process, these questions might become ill-formed).
    1. During the initial training run?
      1. Because the AGI has affordances that allow it or its subprocesses to act on the world before the training run is complete?
        1. That could reasonably surprise you.
        2. That involve someone (let’s say) hooking it up to the internet.
      2. Because the AGI could sufficiently learn some combination of attributes like situational awareness, deception or manipulation, such that later attempts to align it will fail while appearing to succeed?
      3. Because of some other reason, or locking in of some other behavior?
      4. Are we going to train something highly intelligent and capable, then attempt to add alignment via fine tuning, or are we going to attempt to align continuously?
      5. Are we going to be supervising and testing AGIs for safety during training runs? If so, how real and robust will such tests be?
      6. How aggressively will we be filtering our training data to avoid creating dangerous subprocesses during training?
    2. During fine tuning or reinforcement learning?
      1. Because the AGI is given additional affordances to interact with humans, other AGIs that have their own affordances, or the internet? (same as 1.1 above)
      2. Because the AGI could sufficiently learn some combination of attributes like situational awareness, deception or manipulation, such that later attempts to align it will fail while appearing to succeed? (same as 1.2 above)
      3. Because we might miss our target and teach or lock in misalignment, where things do not go as we expect out of distribution, or we didn’t think through the consequences, and we won’t be able to turn back?
      4. What will be our plan for aligning such a system? See alignment.
      5. How carefully will we monitor such systems and consider the exact consequences of the procedures we are using? How automated will we allow the process to be? How loopy? Again, see alignment.
      6. And so on.
  15. What kind of architecture will the first AGIs have?
    1. Will they likely be of similar architecture to LLMs?
      1. With additional scaffolding? Of what types?
    2. Could it be GOFAI (good old-fashioned AI)?
    3. Will we potentially find a way to upload humans?
      1. As the first AGIs?
      2. As something AGIs are tasked with doing? Would those uploads then be competitive or meaningful?
    4. What other forms might the first AGIs take?
  16. How fast or slow a takeoff should we expect? What does that imply?
    1. How does this interact with the dynamics of potential races and choices to develop AGI?
    2. To what extent might this imply very strong economic or strategic pressures to not halt development of AI despite the dangers of creating an AGI we do not yet know how to control?
    3. How does this impact the difficulty of and resources and time for alignment?
    4. How does this impact our collective approaches and decision making?
    5. What else is true in worlds with relatively slow versus hard takeoff? What else does this characteristic do?
    6. To what extent is this the right question, versus reflecting the results of a different, better question, such as the circumstances in which AGI is created?
    7. What else?
  17. Alignment questions and difficulties. There are so many alignment questions, anything listed here would only be a sampling, even if one stays within what we have of a paradigm – this is an especially non-exhaustive list.
    1. What do we mean by alignment?
      1. Is the concept even coherent?
      2. What type of behaviors count as this?
      3. How robust must those behaviors be?
      4. When we say aligned, aligned to whom or to what?
      5. What does that type of alignment imply about the future course of events?
    2. How different is aligning an AGI from aligning a system not as smart as you?
      1. Does [alignment strategy] predictably and inevitably fail when used on a system more intelligent than a human, or otherwise sufficiently capable?
      2. What problems only appear in AGI systems at exactly the point when those problems are capable of killing you?
        1. Various forms of deception, manipulation and situational awareness?
        2. Takeover attempts of various types?
      3. To what extent do you get meaningful experience and opportunity from work on less intelligent systems?
      4. Must alignment of such an AGI system be solved on the first try?
        1. If we fail at this, are we dead? See various other sections.
        2. How much harder is it to do things on the first try? To do this on the first try?
    3. Does alignment require security mindset?
      1. To what extent are you effectively facing an intelligent opponent or other optimization process, inside or outside of the AGI itself, such that you will face your least convenient world and set of inputs and responses?
      2. If anything can go wrong, will it? How bad would it be if it did?
      3. How much margin for error do you have when dealing with things smarter than you? What affordances can you not afford to allow?
      4. Do we need to be more, less or about as secure in our alignment strategy as we would need to be in a secure operating system?
      5. How small is the target we are trying to hit? Do plans that are not precise, that have ways they could fail, have any chance of success?
    4. How big a problem is each of the additional elements in this long list of reasons why your solution, or your attempt to find a solution, likely fails and you die anyway, plus all the things the list does not add that aren’t mentioned elsewhere? In particular:
      1. Orthogonality.
      2. Corrigibility, which is anti-natural.
      3. Instrumental convergence.
      4. Needing to solve alignment within a time limit, because of different entities racing to build the first AGI.
      5. Inability of a weak system to prevent construction of a stronger system.
      6. All the convenient optimization methods solving problems we would rather that they not solve.
      7. Need to generalize far outside of distribution.
      8. Dramatic shift in capabilities as intelligence rises.
      9. Everything changing everywhere, all at once, breaking your assumptions.
      10. Inner optimization for goals distinct from the outer optimization goal.
      11. Lack of knowledge of how to get inner properties into a system.
      12. Lack of any known ability within current paradigms to optimize anything within a cognitive system to point to particular things.
      13. Mesa-optimization.
      14. Lack of any objective measure of whether a system or output is aligned.
      15. Human raters displaying systematic bias.
      16. Capabilities generalize further than alignment once capabilities generalize far.
      17. Alignment lacks a simple core.
      18. We have no idea how our current AIs work.
      19. Optimizing against unaligned thoughts optimizes against interpretability.
      20. We can’t predict something smarter than ourselves, it does not think like you do, and you can’t evaluate the consequences of its proposed actions.
      21. Sufficiently capable agents can deceive you in ways immune to behavioral inspection or other detection.
      22. Any sufficiently capable system trained on human data will have inner intelligences figuring out the humans.
      23. Multiple superintelligent agents might function as a single agent.
      24. Sufficiently powerful AGIs need only very narrow affordances to escape from attempts to contain them, if we even bother trying.
      25. We don’t have veterans who have spent their lives working on AI safety.
      26. Do we have people capable of working on these problems and making real progress?
        1. If they are supported and funded in good ways?
        2. If we use big funding to bring in smart outsiders?
        3. Can we tell the difference between good and bad work?
        4. Do we know a path to making someone a good alignment researcher?
    5. Might there be a reason alignment is actually natural or easy, or at least tractable?
      1. Perhaps there is an easy thing that gives us what we want?
        1. Example: Do we get ‘alignment by default’?
          1. Does there exist some simple embedding of human values? Are human values a natural abstraction the way a tree is a natural abstraction?
            1. If so, is it something that can be naturally learned by training for other targets especially predictive power?
            2. If so, what chance is there that we could locate a training target where a system that has ‘naturally’ learned such an embedding would use a model for human values as its proxy for human values rather than training on data?
        2. An infinite list of additional proposals, will one of them work?
          1. Reasonable proposals worth considering, usually that contain a bunch of details that would each generate a bunch of additional cruxes if understood and properly expanded, often in combination with cruxes listed elsewhere on the list.
            1. [Countably infinite examples]
          2. A very long list of the ‘can’t we just…’ section of what one might call (or mostly call) ‘bad alignment takes.’
            1. [Uncountably infinite examples]
        3. If there does exist an easy solution, what determines whether those that matter identify and use it sufficiently to make it work in practice and how likely would that be?
    6. What new difficulties and dangers are introduced when the thing you are attempting to create and align is smarter than you are?
      1. And smarter than your other existing systems?
      2. If we can align one AGI, how promising is this for then using it to figure out how to align AGIs in general, or keep AGIs aligned robustly as they scale or gain in capacity and change, over the long term?
      3. Can we use a form of iteration or amplification, where we use aligned-enough AI or AGI systems to align smarter or more capable other AI or AGI systems, or to improve their own alignment?
    7. Does meaningful competition among AGIs increase or decrease the required degree of alignment for human survival or the avoidance of catastrophe?
      1. By decreasing the feasibility of spending resources on human survival or human value, including passing up such resources or the opportunity to expropriate them, allowing atoms to not be used for something else or avoiding disrupting key supporting elements?
      2. By ensuring more and more rapid change in AGI structures and values, and providing less ability of AGIs to preserve their characteristics including alignment that we need to preserve? More pressure generally?
      3. By creating competitive pressure under which AGIs that spend resources and capability on alignment or other non-competitive considerations lose in competition and don’t survive? That consideration could be decisive in competitions between AGIs, given the margins involved and a combination of physical limits, similar origins and development paths, the ability to mimic and copy, and the pressures of competition making them otherwise similar.
      4. By something else, or some combination of the above?
      5. Might it instead decrease difficulty in other ways? Could AGIs usefully defend against other AGIs or otherwise be ‘played off’ against each other, or could we use various coordination mechanisms or norms or signaling or decision theoretic considerations, as mentioned elsewhere, among AGIs to retain a share of the pie?
    8. Is alignment work that targets current systems, especially work that targets the practical outputs of such systems, doing the central work we need to move forward on the path to align a future AGI?
      1. What happens when techniques optimized for, or especially fine-tuned on, current systems are applied to future, more capable and more intelligent systems? When we most need them to work, would they likely or inevitably break down?
      2. Does the path largely or mostly require deeper work than a publishing cycle or periodic demonstration of success would allow?
      3. Does the path largely or mostly require working on the types of problems where an effort is likely to stall indefinitely, fail or be impossible? As a civilization, how capable are we of making such efforts?
  18. Will AGIs necessarily be agents in various senses?
    1. Are being an agent, having goals, having preferences, planning and charting paths through causal space towards preferred configurations of atoms and other similar features necessary aspects of intelligence?
      1. Are they necessary specifically within architectures similar to LLMs?
      2. If a mind must reason about such agents, goals, preferences, plans and paths in order to predict and understand the world and its outputs, to what extent does this necessarily give those same capabilities to the model under the right conditions?
        1. Can these conditions be guarded against? What kinds of restrictions on access and usage would be necessary?
        2. How difficult would it be, and how much would we be giving up, if we built a model unable to reason about such things, specialized instead in particular areas? Or would we end up doing general LLM-style intelligence construction and then attempting to restrict the model down to a more specialized role?
      3. How much mundane utility and capability is sacrificed by ensuring that one’s AGI is not functionally an agent? How much of a strategic disadvantage would result from choosing this path?
      4. How easy would it be to turn such an AGI into an agent anyway? Would it be plausible for this to not happen at the first opportunity, given our past experiences?
  19. General questions about human decision making, values, civilizational capacity, cognitive abilities, coordination mechanisms.
  20. What is the proper mode of reasoning for thinking about what might happen in the future?
    1. Bayes Rule, is it true? Can or must one use it in practice?
    2. Is it possible for people to know things? To know things that are not based on social epistemology?
      1. Can one know things about the future?
        1. Under uncertainty, in uncertain scenarios?
        2. Should one adopt a form of radical uncertainty?
    3. Can one know things based on logical chains of reasoning?
    4. Can one know things based on thinking about the world?
    5. Can one know better than social consensus? How much modesty is required? What is required before one can defy this?
    6. Is one required to believe what the evidence says to be true, even if that would not be a useful thing to do?
    7. Should one assume that ‘by default’ everything will be fine, everything will be doomed, or something else? What is the ‘burden of proof’ around claims about potential future events, especially ones with high levels of uncertainty? What should one’s prior be?
      1. Is it incumbent upon those claiming danger (or safety) to provide a particular scenario they are worried (or hopeful) about?
        1. Does this entitle you to assume that if that particular scenario does not occur, things will go the other way?
        2. Does this entitle you to multiply together the probabilities of each of the steps? To doubt any one of the steps?
        3. Can you disregard any steps that involve things that can’t be predicted or their details described? Does it matter if they involve the actions of entities smarter or more knowledgeable and capable than humans?
      2. What, in this context, are the extraordinary claims requiring extraordinary evidence? What are the claims that are ordinary, or natural?
      3. If the future is highly uncertain and unknown, with lots of unknown changes involved, or cannot be tied down into a particular scenario, and so on…
        1. Does this mean we should assume it will all work out? I mean, we’re still here, and there’s no particular established dangerous scenario.
        2. Does this mean we should assume likely disaster? Most potential configurations of atoms don’t involve us existing or hold value, most random changes are for the worse, loss of control leading to unexpected unintentional events tends to go badly, and so on?
        3. What is the role of the prior that the more powerful optimization processes, the stronger intelligences with more capabilities, will tend to control outcomes over time?
    8. General questions about epistemics, whether people can know things, modesty, burdens of proof, assumptions of normality, just world hypotheses and so on.
  21. What are the most useful and relevant intuition pumps and metaphors?
    1. [Could be expanded at a future date, very long potential list here]
  22. General questions about our ability to impact the future.
    1. To what extent is it possible to know what actions will have a positive expected effect on our ability to avoid or probability of avoiding catastrophe?
      1. Can we, indeed, know anything about the future or our impact on it? Or are we doomed to some form of radical uncertainty?
    2. Is the future largely inevitable, because of the nature of the incentive gradients and physical laws involved? Is anything we do only at best postponing the inevitable, or at worst wiping ourselves out?
      1. Do we have any say over future values of future sources of intelligence?
      2. Do we have any say over what types of future intelligences exist?
      3. How much of our decisions now can ‘lock in’ and leave legacies?
  23. How does this interact with concerns of value drift, change and interstellar travel?
    1. Are there any methods of maintaining control or preventing change once there are intelligent entities beyond the solar system?
    2. Once entities engage in interstellar travel, will they inevitably change in their composition, methods, values, techniques and so on?
      1. Or at least, will some of them choose to do so, such that those who do so will have the competitive advantage over those that don’t, forcing others to follow suit?
    3. How much of this relates to AGI versus what would inevitably happen anyway? Is there any world where we can both capture the cosmic endowment and hope to preserve our values?
      1. If not, what to do? What is the non-catastrophic least bad option?
    4. Even without such travel or without AGI, can we hope to meaningfully preserve our values over time without rejecting all change more broadly? If so, wouldn’t that be catastrophic? What are we even hoping for, really?
  24. Is it possible for people to have widespread access to AGIs under their personal control, without having the ability to set that AGI free from their control?
    1. Could this meaningfully prevent loss of human control over some AGIs, even if the humans in control of those AGIs wanted this to happen?
    2. If the human decides simply to do whatever the AGI asks, what can be done about it?
  25. Timelines. People have different timelines, and timelines have implications for the chances of good outcomes by interacting in various ways with the different dynamics here.
  26. Do we have a meaningful agent overhang, or other important overhang in ability to convert a base model into an AGI, and is this likely to continue?
    1. How much room do we have to improve the performance, capabilities or intelligence of existing models like GPT-4 through fine-tuning, prompt engineering, scaffolding, plug-ins and other such efforts, if we never trained a stronger or larger base model?
    2. How much similar room will there be in future models after release, and how much of that will be anticipated in advance when we are doing safety evaluations?
    3. In particular, how much of an ‘agency overhang’ remains? To what extent are current failures to create autonomous AI agents due to lack of algorithmic or other design knowledge that, once discovered, will be in the hands of essentially anyone?
    4. What other similar overhangs exist, where we will inevitably see algorithmic improvements that we could not hope to prevent or contain? To what extent will these lower the bar to converting a system into an AGI?
    5. If we made such improvements, how close are current systems to being able to become AGIs, with what probability?
      1. How likely is it that something worthy of the name GPT-5 would be sufficient to be the basis of an AGI with sufficiently bespoke scaffolding and algorithmic insight around it, and thus would enter us into the [b] scenario? What about higher numbers?
    6. If an autonomous AGI agent does arise from a system that is then transformed via such techniques, what prospect is there for which types of its alignment to meaningfully hold together under such circumstances?
  27. Warning shots.
    1. To what extent should we expect to get various types and degrees of warning shot that our AI systems are causing damage or risk causing damage due to alignment failures, or that show clear failures of alignment that would be deadly in more capable systems?
    2. What would constitute such a warning shot? Of those things, which ones are likely?
    3. To what extent are people working to prevent warning shots from happening, versus intentionally not doing so or even causing them?
    4. If we did get warning shots, what would then likely happen?
      1. Would we see a regulatory response, if so what type? See regulation.
      2. Would we see customer response to favor those with robust safety practices? If so, would this favor safety from catastrophe, or only reward a focus on smaller risks?
      3. What would it take to meaningfully reduce commercial demand for or investment in AI in this way, such that it would matter?
      4. Would major corporations or AI labs adjust their behaviors? If so, how? Would such adjustments meaningfully reduce our chances of doom?
      5. Would the ‘goalposts be moved’ so that everyone could pretend that whatever it was, wasn’t a warning shot?
        1. Has this happened before? How many times?
    5. Would a lack of warning shots be strong evidence that there is little to be warned about?
  28. What should we expect in terms of national regulation?
    1. To what extent will the public oppose AI capabilities development in particular, or generative AI in general, or otherwise make this a major issue of concern?
      1. Will this be a narrow reaction to particular issues like deepfakes or loss of jobs, or more broad based fears? Will most people see a serious threat from existential risk and demand action? Will they have a decent model of what it would mean to usefully act?
        1. To what extent will this be balanced by support and appreciation of the benefits offered?
      2. How responsive will national governments be to such pressures, especially in the United States?
    2. Will the issue become partisan? Which side will be which? How does it matter which side is which?
      1. If this does happen, what happens next? Does this give one side a decisive electoral advantage? At what point would other things drop away and effectively new coalitions form?
      2. How does this impact potential outcomes and quality of outcomes?
    3. To what extent will nations push things forward instead out of perceived national interest?
      1. How effective will fear of China be?
    4. If national governments do regulate, what targets will they choose?
      1. How much regulatory capture should we expect?
      2. How much should we expect them to choose interventions that destroy mundane utility to look like they’re doing something, without slowing down capabilities development?
      3. What affordances are on the table in practice at the national level? The international level?
  29. What should we expect in terms of global coordination and regulation?
    1. Will nations likely attempt to coordinate to attempt to stop AGI, or will they compete against each other to create it (or both)?
      1. What nations might actively and intentionally accelerate AGI development?
    2. How effective would available regulatory rules be in containing AGI if implemented, especially limits on concentrations of compute and large training runs?
      1. Are there any alternative regulatory choke points, other than large concentrations of compute, that might allow us to prevent or meaningfully slow the development of AGI? If so, what are they?
    3. If we do collectively seek to slow or prevent AGI development, how likely would we converge on regulatory principles that meaningfully do this, or can sustainably do this over time, versus rules that instead mostly limit mundane utility?
      1. If we choose a potentially effective principle, what is the chance that we choose an effective implementation of those principles?
    4. What is the practical difficulty and cost of restricting compute used to train frontier models?
      1. What degree of surveillance would be required in general?
      2. What degree of international cooperation would be required?
      3. What enforcement mechanisms would be needed?
      4. What would we be willing to sacrifice or risk in the name of enforcement of such a regime? How much of a priority will we make it?
      5. What would be required for various factions or nations to support this?
      6. What would be required to gain the cooperation of TSMC, NVIDIA or other important corporations?
      7. If we did attempt this, would we make a real attempt that did its best to be effective, or a nominal attempt that was not so difficult to evade over time?
    5. Will regulation allow, encourage, discourage or disallow open source versus closed source software?
      1. How much affordance do we have to stop or impact such systems?
  30. Lab, government and other actions in the endgame, when AGI is close: either a pending transition in which trained base models become capable of being turned into AGIs, a world where this is already the case, or AGI being close by some other route. What dynamics and outcomes are likely to be in play? What will people choose to do? There is lots of disagreement here; even more so than elsewhere in this list, this section is not meant to be complete or SOTA, only a sketch of some potential sub-questions.
    1. How will different groups react if and when new systems become plausibly capable of becoming AGIs, either out of the box or with the correct scaffolding, whether or not such scaffolding yet exists?
    2. Is there a chance this has already happened?
    3. Will those involved notice? Are they likely to dismiss or not notice this development, or take it seriously? Announce it, or hide it?
    4. Will there be a general attitude that not moving ahead simply means someone else will instead soon thereafter? Will they be right about that?
    5. How likely is it that such a prototype (or the core model of something released in a relatively safe fashion) would be stolen, and if so, by whom and with what intent?
    6. If this were taken seriously, what would they do?
    7. How much would others attempt to shut down such an attempt? To race against it to get there first? To join the attempt, or attempt to shut down potential rivals, to help ensure a good outcome? What factors will determine this?
    8. And so on. What else? This type of thing has been gamed out endlessly; another amateur attempt to spell everything out is not entirely in scope and likely not too helpful.
  31. What else?
    1. What are the right general intuitions and heuristics?
    2. Can people know things at all?
      1. Without having the proper credentials?
      2. Without having the proper experience?
    3. Can or should people think for themselves?
    4. How much should we fall back on modesty or social cognition?
    5. Are we allowed to disagree with experts?
    6. Should we seek to form true beliefs about such questions?
    7. Are all who warn of doom always automatically to be assumed to be wrong?
    8. Is everyone who ever claims anything always selling something?
    9. What happens?

Thanks for engaging. I hope this was helpful.

Comments

The link to the Substack version says "private."

Zvi:

That error was fixed, but let's say 'please help fix the top of the post, for reasons that should be obvious, while we fix that other bug we discussed.'

I think I fixed the top-of-post again, but I thought I fixed it yesterday and I'm confused about what happened. Whatever's going on here is much weirder than usual.

The target of the second hyperlink appears to contain some HTML, which breaks the link and might be the source of some other problems:

If everything is a crux, is anything a crux?

Zvi:

No, not in general, which is one of the main points - I wrote this partly to illustrate that there was no single thing that one could address to handle that large a portion of debates, objections or questions.

Nice list, though there's a prerequisite crux before these.

i.e., what is 'intelligence'?

More specifically, I think the crux is whether we mean direct or amortized optimization when talking about intelligence (or selection vs control if you prefer that framing).

Huh, selection vs control is an interesting way to look at it, though I'm not sure if it's a dichotomy or more of a multi-dimensional spectrum.

Gave an upvote though for raising the point.

Zvi:

I do think this is worth pondering in some form, and asking whether the question is implied or should be a subquestion somewhere...

What is an AGI? I have seen a lot of "no true Scotsman" around this one.

This seems like a non sequitur; there might or might not even be such a thing as 'AGI' depending on how one understands intelligence, hence why it is a prerequisite crux.

Can you clarify what you're trying to say?

Yeah, sorry about that. I didn't put much effort into my last comment.

Defining intelligence is tricky, but to paraphrase EY, it's probably wise not to get too specific since we don't fully understand Intelligence yet. In the past, people didn't really know what fire was. Some would just point to it and say, "Hey, it's that shiny thing that burns you." Others would invent complex, intellectual-sounding theories about phlogiston, which were entirely off base. Similarly, I don't think the discussion about AGI and doom scenarios gets much benefit from a super precise definition of intelligence. A broad definition that most people agree on should be enough, like "Intelligence is the capacity to create models of the world and use them to think."

But I do think we should aim for a clearer definition of AGI (yes, I realize 'Intelligence' is part of the acronym). What I mean is, we could have a more vague definition of intelligence, but AGI should be better defined. I've noticed different uses of 'AGI' here on Less Wrong. One definition is a machine that can reason about a wide variety of problems (some of which may be new to it) and learn new things. Under this definition, GPT-4 is pretty much an AGI. Another common definition on this forum is that an AGI is a machine capable of wiping out all humans. I believe we need to separate these two definitions, as that's really where the core of the crux lies.

I think a good definition for AGI is capability for open-ended development, the point where the human side of the research is done, and all it needs to reach superintelligence from that point on is some datacenter maintenance and time, so that eventually it can get arbitrarily capable in any domain it cares for, on its own. This is a threshold relevant for policy and timelines. GPT-4 is below that level (it won't get better without further human research, no matter how much time you give it), and ability to wipe out humans (right away) is unnecessary for reaching this threshold.

I think we also care about how fast it gets arbitrarily capable. Consider a system which finds an approach that can measure approximate actions-in-the-world-Elo (where an entity with an advantage of 200 points on their actions-in-the-world-Elo score will choose a better action 76% of the time), but which uses a "mutate and test" method over an exponentially large space, such that each successive 100-point gain takes 5x as long to find, and which starts out with an actions-in-the-world-Elo 1000 points lower than an average human and a 1-week time-to-next-improvement. That hypothetical system is technically a recursively self-improving intelligence that will eventually reach any point of capability, but it's not really one we need to worry that much about unless it finds techniques to dramatically reduce the search space.
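
To make the arithmetic in this hypothetical concrete, here is a minimal illustrative sketch in Python. The 76% figure follows from the standard Elo expected-score formula; the 1,000-point starting deficit, the 1-week first step, and the 5x slowdown per 100 points are simply the assumptions stated above, and the function name is my own labeling rather than anything standard.

    # Illustrative sketch only; the numbers mirror the hypothetical above.
    def elo_win_probability(advantage):
        # Standard Elo expected score for the side with the rating advantage.
        return 1.0 / (1.0 + 10 ** (-advantage / 400.0))

    print(f"200-point advantage -> {elo_win_probability(200):.0%}")  # ~76%

    # Time to close a 1000-point deficit if the first 100-point gain takes
    # 1 week and each further gain takes 5x as long as the previous one.
    weeks_for_step = 1.0
    total_weeks = 0.0
    for _ in range(10):  # ten 100-point steps = 1000 points
        total_weeks += weeks_for_step
        weeks_for_step *= 5
    print(f"Weeks to reach human parity: {total_weeks:,.0f} (~{total_weeks / 52:,.0f} years)")

On those assumptions, merely reaching parity with an average human takes roughly 2.4 million weeks, on the order of 47,000 years, which is why such a system would not be much of a worry.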

Like I suspect that GPT-4 is not actually very far from the ability to come up with a fine-tuning strategy for any task you care to give it, and to create a simple directory of fine-tuned models, and to create a prompt which describes to it how to use that directory of fine-tuned models. But fine-tuning seems to take an exponential increase in data for each linear increase in performance, so that's still not a terribly threatening "AGI".

Sure, natural selection would also technically be an AGI by my definition as stated, so there should be a subtext of it taking no more than a few years to discover human-without-supercomputers-or-AI theoretical science from the year 3000.

"Defining intelligence is tricky, but to paraphrase EY, it's probably wise not to get too specific since we don't fully understand Intelligence yet."

That's probably true, but that would imply we would understand even less what 'artificial intelligence' or 'artificial general intelligence' are?

Spelling it out like that made me realize how odd talking about AI or AGI is. In no other situation that I've heard of would a large group of folks agree that there's a vague concept with some confusion around it, and then proceed to spend the bulk of their efforts speculating on even vaguer derivatives of that concept.

This is cool. Something I might try later this week as an exercise is going through every question (at least at the top level of nesting, maybe some of the nested questions as well), and give yes / no / it depends answers (or other short phrases, for non Y/N questions), without much justification.

(Some of the cruxes here overlap with ones I identified in my own contest entry. Some, I think, are unlikely to be truly important cruxes. Some, I have a fairly strong and confident view on, but would not be surprised if my view is not the norm. Some, I haven't considered in much detail at all...)

Zvi:

A lot of these are definitely unlikely to be that crucial in the overall picture - but they all have been points that people have relied on in arguments or discussions I've seen or been in, or questions that could potentially change my own model in ways that could matter, or both.