The framework
Here, I will briefly introduce what I hope is a fundamental and potentially comprehensive set of questions that an AGI safety research agenda would need to answer correctly in order to be successful. In other words, I am claiming that a research agenda that neglects these questions would probably fail to achieve the goal of AGI safety work arrived at in the previous post: to minimize the risk of AGI-induced existential threat.
I have tried to make this set of questions hierarchical, by which I simply mean that particular questions make sense to ask—and attempt to answer—before other questions; that there is something like a natural progression to building an AGI safety research agenda. As such, each question in this framework takes the (hypothesized) answer to the previous question as its input. Here are the questions:
- What is the predicted architecture of the learning algorithm(s) used by AGI?
- What are the most likely bad outcomes of this learning architecture?
- What are the control proposals for minimizing these bad outcomes?
- What are the implementation proposals for these control proposals?
- What is the predicted timeline for the development of AGI?
Some immediate notes and qualifications:
- As stated above, each question Q directly builds on one’s hypothesized answer to question Q-1. This is why I am calling this question-framework hierarchical.
- Question 5 is an exception: it is not hierarchical in this sense like questions 1-4. Instead, I consider one’s hypothesized AGI development timeline to serve as an important ‘hyperparameter’ that calibrates the search strategies researchers adopt when answering questions 1-4.
- I do not intend to rigidly claim that it is impossible to say anything useful about bad outcomes, for example, without first knowing an AGI’s learning algorithm architecture. In fact, most of the outcomes I will actually discuss in this sequence will be architecture-independent (I discuss them for that very reason). I do claim, however, that it is probably impossible to exhaustively mitigate bad outcomes without knowing the AGI’s learning algorithm architecture. Surely, the devil will be at least partly in the details.
- I also do not intend to claim that AGI must consist entirely of learning algorithms (as opposed to learning algorithms being just one component of AGI). Rather, I claim that what makes the AGI safety control problem hard is that the AGI will presumably build many of its own internal algorithms through whatever learning architecture is instantiated. If there are other static or ‘hardcoded’ algorithms present in the AGI, these probably will not meaningfully contribute to what makes the control problem hard (largely because we will know about them in advance).
- If we interpret the aforementioned goal of AGI safety research (minimize existential risk) as narrowly as possible, then we should consider “bad outcomes” in question 2 to be shorthand for “any outcomes that increase the likelihood of existential risk.” However, it seems entirely conceivable that some researchers might wish to expand the scope of “bad outcomes” such that existential risk avoidance is still prioritized, but clearly suboptimal (though non-existential) outcomes are also worth figuring out how to avoid.
- Control proposals ≠ implementation proposals. I will be using the former to refer to things like imitative amplification, safety via debate, etc., while I’m using the latter to refer to the distinct problem of getting the people who build AGI to actually adopt these control proposals (i.e., to implement them).
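To make the hierarchy concrete, here is a minimal sketch (my own illustration, not from the post) of the structure described above: each question consumes the hypothesized answer to the previous question, while the development timeline (question 5) acts as a ‘hyperparameter’ calibrating the whole search rather than as another link in the chain. All names below are hypothetical placeholders.

```python
# Illustrative sketch of the hierarchical question framework.
# 'hypothesize' is a stand-in for actual research, not a real procedure.

QUESTIONS = [
    "Q1: predicted learning-algorithm architecture of AGI",
    "Q2: most likely bad outcomes of that architecture",
    "Q3: control proposals minimizing those outcomes",
    "Q4: implementation proposals for those control proposals",
]

def hypothesize(question, prior_answer, timeline_years):
    # Placeholder answer that explicitly depends on the previous answer
    # and on the timeline 'hyperparameter'.
    return f"answer to [{question}] given [{prior_answer}] (timeline={timeline_years}y)"

def build_agenda(timeline_years):
    answer = None  # Q1 has no predecessor
    agenda = []
    for question in QUESTIONS:
        answer = hypothesize(question, answer, timeline_years)
        agenda.append(answer)
    return agenda

agenda = build_agenda(timeline_years=20)
```

The point of the sketch is only the data flow: question Q cannot be fully answered until Q-1 has a (hypothesized) answer, and the timeline parameter shapes every step without itself being a step.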
Prescriptive vs. descriptive interpretations
I have ordered the proposed question progression so that it is both logically necessary and methodologically useful. Because of this, I think that this framework can be read in two different ways: first, with its intended purpose in mind—to sharpen how the goals of AGI safety research constrain the space of plausible research frameworks from which technical work can subsequently emerge (i.e., it can be read as prescriptive). A second way of thinking about this framework, however, is as a kind of low-resolution prediction about what the holistic progression of AGI safety research will ultimately end up looking like (i.e., it can be read as descriptive). Because each step in the question-hierarchy is logically predicated on the previous step, I believe this framework could serve as a plausible end-to-end story for how AGI safety research will move all the way from its current preparadigmatic state to achieving its goal of successfully implementing control proposals that mitigate AGI-induced existential risks. From this prediction-oriented perspective, then, these questions might also be thought of as the relevant anticipated ‘checkpoints’ for actualizing the goal of AGI safety research.
Let’s now consider each of the five questions in turn. Because they build on one another, it makes sense to begin with the first question and work down the list.