Overview
This post arose out of my attempts to understand IDA and the ways it could fail. It might help you do the same, and it could provide useful vocabulary for discussing desiderata for IDA.
We want IDA to satisfy progress---decomposition should make answering questions easier---and preservation---semantics should be retained across transformations. We need progress in each individual decomposition, and repeated decomposition must eventually simplify every question to the point that a human can answer it directly. Likewise, each individual decomposition and aggregation of questions and answers must introduce no more than a bounded amount of semantic drift, and so must the full sequence of decompositions and aggregations taken together.
IDA
Iterated distillation and amplification (henceforth IDA) is a proposal for improving the capability of human-machine systems to suprahuman levels in complex domains where even evaluation of system outputs may be beyond unaugmented human capabilities. For a detailed explanation of the mechanics, I'll refer you to the original paper just linked, section 0 of Machine Learning Projects for Iterated Distillation and Amplification, or one of the many other explanations floating around the Web.
We can view IDA as dynamic programming with function approximation[1] instead of a tabular cache. Just like the cache in dynamic programming, the machine learning component of IDA is a performance optimization. We can excise it and look at just the divide-and-conquer aspect of IDA in our analysis. Then this simplified IDA roughly consists of: (1) repeatedly decomposing tasks into simpler subtasks; (2) eventually completing sufficiently simple subtasks; and (3) aggregating outputs from subtasks into an output which completes the original, undecomposed task. We'll examine this simplified model[2] in the rest of the post. (If you'd like a more concrete description of the divide-and-conquer component of IDA, there's a runnable Haskell demo here.)
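To make this skeleton concrete, here is a minimal Haskell sketch of the divide-and-conquer core (this is not the linked demo; the types and names are my own illustrative assumptions). It takes as given a predicate for whether a human can answer a question directly, a decomposition step, an aggregation step, and a direct human answerer.

```haskell
-- A minimal sketch of the divide-and-conquer core of IDA with the
-- distillation/ML cache removed. All types and names are hypothetical.
data Question = Question String deriving (Show)
data Answer   = Answer String   deriving (Show)

type AnswerableBy = Question -> Bool               -- can a human answer this directly?
type Decompose    = Question -> [Question]         -- split a question into subquestions
type Aggregate    = Question -> [Answer] -> Answer -- recombine subanswers into an answer
type HumanAnswer  = Question -> Answer             -- direct human answer to an easy question

-- Repeatedly decompose until questions are directly answerable, answer the
-- leaves, then aggregate the subanswers back up into an answer to the original.
ida :: AnswerableBy -> Decompose -> Aggregate -> HumanAnswer -> Question -> Answer
ida answerable decompose aggregate human = go
  where
    go q
      | answerable q = human q
      | otherwise    = aggregate q (map go (decompose q))
```

Distillation would amount to learning an approximation of `go` so that recursive calls can be answered cheaply; removing it leaves exactly the recursion whose progress and preservation properties the rest of the post examines.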
Safety is progress plus preservation
For type systems, the slogan is "safety is progress plus preservation". Because we're using this only as a cute analogy and organizing framework, we'll not get into the details. But for type systems:
Progress: "A well-typed term is [...] either [...] a value or it can take a step according to the evaluation rules."
Preservation: "If a well-typed term takes a step of evaluation, then the resulting term is also well typed."
(Both from (Pierce 2002).)
We also need progress and preservation in IDA. Roughly:
Progress: A question is easy enough to be answered directly or can be decomposed into easier subquestions.
Preservation: The answer from aggregating subquestion answers is just as good as answering the original question.
Let's try to make this more precise.
progress
There are several ways we might interpret "easier". One that seems to have some intuitive appeal is that one question is easier than another if it can be answered with fewer computational resources[3].
Regardless, we'll say that we satisfy progress_qa if a question Q is decomposed into subquestions q1, ..., qn such that no subquestion qi is harder than Q and at least one is strictly easier. This is the most obvious thing IDA is supposed to provide---a way to make hard problems tractable.
But just noting the existence of such a decomposition isn't enough. We also need to be able to find and carry out such a decomposition more easily than answering the original question. We'll call this property progress↓. progress↑ demands that we be able to find and carry out an aggregation of subquestion answers that's easier than answering the original question.
Each of these three properties is necessary but they are not even jointly sufficient for progress[4]---it could be the case that each of decomposition, answering and aggregation is easier than answering the original question but that all three together are not.
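To see how these properties might be checked, here is a sketch that pretends we had the numeric difficulty measure mooted above (footnote [3] notes that "computational resources" is a leaky abstraction, so treat the numbers as illustrative). The names are mine, and the final check is just footnote [4]'s example of a jointly sufficient condition under the assumption that difficulty simply adds.

```haskell
-- Hypothetical difficulty measure, in arbitrary units of computational resources.
type Difficulty = Double

-- progress_qa: every subquestion is no harder than the original question,
-- and at least one is strictly easier.
progressQA :: Difficulty -> [Difficulty] -> Bool
progressQA dQ subs = all (<= dQ) subs && any (< dQ) subs

-- progress↓ / progress↑: finding and carrying out the decomposition
-- (respectively, the aggregation) is itself easier than answering the
-- original question directly.
progressDown, progressUp :: Difficulty -> Difficulty -> Bool
progressDown dQ dDecompose = dDecompose < dQ
progressUp   dQ dAggregate = dAggregate < dQ

-- Footnote [4]'s jointly sufficient condition, if difficulty simply adds:
-- decomposing, answering the subquestions, and aggregating together cost no
-- more than answering the original question by other means.
jointlySufficient :: Difficulty -> Difficulty -> [Difficulty] -> Difficulty -> Bool
jointlySufficient dQ dDecompose subs dAggregate =
  dDecompose + sum subs + dAggregate <= dQ
```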
We can also view this graphically. In the figure below representing a single step of decomposition and aggregation, we want it to be the case that the computation represented by the arrow from original Q0 to corresponding answer A0 is harder than any of the computations represented by the other arrows.
progress_qa, progress↓, and progress↑ mean that the top arrow from Q0 to A0 represents a more difficult computation than the bottom, left, and right arrows, respectively.
preservation
There are also several possible interpretations of "as good as". To start with, let's say that one question and answer pair is just as good as another if the two have exactly the same denotation.
We say that a decomposition satisfies preservation↓ if the denotations of $(Q, A)$ and $(Q, \overline{\text{aggregate}}(\overline{\text{answer}}(\text{decompose}(Q))))$ are identical, where $(Q, A)$ is a question and answer pair, $\overline{\text{aggregate}}$ is an ideal aggregation, and $\overline{\text{answer}}$ is an ideal answering algorithm. We say that an aggregation satisfies preservation↑ if the denotations of $(Q, A)$ and $(Q, \text{aggregate}(\overline{\text{answer}}(\overline{\text{decompose}}(Q))))$ are identical, where $(Q, A)$ is a question and answer pair, $\overline{\text{decompose}}$ is an ideal decomposition, and $\overline{\text{answer}}$ is an ideal answering algorithm.
Explained differently, preservation↓ requires that the below diagram commute while assuming that answering and aggregation are ideal. preservation↑ requires that the diagram commute while assuming that answering and decomposition are ideal.
preservation↓ means that the diagram commutes with an ideal bottom and right arrow. preservation↑ means that the diagram commutes with an ideal bottom and left arrow.
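As a sketch of what these single-step checks amount to, here is one way to write them down, polymorphic in the question, answer, and meaning types. The overbarred (ideal) components from the definitions above become function arguments with a trailing prime; the denotation function and all names are illustrative assumptions on my part.

```haskell
-- preservation↓: with ideal answering and ideal aggregation, the decomposition
-- under test preserves the denotation of the original question-answer pair.
preservationDown
  :: Eq m
  => ((q, a) -> m)       -- denotation of a question-answer pair
  -> (q -> [q])          -- decompose: the component being tested
  -> ([q] -> [a])        -- answer': ideal answering of the subquestions
  -> (q -> [a] -> a)     -- aggregate': ideal aggregation of the subanswers
  -> (q, a)              -- the original pair (Q, A)
  -> Bool
preservationDown denote decompose answer' aggregate' (q, a) =
  denote (q, a) == denote (q, aggregate' q (answer' (decompose q)))

-- preservation↑: with ideal decomposition and ideal answering, the aggregation
-- under test preserves the denotation of the original question-answer pair.
preservationUp
  :: Eq m
  => ((q, a) -> m)       -- denotation of a question-answer pair
  -> (q -> [a] -> a)     -- aggregate: the component being tested
  -> ([q] -> [a])        -- answer': ideal answering of the subquestions
  -> (q -> [q])          -- decompose': ideal decomposition
  -> (q, a)              -- the original pair (Q, A)
  -> Bool
preservationUp denote aggregate answer' decompose' (q, a) =
  denote (q, a) == denote (q, aggregate q (answer' (decompose' q)))
```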
PROGRESS
progress_qa actually isn't sufficient for our purposes---it could be the case that a series of decompositions produces easier and easier questions but never actually produces questions that are simple enough for a human to answer directly. We name the requirement that our decompositions eventually bottom out in human-answerable subquestions PROGRESS_qa.
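A toy illustration of the gap, again using a made-up numeric difficulty measure: suppose humans can directly answer questions of difficulty at most 1, and each decomposition step halves the excess difficulty above 1.

```haskell
-- Difficulties along one branch of the decomposition: 3, 2, 1.5, 1.25, 1.125, ...
-- Every step produces a strictly easier question, so progress_qa holds at each
-- step, but the difficulty never drops to 1 or below, so the recursion never
-- bottoms out in a human-answerable question and PROGRESS_qa fails.
difficulties :: [Double]
difficulties = iterate (\d -> 1 + (d - 1) / 2) 3
```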
PRESERVATION
Now let's relax our definition of "as good as" a bit since it's quite demanding. Instead of requiring that the question and answer pairs have exactly the same denotation, we allow some wiggle room. We could do this in a variety of ways including: (1) suppose there is some metric space of meanings and require that the denotations are within ϵ of each other; (2) require that acting on either question-answer pair produces the same expected utility; (3) require that the utilities produced by acting on each question-answer pair are within ϵ of each other. For the sake of discussion let's assume something like (1) or (3).
Hopefully, the corresponding adjustments to preservation↓ and preservation↑ under this looser interpretation of "as good as" are clear. (Briefly, the answer obtained by decomposing, answering, and aggregating should be within ϵ of the original answer.)
Unfortunately, the new interpretation means that the single-step (i.e. just one level of decomposition and aggregation) properties are no longer sufficient to guarantee multi-step preservation. It could be the case that each step introduces skew less than ϵ but that the cumulative skew between the original question and a fully decomposed set of human-answerable questions exceeds ϵ. We'll call the requirement that the whole series of decompositions keeps the skew below ϵ PRESERVATION↓, and the requirement that the whole series of aggregations keeps the skew below ϵ PRESERVATION↑.
PRESERVATION↓ means that the left-hand side of the diagram doesn't break commutativity. PRESERVATION↑ means that the right-hand side doesn't break commutativity.
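A small numerical sketch of how single-step preservation can hold while multi-step PRESERVATION fails, assuming interpretation (1) above (a metric space of meanings) and made-up numbers:

```haskell
-- Each decomposition/aggregation step shifts the denotation by a skew of 0.09,
-- which stays under epsilon = 0.1, so every single step satisfies preservation.
-- After ten steps the cumulative skew is roughly 0.9, far above epsilon, so the
-- full chain violates PRESERVATION even though no individual step does.
epsilon, perStepSkew :: Double
epsilon     = 0.1
perStepSkew = 0.09

cumulativeSkew :: Int -> Double
cumulativeSkew steps = fromIntegral steps * perStepSkew
-- cumulativeSkew 1  ≈ 0.09  (< epsilon: each single step is fine)
-- cumulativeSkew 10 ≈ 0.9   (> epsilon: the whole chain is not)
```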
Summary
For every question, there must be a full decomposition to human-answerable questions satisfying PROGRESS_qa, and each decomposed set of questions along the way must satisfy each of progress_qa, progress↓, and progress↑. That full decomposition must satisfy PRESERVATION↓ and the corresponding full aggregation must satisfy PRESERVATION↑. Each decomposition and aggregation along the way must satisfy preservation↓ and preservation↑.
progress and preservation properties apply to single steps of decomposition and aggregation. PROGRESS and PRESERVATION properties apply to repeated decomposition and aggregation.
References
Pierce, Benjamin C. 2002. Types and Programming Languages. MIT Press.
1. Asking whether IDA problems have the optimal substructure and overlapping subproblems that dynamic programming requires also seems fruitful. ↩︎
2. This should be okay because function approximation only makes the problems of progress and preservation harder. ↩︎
3. Of course, "computational resources" is a leaky abstraction. ↩︎
4. If we settled on a precise notion of "easier", we could specify what would be sufficient. For example, if difficulty just adds, the overall progress requirement would be that the sum of difficulties from decomposition, aggregation, and answering is no more than the difficulty of answering the original question by other means. ↩︎