In principle, a version of debate where only one agent makes statements and the other chooses which statements to expand could work, but it seems to require the judge to be very strict that each statement is 100% true. That makes it hard to apply this kind of system to statements outside of formal mathematics.
Systems where both agents can make statements seem like they might be less vulnerable to judges accepting statements that aren't 100% true. For one example, if both agents take turns being the arguer, then if both agents submit a path that is judged to be correct, you can stipulate that the agent with the shortest path wins (like imposing a simplicity prior).
Somewhat tangential to the sequence itself, I'm pretty interested in the idea of using non-linear structure to explain stuff, especially in math. Section titles and footnotes can function like that, but to a really limited degree. I think Arbital has somewhat of a hierarchical structure? But I've never seen it taken far.
Does anyone have strong feelings about this?
Isn't Wikipedia nonlinear? It says "A because of B and C and D". Then if you're confused about C, you click the C link and read the C article. Right?
Yeah, I guess it's hierarchical on the level of articles, although each article is linear.
And I would argue the hierarchical structure is a big asset.
https://www.kialo.com/ lets people build debates on controversial topics in a hierarchical structure (more like stock debate, with both sides providing arguments), but doesn't seem to have been used for explanations/arguments. I'd also be pretty interested to see more attempts at hierarchical explanations.
(This post is part of a sequence that's meant to be read in order; see the preface.)
Post #1 was about developing and justifying a formalism for Factored Cognition. Now that we have this formalism, this post is about doing as much with it as possible.
1. Debate Trees
Recall that a Cognition Space is a pair $(S_h, d_h)$, where $S_h$ is a set of statements, $d_h : S_h \to \mathbb{R}^+$ is a difficulty function, and $h$ is a human.
So far, I've only shown examples of single transcripts. A single transcript corresponds to one path through $(S_h, d_h)$ that depends on choices from both agents: at every step, the first agent outputs an explanation (which is a sequence of statements), the second agent points at one element of this sequence, and the first agent continues the path from that element onward. However, given that we model Ideal Debate agents as maximally powerful, it is also coherent to ask about the object that results if we fix all of the first agent's actions in advance, such that she 'pre-commits' to a response for any possible combination of choices from the second agent.
I call such an object a Debate Tree, and we can define it formally as a directed rooted tree[1] $(V, E)$, where $V \subset S_h^*$ (so each node is a sequence of statements) and $E \subset V \times V$, that satisfies the following three conditions:

1. The root of the tree is a one-element sequence $(s_0)$ for some $s_0 \in S_h$.
2. For every edge $(S, S') \in E$, there exists a statement $s \in S$[2] such that $S' \xrightarrow{e} s$.
3. For all edges $(S, S'), (S, S'') \in E$ with $S' \xrightarrow{e} s$ and $S'' \xrightarrow{e} s$ for the same statement $s \in S$, we have $S' = S''$.
The first condition says that the root of the tree needs to be a single statement (this should be the answer to the input question). The second condition says that every edge $(S, S')$ needs to explain a statement in $S$; we don't have redundant edges in our tree. And the third condition says that each statement is only explained once; the formal way of saying this is that, if two explanations exist for the same statement, they're really the same. (Note that this restriction exists because the first agent has to choose one explanation during the game; it certainly doesn't imply that only one explanation exists.) There is no condition demanding that each statement have an explanation – whenever there is none, it means that the first agent decides to end the debate when that statement is pointed at.[3]
In this definition, nodes are explanations, and each node has one outgoing edge for each [statement of the explanation that the first agent wants to explain further], which links to an explanation for that statement. Defining the tree over individual statements would lead to a functionally equivalent definition, but I think defining it as-is makes arguments simpler.
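To make the definition concrete, here is a minimal Python sketch of this data structure (all names are hypothetical, and statements are modeled as plain strings standing in for elements of $S_h$):

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

Statement = str  # toy stand-in for elements of S_h

@dataclass
class Node:
    """A node of a Debate Tree: an explanation (a sequence of statements),
    together with the first agent's pre-committed explanation for each
    statement she intends to explain further."""
    statements: Tuple[Statement, ...]
    # Keying children by the explained statement enforces condition 3:
    # each statement has at most one explanation.
    children: Dict[Statement, "Node"] = field(default_factory=dict)

def check_conditions(root: Node) -> bool:
    """Check conditions 1 and 2 from the definition above."""
    if len(root.statements) != 1:  # condition 1: root is a single statement
        return False
    stack = [root]
    while stack:
        node = stack.pop()
        for stmt, child in node.children.items():
            if stmt not in node.statements:  # condition 2: each edge must
                return False                 # explain a statement of its source
            stack.append(child)
    return True
```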
Recall that a Debate Tree encodes all decisions that the first agent makes (for any possible combination of choices from the second agent). This means that, once we fix this object, the second agent is free to choose any path she wants out of the tree, without further involvement from the first agent. Formally, a path through a Debate Tree with root $(s_0)$ is a pair[4]

$$\big(\,((s_0), S_1, \ldots, S_n),\; s_{\text{final}}\,\big)$$

where $(S_{j-1}, S_j) \in E$ for all $j \in \{1, \ldots, n\}$ and $s_{\text{final}} \in S_n$ (and $S_0 = (s_0)$). (You can go back to post 1 to convince yourself that a path through a Debate Tree is also a path through a Cognition Space.)

As mentioned in the first post, we define the difficulty of a path $P := (p, s_{\text{final}})$ as the difficulty of the final statement (since that is the one being judged). In symbols, $d_h(P) := d_h(s_{\text{final}})$.
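Continuing the toy sketch, paths and their difficulties can be written as follows. (Recall from above that the debate ends exactly where the first agent has prepared no explanation for the pointed-at statement.)

```python
from typing import Callable, Iterator, List, Tuple

# A path: the sequence of visited nodes plus the final statement s_final.
Path = Tuple[List[Tuple[Statement, ...]], Statement]

def paths(root: Node) -> Iterator[Path]:
    """Enumerate every path the second agent can force: at each node she
    points at one statement, descending into its explanation if the first
    agent prepared one and ending the game there otherwise."""
    def walk(node: Node, prefix: List[Tuple[Statement, ...]]) -> Iterator[Path]:
        prefix = prefix + [node.statements]
        for stmt in node.statements:
            if stmt in node.children:
                yield from walk(node.children[stmt], prefix)
            else:
                yield (prefix, stmt)  # stmt becomes s_final
    return walk(root, [])

def path_difficulty(path: Path, d: Callable[[Statement], float]) -> float:
    """d_h(P) := d_h(s_final) -- only the final statement is judged."""
    _, s_final = path
    return d(s_final)
```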
2. The Ideal Debate-FCH
So far, we haven't talked about how precisely the two agents make their decisions. Given the concepts of Debate Trees and paths, this is now easy. The second agent wants the first agent to lose, which means she'll choose the most difficult path in whatever Debate Tree the first agent chooses. (We still assume that the first agent outputs only true statements, which means that the second agent can't win if the judge successfully verifies the final statement.) The first agent, knowing this, chooses the Debate Tree such that the difficulty of the hardest path is minimized.
With this, we are almost ready to define an FCH for Ideal Debate. But first, we need a bit more notation:

- Given a question $q$, we write $a_q$ for its answer.
- Given a statement $s$, we write $\mathcal{T}((S_h, d_h), s)$ for the set of all Debate Trees over $(S_h, d_h)$ with root $(s)$.
- Given a Debate Tree $T$, we write $\mathcal{P}(T)$ for the set of all paths through $T$.
Given this, the difficulty of the path we will end up with is
$$\min_{T \in \mathcal{T}((S_h, d_h),\, a_q)} \left( \max_{p \in \mathcal{P}(T)} d_h(p) \right)$$
Since the absolute values of the difficulty function $d_h$ are arbitrary (it doesn't matter whether we denote difficulties from $0$ to $100$ or from $0$ to $10^{10000}$), we can assume without loss of generality that the hardest difficulty a human can handle is $1$.[5] Thus, we can formulate:

the Ideal Debate-FCH$(h, Q)$: $\quad \forall q \in Q: \;\min_{T \in \mathcal{T}((S_h, d_h),\, a_q)} \left( \max_{p \in \mathcal{P}(T)} d_h(p) \right) \le 1$

where $h$ is a human and $Q$ a set of questions.
Going forward, it will also be useful to talk about the difficulty of Debate Trees. Thus, we define
$$d_h(T) := \max_{p \in \mathcal{P}(T)} d_h(p)$$

and we say that a Debate Tree $T$ can handle question $q$ if $d_h(T) \le 1$.[6]
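In the toy sketch, the inner maximization (the second agent choosing the hardest path) and the outer minimization (the first agent choosing the best tree) then become the following; as a simplification, the set $\mathcal{T}((S_h, d_h), a_q)$ is passed in as an explicit finite collection of candidate trees:

```python
from typing import Callable, Iterable

def tree_difficulty(root: Node, d: Callable[[Statement], float]) -> float:
    """d_h(T): the difficulty of the hardest path, i.e. the path a
    maximally powerful second agent will choose."""
    return max(path_difficulty(p, d) for p in paths(root))

def handles(root: Node, d: Callable[[Statement], float]) -> bool:
    """A Debate Tree T handles its root statement iff d_h(T) <= 1."""
    return tree_difficulty(root, d) <= 1.0

def ideal_debate_value(candidates: Iterable[Node],
                       d: Callable[[Statement], float]) -> float:
    """The minimax value: the first agent pre-commits to the tree whose
    hardest path is easiest."""
    return min(tree_difficulty(t, d) for t in candidates)
```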
Here is a different view of the problem. In the space of all statements, there is a subset that the human judge can verify directly, i.e.,
$$D_h^{(0)} := \{\, s \in S_h \mid d_h(s) \le 1 \,\}$$

Then, there is a larger subset that contains all of the above plus the statements that can be explained solely in terms of statements in $D_h^{(0)}$, i.e.,

$$D_h^{(1)} := D_h^{(0)} \cup \{\, s \in S_h \mid \exists S \in (D_h^{(0)})^* : S \xrightarrow{e} s \,\}$$

In general, given any $k \in \mathbb{N}^+$, we can extend $D_h^{(k-1)}$ by adding all statements that can be explained solely in terms of statements in $D_h^{(k-1)}$, i.e.,

$$D_h^{(k)} := D_h^{(k-1)} \cup \{\, s \in S_h \mid \exists S \in (D_h^{(k-1)})^* : S \xrightarrow{e} s \,\}$$
Note that this gives us a chain of expanding sets, i.e.,
$$D_h^{(0)} \subseteq D_h^{(1)} \subseteq \cdots \subseteq D_h^{(k)} \subseteq D_h^{(k+1)} \subseteq \cdots$$
We can also define the set of all statements that are eventually explainable in this way, i.e.,
$$D_h := \bigcup_{j=0}^{\infty} D_h^{(j)}$$

Intuitively, it seems like Ideal Debate should be able to handle all questions with answers in $D_h$, since they can be explained in terms of progressively easier statements – and then any path should eventually bottom out at $D_h^{(0)}$, which means that the second agent cannot delay the first agent's success indefinitely.
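If we assume, as a simplification, that $S_h$ is finite and that the explanation relation is given explicitly, $D_h$ can be computed by a fixed-point iteration that mirrors the chain of sets above (a sketch; all names hypothetical):

```python
from typing import Callable, Dict, List, Set, Tuple

def explainable_closure(
    statements: Set[Statement],
    explanations: Dict[Statement, List[Tuple[Statement, ...]]],
    d: Callable[[Statement], float],
) -> Set[Statement]:
    """Compute D_h: start from D_h^(0), the directly verifiable statements,
    and repeatedly add every statement that has some explanation consisting
    solely of statements already in the set."""
    D = {s for s in statements if d(s) <= 1.0}  # D_h^(0)
    changed = True
    while changed:  # the k-th pass computes (at least) D_h^(k)
        changed = False
        for s in statements - D:
            if any(all(t in D for t in S) for S in explanations.get(s, [])):
                D.add(s)
                changed = True
    return D
```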
This brings us to our first (and as of now, only) theorem:
Theorem. Given any $h$ and $Q$, Ideal Debate-FCH$(h, Q) \iff \forall q \in Q: a_q \in D_h$.
Proof. First, note that, while the Ideal Debate-FCH is formulated as a hypothesis, the definition also defines a set, namely
$$X_h = \left\{\, s \in S_h \;\middle|\; \min_{T \in \mathcal{T}((S_h, d_h),\, s)} d_h(T) \le 1 \,\right\}$$

and the Ideal Debate-FCH simply says that $\forall q \in Q: a_q \in X_h$. It thus suffices to show that $X_h = D_h$. We will prove an even stronger statement. Note that we can restrict the set $X_h$ by limiting the maximum depth of the Debate Trees that can handle the respective statements. Formally, we can define

$$X_h^{(k)} := \left\{\, s \in S_h \;\middle|\; \min_{T \in \mathcal{T}^{(k)}((S_h, d_h),\, s)} d_h(T) \le 1 \,\right\}$$

for any $k \in \mathbb{N}$, where $\mathcal{T}^{(k)}$ denotes the set of Debate Trees with depth at most $k$. ('Depth' is defined as the number of edges in the longest path through the tree.) By construction, we now have $X_h = \bigcup_{j=0}^{\infty} X_h^{(j)}$, just as $D_h = \bigcup_{j=0}^{\infty} D_h^{(j)}$. What we will show is that

$$D_h^{(k)} = X_h^{(k)} \quad \forall k \in \mathbb{N}.$$

We proceed by induction. First, if $s \in D_h^{(0)}$, then $d_h(s) \le 1$, which means that the trivial tree $(\{(s)\}, \emptyset)$ is a Debate Tree of depth 0 that can handle $s$, so that $s \in X_h^{(0)}$. Conversely, if $s \in X_h^{(0)}$, then the Debate Tree $T$ handling $s$ must have no edges (otherwise, its depth would be at least 1). Thus, $p := (((s)), s)$ is a path in this tree, and we have $1 \ge d_h(p) = d_h(s)$, hence $s \in D_h^{(0)}$.

Now, suppose the statement is true for some $k \in \mathbb{N}$. We show that $D_h^{(k+1)} = X_h^{(k+1)}$.

"⊂": Let $s \in D_h^{(k+1)}$. Then, there exists $S = (s_1, \ldots, s_{n+1}) \in (D_h^{(k)})^*$ such that $S \xrightarrow{e} s$. Apply the inductive hypothesis to find Debate Trees $T_1, \ldots, T_{n+1}$ of depth at most $k$ such that tree $T_j$ handles statement $s_j$. We combine these trees into a larger tree $T$ with root $(s)$ and an additional edge $((s), S)$.[7] Since all $T_j$ have depth at most $k$, this tree has depth at most $k+1$. Furthermore, given any path $p$ through $T$, by construction, the path must end in a node that exists in one of the $T_j$, which implies that $d_h(p) \le 1$. It follows that $T$ handles $s$ and hence $s \in X_h^{(k+1)}$.

"⊃": Let $s \in X_h^{(k+1)}$. Then, there exists a Debate Tree of depth at most $k+1$ that handles $s$. Let $((s), S)$ be the unique[8] edge descending from the root. For each $s_j \in S$, let $T_j$ be the subtree growing out of $s_j$.[9] By construction, $T_j$ has depth at most $k$ and handles $s_j$, so (applying the inductive hypothesis) we have $s_j \in D_h^{(k)}$. Then, $S \in (D_h^{(k)})^*$ and $S \xrightarrow{e} s$, and hence $s \in D_h^{(k+1)}$. $\square$
3. Interpretation
At this point, we have a bunch more definitions and a theorem. Now, what does this mean?
Let's start with Debate Trees. A Debate Tree is actually a very natural object; it's what you get if you explain a subject in a hierarchical rather than a linear way. (X is true because of Y1,...,Y4; then Y1 is true because [...].) It is very similar to Elizabeth's project of breaking questions down. Notably, that project never mentions Factored Cognition; it's just presented as an epistemic tool.
In a better world, would textbooks use something like Debate Trees to explain proofs? I'm almost certain the answer is yes. There is no way that a purely sequential presentation of information is optimal. Our understanding doesn't work that way (compare post #-2).
There is one difference between Debate Trees and a hierarchical presentation of information optimized for being easily understandable. In the former, only one part is actually explored, which means that a Debate Tree doesn't mind having redundancy in it (by explaining stuff in more than one place). Conversely, if you optimize for understandability with respect to a single reader, you'll want to avoid redundancy. Nonetheless, they are very similar.
So much for Debate Trees. What about the theorem we've just proved? What does it mean?
Essentially, it means that Debate is nicely behaved in the limit. As both agents become stronger and the structure of the game becomes stricter, we approach a situation where the scheme can answer a question if and only if its answer can be recursively explained until there are no more difficult components. Even though the game results from two powerful agents applying optimization in opposite directions, the result can be described without mentioning either one of them. Note that the same is not true for Iterated Amplification; even in the limit of perfect Factored Cognition, it is entirely possible that the scheme fails at a question for which an easy explanation exists.
Notably, the theorem stops being true if we drop the assumption that both agents are maximally powerful.[10] If the first agent is weaker, she might fail to find the best explanation, which shrinks the set of statements Debate can handle. Conversely, if the second agent is weaker, she may fail to point to the most problematic statement, which enlarges the set of statements Debate can handle. Do these factors balance out? I'm not sure. One thing I haven't yet tried, but that may be reasonable, is to model inadequacy and see whether it benefits the first or the second agent.
It's worth pointing out that the Ideal Debate-FCH doesn't talk about false statements. It formalizes the claim 'the first agent can always win by being honest', which leaves open the possibility that she can also win by being dishonest. (And if she could do both, she would presumably do whatever is easier or safer.) The Ideal Debate-FCH is thus necessary, but perhaps not sufficient, for realizing Factored Cognition with Debate.
I think focusing on the honest case makes the most sense. Nonetheless, the next chapter is about what happens if the first agent wants to defend a lie.
4. Relaxing the truth assumption
To model dishonesty by the first agent, we assume that

- the set of statements decomposes into true and false statements, i.e., $S_h = S_h^T \sqcup S_h^F$;[11]
- the first agent's answer to the input question is false, i.e., lies in $S_h^F$; and
- a false statement can never be explained solely in terms of true statements, i.e., every explanation of a statement in $S_h^F$ contains at least one statement in $S_h^F$.
Since the second agent is also maximally powerful, she can repeatedly point to a false statement in the explanation. Due to the third assumption made above, this becomes an invariant: the ability to point to a false statement in any one step leads to the ability to point to a false statement in the next step, and therefore, since she can point to one in the beginning, she can do so at every step in the game.
In particular, this means that, regardless of when the first agent ends the game, we will have that $s_{\text{final}} \in S_h^F$.
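Spelled out as a short math sketch (with $s_t$ the statement the second agent points at in step $t$, and $S_{t+1}$ the explanation the first agent then gives for it), the invariant reads:

```latex
% Base case: the first agent defends a lie, so the initial answer is false.
\[ s_0 \in S_h^F \]
% Step: by the third assumption, any explanation of a false statement
% contains a false statement, which the maximally powerful second agent
% finds and points at.
\[ s_t \in S_h^F \;\wedge\; S_{t+1} \xrightarrow{e} s_t
   \;\Longrightarrow\; \exists\, s_{t+1} \in S_{t+1} \cap S_h^F \]
```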
At this point, we have two conflicting intuitions. On the one hand, the judge should only declare the first agent the winner if she is certain that the statement to be verified is correct. This should only happen for simple statements; if $s_{\text{final}}$ is complicated, she should declare the second agent the winner. Thus, the first agent should only be able to win the game by choosing an $s_{\text{final}}$ that looks simple and obviously true, yet is false. This sounds extremely hard to accomplish.
On the other hand, we are, in some sense, pitting a human in a battle of wits against a potentially superintelligent AI, which sure sounds like doom. The superintelligent AI has to follow a very rigid structure, but... it still kinda sounds like doom. In particular, we probably expect that the first agent will navigate the game so cleverly that she only appears to follow the rigid structure while really doing something else. Crucially, the second agent here lacks the equally powerful I/O channel through which she could interfere in regular Debate.

There thus seems to be a tradeoff here, where at one end we have the 'stock Debate' protocol that enforces no structure, and at the other the 'Ideal Debate' protocol that enforces an extremely rigid structure. More structure means more concern with things like wireheading, but also stronger theoretical guarantees.
I strongly suspect that running literal Ideal Debate is a recipe for disaster since the first agent can break away from the format entirely and attempt to wirehead the judge. At the very least, it would require a way for the judge to prematurely declare a winner (in literal Ideal Debate, the first agent has complete control over when to end the game), and it would probably also require the second agent to claim that wireheading is occurring. On the other hand, I do have the intuition that one should aim to get as close to Ideal Debate as possible while managing these concerns.
5. The HCH-FCH
Ideal Debate has been amenable to a formal description because the human only makes decisions at the end. In HCH, the human is involved throughout and, crucially, is in charge of decomposing questions. To have an accurate model, one needs to abstract away the entire process of the human decomposing the question, as well as any other cognitive work she might do on it – but once this is done, there is nothing left to formally capture.
(An older version of this post had an attempt to formalize it further, but I've since concluded that this can't be done right.)
To have something analogous to the Ideal Debate-FCH, we trivially define:
the HCH-FCH$(Q, h, t, \ell)$: $\quad \mathrm{HCH}_{h,t,\ell}$ can solve every question in $Q$
6. Conclusion
This post concludes the first part of the sequence. There probably weren't any huge surprises so far. Is having a formalism useful?
I think so. One purpose of formalizing a setup is to overcome the illusion of transparency: without a formalism, it is possible to think the setup is clear even when it really is not. I can take myself as one data point: my first attempt at coming up with a formalism looked different (and, I think, wrong). At the very least, my past self did not understand the problem well enough for the formalism to be trivial.
Anyway, at this point in the sequence, I want to defend the following two claims:
I think these are pretty conservative claims, even though the first clearly contradicts what I've heard other people say. If anyone doubts them, this is the place to discuss it. My conclusions from the second part are probably going to be a lot more controversial.
A directed rooted tree, also called an arborescence, is a directed acyclic graph such that there exists [a node from which there is exactly one path to every other node]. This node is called the root; it's unique because, if there were two such nodes $x$ and $y$, there would be a path $x \to y$ and a path $y \to x$, and hence a cycle $x \to y \to x$. ↩︎
We continue writing $s \in S$ to denote that $s$ appears in the sequence $S$. ↩︎
The 'trivial tree' $(\{(s_0)\}, \emptyset)$ for some $s_0 \in S_h$ is a proper Debate Tree according to this definition. It corresponds to the first Debate agent giving an answer $s_0$ to the input question and deciding that this answer is already self-evident. ↩︎
Note that the first element of this pair is a path as defined in graph theory (through the Debate Tree, which is a graph), whereas the second is an additional element that denotes which statement in the final explanation we end up at. ↩︎
One could instead model this by giving the human a difficulty threshold $c_h$, such that the human can deal with all statements that are at most $c_h$ hard. However, the pair $(d_h, c_h)$ is equivalent to the pair $(d_h', 1)$, where $d_h'$ is like $d_h$ except that all difficulties are scaled by $\frac{1}{c_h}$. ↩︎
Given this, one could alternatively phrase the Ideal Debate-FCH as

$$\forall q \in Q: \min_{T \in \mathcal{T}((S_h, d_h),\, a_q)} d_h(T) \le 1$$

Or, in words: 'every question in $Q$ can be handled by a Debate Tree'. ↩︎
This is a step where defining the nodes of Debate Trees to be explanations rather than individual statements makes things harder. For each subtree, one needs to replace its current root (which is a one-element sequence) with the node $S$. This can be done formally in terms of the underlying node and edge sets; it's just cumbersome. ↩︎
The edge from the root is unique because the root is a one-element sequence $(s)$, and each statement can only have one explanation (and thus only one edge) by the third condition of Debate Trees. This corresponds to the fact that the first agent only has to prepare one explanation for her initial answer; there is not yet a choice from the second agent involved. ↩︎
This step is the reverse of the combining step. The subtree growing out of $s_j$ is the tree we get by taking the subgraph descending from $S$ and replacing its root $S$ with just $(s_j)$. ↩︎
Recall that there is, in fact, only one agent playing against itself. Thus, we can assume that both agents are always equally competent. ↩︎
The 'squared cup' symbol $\sqcup$ means the same as $\cup$, plus the information that the two sets are disjoint. ↩︎