So there’s this thing where a system can perform more bits of optimization on its environment by observing some bits of information from its environment. Conjecture: observing an additional N bits of information can allow a system to perform at most N additional bits of optimization. I want a proof or disproof of this conjecture.
I’ll operationalize “bits of optimization” in a similar way to channel capacity, so in more precise information-theoretic language, the conjecture can be stated as: if the sender (but NOT the receiver) observes N bits of information about the noise in a noisy channel, they can use that information to increase the bit-rate by at most N bits per usage.
For once, I’m pretty confident that the operationalization is correct, so this is a concrete math question.
Toy Example
We have three variables, each one bit: Action (A), Observable (O), and Outcome (Y). Our “environment” takes in the action and observable, and spits out the outcome, in this case via an xor function:

Y := A ⊕ O
We’ll assume the observable bit has a 50/50 distribution.
If the action is independent of the observable, then the distribution of outcome is the same no matter what action is taken: it’s just 50/50. The actions can perform zero bits of optimization; they can’t change the distribution of outcomes at all.
On the other hand, if the actions can be a function of O, then we can take either A := O or A := ¬O (i.e. not-O), in which case Y will be deterministically 0 (if we take A := O), or deterministically 1 (for A := ¬O). So, the actions can apply 1 bit of optimization to Y, steering Y deterministically into one half of its state space or the other half. By making the actions a function of observable O, i.e. by “observing 1 bit”, 1 additional bit of optimization can be performed via the actions.
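A quick brute-force sketch of this toy example in Python (the function and variable names here are mine, for illustration):

```python
# Toy example: Y = A xor O, with the observable O a 50/50 bit.
# Without observing O, every action leaves Y at 50/50; observing O,
# the policies A = O and A = not-O pin Y down deterministically.

def outcome_dist(policy):
    """Distribution of Y when A = policy(O) and O is a 50/50 bit."""
    probs = {0: 0.0, 1: 0.0}
    for o in (0, 1):
        y = policy(o) ^ o
        probs[y] += 0.5
    return probs

print(outcome_dist(lambda o: 0))      # blind action -> {0: 0.5, 1: 0.5}
print(outcome_dist(lambda o: o))      # A = O       -> {0: 1.0, 1: 0.0}
print(outcome_dist(lambda o: 1 - o))  # A = not-O   -> {0: 0.0, 1: 1.0}
```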
Operationalization
Operationalizing this problem is surprisingly tricky; at first glance the problem pattern-matches to various standard info-theoretic things, and those pattern-matches turn out to be misleading. (In particular, it’s not just conditional mutual information, since only the sender - not the receiver - observes the observable.) We have to start from relatively basic principles.
The natural starting point is to operationalize “bits of optimization” in a similar way to info-theoretic channel capacity. We have 4 random variables:
- G: “Goal”
- A: “Action”
- O: “Observable”
- Y: “Outcome”
Structurally: G → A ← O, and A → Y ← O.
(This diagram is a Bayes net; it says that G and O are independent, A is calculated from G and O and maybe some additional noise, and Y is calculated from A and O and maybe some additional noise. So, P[G,O,A,Y] = P[G] P[O] P[A|G,O] P[Y|A,O].) The generalized “channel capacity” is the maximum value of the mutual information I(G;Y), over distributions P[A|G,O].
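To make the factorization concrete, here is a direct sampler for the Bayes net in Python. The particular conditionals passed in at the bottom are illustrative placeholders of my own, not part of the problem statement:

```python
import random

# Direct sampler for the factorization
# P[G,O,A,Y] = P[G] P[O] P[A|G,O] P[Y|A,O], with all variables one bit.

def sample(p_a_given_go, p_y_given_ao):
    g = random.randint(0, 1)  # goal G: fair coin
    o = random.randint(0, 1)  # observable O: fair coin, independent of G
    a = int(random.random() < p_a_given_go(g, o))  # action may read G and O
    y = int(random.random() < p_y_given_ao(a, o))  # outcome reads A and O
    return g, o, a, y

# e.g. a policy that copies the observable, in a noisy xor-like environment:
print(sample(lambda g, o: float(o), lambda a, o: 0.9 if a ^ o else 0.1))
```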
Intuitive story: the system will be assigned a random goal G, and then take actions A (as a function of observations O) to steer the outcome Y. The “number of bits of optimization” applied to Y is the amount of information one could gain about the goal G by observing the outcome Y.
In information theoretic language:
- G is the original message to be sent
- A is the encoded message sent in to the channel
- O is noise on the channel
- Y is the output of the channel
Then the generalized “channel capacity” is found by choosing the encoding P[A|G,O] to maximize I(G;Y).
I’ll also import one more assumption from the standard info-theoretic setup: G is represented as an arbitrarily long string of independent 50/50 bits.
So, fully written out, the conjecture says:
Let G be an arbitrarily long string of independent 50/50 bits. Let A, O, and Y be finite random variables satisfying

P[G,O,A,Y] = P[G] P[O] P[A|G,O] P[Y|A,O]

and define

Δ := (max_{P[A|G,O]} I(G;Y)) − (max_{P[A|G]} I(G;Y))

Then

Δ ≤ H(O)
Also, one slightly stronger bonus conjecture: Δ is at most I(O;Y) under the unconstrained maximal P[A|G,O].
(Feel free to give answers that are only partial progress, and use this space to think out loud. I will also post some partial progress below. Also, thank you to Alex Mennen for some help with a couple conjectures along the path to formulating this one.)
Eliminating G
The standard definition of channel capacity makes no explicit reference to the original message G; it can be eliminated from the problem. We can do the same thing here, but it’s trickier. First, let’s walk through it for the standard channel capacity setup.
Standard Channel Capacity Setup
In the standard setup, A cannot depend on O, so our graph looks like G → A → Y ← O … and we can further remove O entirely by absorbing it into the stochasticity of Y.
Now, there are two key steps. First step: if A is not a deterministic function of G, then we can make A a deterministic function of G without reducing I(G;Y). Anywhere A is stochastic, we just read the random bits from some independent part of G instead; Y will have the same joint distribution with any parts of G which A was reading before, but will also potentially get some information about the newly-read bits of G as well.
Second step: note from the graphical structure that A mediates between G and Y. Since A is a deterministic function of G and A mediates between G and Y, we have I(G;Y)=I(A;Y).
Furthermore, we can achieve any distribution P[A] (to arbitrary precision) by choosing a suitable function A(G).
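A short sketch of that last claim: any target distribution P[A] can be realized (to arbitrary precision) by a deterministic function of independent 50/50 bits, via the inverse-CDF trick. All names here are mine, for illustration:

```python
from itertools import product

def action_from_bits(bits, probs):
    """Treat `bits` as the binary expansion of a uniform u in [0,1) and
    return the index of the CDF interval of `probs` that contains u."""
    u = sum(b / 2 ** (i + 1) for i, b in enumerate(bits))
    cum = 0.0
    for a, p in enumerate(probs):
        cum += p
        if u < cum:
            return a
    return len(probs) - 1

# With 10 input bits, a dyadic target like [0.25, 0.75] is hit exactly:
probs = [0.25, 0.75]
counts = [0, 0]
for bits in product((0, 1), repeat=10):
    counts[action_from_bits(bits, probs)] += 1
freqs = [c / 1024 for c in counts]
print(freqs)  # [0.25, 0.75]
```

Non-dyadic targets are approximated with error shrinking exponentially in the number of bits read, which is the “to arbitrary precision” caveat above.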
So, for the standard channel capacity problem, we have P[G,A,Y]=P[G]P[A|G]P[Y|A], and we can simplify the optimization problem:
(max_{P[A|G]} I(G;Y)) = (max_{P[A]} I(A;Y))
Note that this all applies directly to our conjecture, for the part where actions do not depend on observations.
That’s how we get the standard expression for channel capacity. It would be potentially helpful to do something similar in our problem, allowing for observation of O.
Our Problem
The step about determinism of A carries over easily: if A is not a deterministic function of G and O, then we can change A to read random bits from an independent part of G. That will make A a deterministic function of G and O without reducing I(G;Y).
The second step fails: A does not mediate between G and Y.
However, we can define a “Policy” variable
π := (o ↦ A(G, o))
π is also a deterministic function of G, and π does mediate between G and Y. And we can achieve any distribution over policies (to arbitrary precision) by choosing a suitable function A(G,O).
So, we can rewrite our problem as
(max_{P[A|G,O]} I(G;Y)) = (max_{P[π]} I(π;Y))
In the context of our toy example: π has two possible values, (o ↦ o) and (o ↦ ¬o). If π takes the first value, then Y is deterministically 0; if π takes the second value, then Y is deterministically 1. So, taking the distribution P[π] to be 50/50 over those two values, our generalized “channel capacity” is at least 1 bit. (Note that we haven’t shown that no P[π] achieves a higher value in the maximization problem, which is why I say “at least”.)
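We can verify that this 50/50 policy distribution achieves exactly 1 bit of I(π;Y) in the toy example (a small self-contained computation; names are mine):

```python
import math

# Toy example: Y = π(O) xor O, with O uniform, and P[π] 50/50 over
# the identity policy and the negation policy.

policies = [lambda o: o, lambda o: 1 - o]
joint = {}  # (policy index, y) -> probability
for i, pi in enumerate(policies):
    for o in (0, 1):
        y = pi(o) ^ o
        joint[(i, y)] = joint.get((i, y), 0.0) + 0.5 * 0.5

def mutual_information(joint):
    """I(X;Y) in bits, from a joint distribution {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

print(mutual_information(joint))  # 1.0
```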
Back to the general case: our conjecture can be expressed as
Δ = (max_{P[π]} I(π;Y)) − (max_{P[A]} I(A;Y)) ≤ H(O)

where the first optimization problem uses the factorization

P[π,O,Y] = P[π] P[O] P[Y | A=π(O), O]

and the second optimization problem uses the factorization

P[A,O,Y] = P[A] P[O] P[Y | A, O]
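As a numerical sanity check (not a proof) of the conjecture in this reduced form, here is a Python sketch of my own: sample random channels P[Y|A,O] with binary A and O and ternary Y, compute both maxima with the Blahut–Arimoto algorithm, and check Δ ≤ H(O) = 1 bit.

```python
import math
import random

def blahut_arimoto(W, iters=500):
    """Capacity in bits of a discrete memoryless channel, W[x][y] = P(y|x)."""
    n, m = len(W), len(W[0])
    p = [1.0 / n] * n
    for _ in range(iters):
        q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(m)]
        r = [p[x] * math.exp(sum(W[x][y] * math.log(W[x][y] / q[y])
                                 for y in range(m) if W[x][y] > 0))
             for x in range(n)]
        z = sum(r)
        p = [ri / z for ri in r]
    q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(m)]
    return sum(p[x] * W[x][y] * math.log2(W[x][y] / q[y])
               for x in range(n) for y in range(m) if W[x][y] > 0)

def rand_dist(k):
    w = [random.random() for _ in range(k)]
    return [x / sum(w) for x in w]

random.seed(0)
for _ in range(20):
    PY = {(a, o): rand_dist(3) for a in (0, 1) for o in (0, 1)}  # P[Y|A,O]
    # Second factorization (no observation): channel A -> Y, O marginalized.
    W_blind = [[0.5 * PY[a, 0][y] + 0.5 * PY[a, 1][y] for y in range(3)]
               for a in (0, 1)]
    # First factorization (with observation): channel pi = (f(0), f(1)) -> Y.
    W_pi = [[0.5 * PY[f0, 0][y] + 0.5 * PY[f1, 1][y] for y in range(3)]
            for f0 in (0, 1) for f1 in (0, 1)]
    delta = blahut_arimoto(W_pi) - blahut_arimoto(W_blind)
    assert delta <= 1.0 + 1e-6  # conjecture: delta <= H(O) = 1 bit
print("Δ ≤ H(O) held on every sampled channel")
```

Note that the constant policies appear among the rows of W_pi, so Δ ≥ 0 automatically; the nontrivial content is the upper bound.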