The bound isn't always achievable if you don't know Y. Suppose X and Y are jointly uniform over the 6 possible outcomes, say X, Y ∈ {1, 2, 3} with X ≠ Y. You find out that X = 1 (without loss of generality). You must construct a function S from Y-values to X-values such that S(Y) = X. Because as far as you know, Y could be 2 or 3, and you have to be able to construct X from S and Y in either case, so S(2) = S(3) = 1. But then we just take the most common output of S, and that tells us X, so S carries information about Y after all.
(If you had some other procedure for calculating X from S and Y, then you can make a new function that does that, and call it S, so an arbitrary function from Y to X+{err} is the most general form of S.)
Nice argument - I'm convinced that the bound isn't always achievable. See my response to cousin_it's comment for some related questions.
Yeah, it seems that if Y isn't uniquely determined by X, we can't reversibly erase Y from X. Let's say we flip two coins, X = the first, Y = the sum. Since S depends only on X and some randomness, knowing S is equivalent to knowing some distribution over X. If that distribution is non-informative about Y, it must be (1/2,1/2). But then we can't reconstruct X from S and Y.
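Spelling out the numbers: write the posterior over X implied by S as (p, 1−p). If S says nothing about Y, then P[Y=0|S] = p/2 must equal the prior 1/4, so p = 1/2 for every value of S. But then when Y = 1 (which happens half the time, and is itself uninformative about X), S and Y together still leave X at 50/50 - so not only is the bound H(X) − I(X;Y) = 1 − 1/2 = 1/2 bit unachievable here, S can't retain any information about X at all.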
Nice example - I'm convinced that the bound isn't always achievable. So the next questions are:
My guess is that optimal performance would be achieved by throwing away all information contained in X about the distribution P[Y|X]. We always know that distribution just from observing X, and throwing away all info about that distribution should throw out all info about Y, and the minimal map interpretation suggests that that's the least information we can throw out.
I might be misunderstanding something or have made a mistake, and I'm not gonna try to figure it out since the post is old and maybe not active anymore, but isn't the following a counter-example to the claim that the method of constructing S described above does what it's supposed to do?
Let X and Y be independent coin flips. Then S will be computed as follows:
X=0, Y=0 maps to uniform distribution on {{0:0, 1:0}, {0:0, 1:1}}
X=0, Y=1 maps to uniform distribution on {{0:0, 1:0}, {0:1, 1:0}}
X=1, Y=0 maps to uniform distribution on {{0:1, 1:0}, {0:1, 1:1}}
X=1, Y=1 maps to uniform distribution on {{0:0, 1:1}, {0:1, 1:1}}
But since X is independent of Y, we want S to contain full information about X (plus perhaps some noise), so the support of S given X=1 must not overlap with the support of S given X=0. But it does. For example, {0:0, 1:1} has positive probability of occurring both for X=1, Y=1 and for X=0, Y=0. I.e., conditional on S={0:0, 1:1}, X is still Bernoulli(1/2).
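For concreteness, this is easy to check by brute force; here's a quick sketch using exact fractions (the enumeration and variable names below are just one way to write down the construction above):

```python
from collections import defaultdict
from fractions import Fraction
from itertools import product

# Enumerate the construction for two independent fair coin flips X and Y:
# S maps each possible Y-value to an X-value -- the observed Y-value maps
# to the observed X, the other Y-value maps to a fresh sample from P[X|Y]
# (which is uniform, by independence).
joint = defaultdict(Fraction)  # joint distribution P[X, S]
for x, y, noise in product([0, 1], repeat=3):
    s = {y: x, 1 - y: noise}               # the dictionary-valued S
    key = (x, tuple(sorted(s.items())))
    joint[key] += Fraction(1, 8)            # x, y, noise each uniform

# Posterior over X given S = {0: 0, 1: 1}:
target = ((0, 0), (1, 1))
post = {x: joint[(x, target)] for x in [0, 1]}
total = sum(post.values())
print({x: p / total for x, p in post.items()})
# -> {0: Fraction(1, 2), 1: Fraction(1, 2)}
```

The printed posterior is 50/50, matching the claim above.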
Probability as Minimal Map argued that the probability P[q|X] is a minimal representation of the information in data X which is relevant to the query q. In other words, P[q|X] is a perfectly efficient map of some territory based on data X, suitable for the query q. More generally, a full probability distribution P[Q|X] is a perfectly efficient map suitable for any queries about the random variable Q. Bayesian probability tells how to update our map as we gain new information.
What if, for some reason, we wanted to instead throw away some particular information? We still want to represent our map using a probability distribution, but rather than adding information to it (via Bayes’ Rule), we want to remove some information.
Let’s start with an artificial but concrete example.
Coin-Sum Problem
I flip two coins, and observe both outcomes - either 0 or 1 for each. I want to keep as much information as possible, while throwing away all information from the observation relevant to their sum. How should I “update” my distribution over the outcomes?
We’ll write the outcome of our coinflip as B=(B1,B2) (“B” for “bit” or “binary” or “Bernoulli”), and our final information-removed distribution as P[B|B,/∑B] - so the notation /∑B indicates that we should throw out all info about the sum (the “/” is meant to evoke e.g. a group quotient operation). Note that this kind of “remove information” operation does not follow the usual rules of Bayesian probability.
At first, I reason as follows: my final distribution is a function of the outcomes I saw. To convey no information about the sum, the distribution I end up with has to look the same regardless of what the sum was - and for a deterministic function of the outcomes, that means it has to be the same for every outcome … so in order to throw out all information about the sum, we have to throw out all the information from the observations. We effectively don't update at all, and just keep our prior.
This seems kind of weird from an information-theoretic standpoint. We have 2 bits of information from the observation, and the sum only contains −(1/4 log 1/4 + 1/2 log 1/2 + 1/4 log 1/4) = 3/2 bits. It seems like we ought to be able to keep around 1/2 bit of information, somehow.
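To make that limit explicit: the sum carries I(B;∑B) = H(∑B) = 3/2 of the 2 bits in B. For any S built from B plus independent noise which says nothing about the sum, I(B;∑B|S) = I(B;∑B) − I(S;∑B) = 3/2, and since I(B;∑B|S) ≤ H(B|S), this forces H(B|S) ≥ 3/2, i.e. I(S;B) ≤ 2 − 3/2 = 1/2 bit.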
The trick turns out to be right at the start of the above reasoning: “my final distribution is a function of the outcomes I saw”. This quietly assumed that our “update” had to be deterministic. We can do better if we allow randomized updates - e.g. if I see (0, 0), then I randomly choose one of several distributions to update to.
Here’s an example of a randomized strategy:
If I see (0, 1), I update to the distribution which puts probability 1/2 on (0, 1), 1/4 on (0, 0), and 1/4 on (1, 1).
If I see (1, 0), I update to the mirror-image distribution: probability 1/2 on (1, 0), 1/4 on (0, 0), and 1/4 on (1, 1).
If I see (0, 0) or (1, 1), I flip a coin and randomly update to one of those two distributions.
Half the time (when the sum is 1) this strategy conveys 1 bit of information (whether the first or second coin is the 1), so overall we keep 1/2 bit - exactly the information-theoretic limit.
Generalizing
The general form of the problem: we observe some random variable X, and we want to throw away all the information X gives us about some other variable Y, while keeping as much of the rest as possible. Concretely, we want to construct a new variable S from X (possibly using extra randomness) such that S contains no information about Y, but keeps as much information about X as it can; the information-theoretic limit is keeping H(X) − I(X;Y) bits.
In the coin example above, we can encode S as: 0 when B is (0,1), 1 when B is (1,0), otherwise a random 50/50 choice between 0 and 1. Calculating P[B|S] then yields the distributions from the previous section. In this case, we can interpret S as the value of coin 1 assuming the sum was 1 - otherwise it's random.
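As a sanity check, we can enumerate this encoding exactly and confirm that it keeps 1/2 bit of information about B while carrying none about the sum (a small sketch; the helper below just computes mutual information from a joint distribution):

```python
from collections import defaultdict
from fractions import Fraction
from itertools import product
from math import log2

# S = 0 when B is (0,1), S = 1 when B is (1,0), otherwise a fair coin flip.
joint = defaultdict(Fraction)  # joint distribution P[B, S]
for b1, b2, noise in product([0, 1], repeat=3):
    s = 0 if (b1, b2) == (0, 1) else 1 if (b1, b2) == (1, 0) else noise
    joint[((b1, b2), s)] += Fraction(1, 8)

def mutual_info(pairs):
    """I(A;B) in bits, given a dict {(a, b): probability}."""
    pa, pb = defaultdict(Fraction), defaultdict(Fraction)
    for (a, b), p in pairs.items():
        pa[a] += p
        pb[b] += p
    return sum(p * log2(p / (pa[a] * pb[b])) for (a, b), p in pairs.items() if p)

sum_joint = defaultdict(Fraction)  # joint distribution P[sum(B), S]
for (b, s), p in joint.items():
    sum_joint[(b[0] + b[1], s)] += p

print(mutual_info(joint))      # 0.5 -> S keeps half a bit about B
print(mutual_info(sum_joint))  # 0.0 -> S says nothing about the sum
```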
Two obvious questions are existence and uniqueness: is there always an S which achieves the information-theoretic bound, and if so, is the optimal S unique?
Uniqueness we can answer easily: no, the “optimal” reduced information S is not unique. Counterexample: flip two coins, and throw out all info about whether the sum is odd or even. We have 2 bits of info total, we have to throw out 1 bit of info, and we can keep around 1 bit in (at least) two different ways: just keep the outcome of the first coin, or just keep the outcome of the second coin. Either of these, by itself, contains 1 bit of info and tells us nothing about whether the sum is odd or even.
Existence is trickier.
If we observe both X and Y, then we can definitely construct S, using cousin_it’s method from this comment: for each possible value of Y, S contains an X-value randomly sampled from P[X|Y] - except for the observed Y-value, which maps to the observed X-value. For instance, if we observed (1, 0) in our coin-sum problem, then one possible value of S would be {0:(0,0),1:(1,0),2:(1,1)} - so S tells us that if the sum Y=1, then X = (1, 0). S tells us nothing at all about Y, but if we later re-learn the value of Y, then we can just look up the value of X in S. This implies that S achieves the information-theoretic bound: S and Y together are sufficient to reconstruct X.
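As a sketch, the construction is easy to write down in code (the function name, the dict representation of S, and the joint-distribution input are my own choices; it just implements the sampling rule described above):

```python
import random

def throw_away_info(joint, x_obs, y_obs, rng=random):
    """cousin_it's construction: S maps each possible Y-value to an X-value.

    joint : dict {(x, y): probability} for the joint distribution of X and Y.
    The observed Y-value maps to the observed X-value; every other Y-value
    maps to a fresh sample from P[X | Y = y].
    """
    s = {}
    for y in {y for (_, y) in joint}:
        if y == y_obs:
            s[y] = x_obs
        else:
            xs = [x for (x, yy) in joint if yy == y]
            weights = [joint[(x, y)] for x in xs]
            s[y] = rng.choices(xs, weights=weights)[0]
    return s

# Coin-sum example: X = (B1, B2), Y = B1 + B2, and we observed X = (1, 0).
coin_joint = {((b1, b2), b1 + b2): 0.25 for b1 in (0, 1) for b2 in (0, 1)}
print(throw_away_info(coin_joint, x_obs=(1, 0), y_obs=1))
# -> {0: (0, 0), 1: (1, 0), 2: (1, 1)} (up to dict ordering)
```

If we later learn Y, we just look up s[Y] to recover X.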
However, we may want to throw out whatever information we have about some Y, even when we don’t have complete information about Y. In other words, what we really want is to construct S without knowing Y - i.e. construct S from X alone. I still don’t know whether the information-theoretic bound is always achievable in this case, or what the optimal information content is if the bound isn’t achievable.
Why Is This Interesting?
A lot of tricky problems in decision theory feel like the agent should choose to throw away information. Chris Leong argues that this is a useful way to think about logical counterfactuals in general, and I’ve heard other people express similar intuitions. The approach in this post offers a way to formalize and quantify “forgetting”, in a form potentially useful for these kinds of applications.
Along similar lines: the sort of randomization used in game theory feels different from the sort of “randomness” involved in uncertainty. The former is utilitarian, and used to confuse external opponents. The latter is epistemic, and only relevant to internal beliefs. The randomization involved in throwing away information feels similar to game-theoretic randomness, yet it’s in the sort of informational framework usually used for reasoning about uncertainty - suggesting that these two types of randomness could be handled in a more unified conceptual framework.
For example: suppose agent 1 and agent 2 play a zero-sum game. Agent 2 chooses its action by running a copy of agent 1’s source code, then outputting whatever action beats the copy’s action. What should agent 1 do? One possible approach is for agent 1 to throw away any information about its copy’s action which is contained in its own action - so that the copy’s behavior yields no information about agent 1’s behavior. Randomization then naturally enters the picture, due to throwing away information. (This doesn’t seem like quite the right decision-making rule in general, but it feels like something similar could work - the main idea is to make “throw away information” an action in the game itself, and push game-theoretic randomization of choices into the info-throw-away action.) One could imagine re-deriving Nash equilibria via agents throwing away information as an action, without any other source of randomness. Whether that can actually work is another open problem.