I'm not sure I understand this partitioning of data into a random portion and a deterministic portion. Conceptually, the idea of KC "overestimating" deterministic structure is necessary: the axiom we start with is that the data is generated by a single black-box process, which either runs a program that produces deterministic bits or has access to a true RNG to sample from. Allowing that process to shift between these modes (or to mix processes) seems to lead to a largely circular problem: if A is a random process and B is a deterministic one, we must now also estimate when A is "active" and when B is "active", and this mixing can itself be a random process.
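To make the circularity concrete, here is a toy sketch of the kind of mixed process I have in mind (entirely my own illustration, not from the original discussion; `mixed_process` and the choice of the Thue-Morse sequence as the "deterministic program" are hypothetical stand-ins). The point is that the mask deciding which process is active is itself an n-bit random object, so describing "when A was active" costs about as much as the data it was supposed to explain.

```python
import random

def deterministic_bits(n):
    """A short program producing n structured bits (Thue-Morse sequence as a stand-in)."""
    return [bin(i).count("1") % 2 for i in range(n)]

def mixed_process(n, p=0.5, seed=0):
    """Generate n bits by mixing a deterministic process B with a random process A.

    At each step a random mask decides whether the next bit comes from B
    (the program above) or from A (a coin flip). The mask itself is random,
    which is the circularity: the partition into "random" and "deterministic"
    portions depends on an object that is as hard to describe as the data.
    """
    rng = random.Random(seed)
    det = deterministic_bits(n)
    mask = [rng.random() < p for _ in range(n)]  # the mixing -- itself a random process
    data = [det[i] if mask[i] else rng.randint(0, 1) for i in range(n)]
    return data, mask

data, mask = mixed_process(32)
print("data:", "".join(map(str, data)))
print("mask:", "".join("B" if m else "A" for m in mask))
```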