This post is part of the Asymptotic Logical Uncertainty series. Here, I will explain how to talk about other aspects of logical uncertainty in the framework of Asymptotic Logical Uncertainty (ALU).

Much work has been done in Logical Uncertainty on the question of how to choose a random complete logical extension of a given incomplete theory. This is equivalent to the question of how to choose a coherent probability assignment, as described in section 3 here.

One of the most important properties of a coherent probability assignment is that its probabilities be computably approximable. We can define computable approximability in terms of the machines used in ALU.

Let $P$ be a coherent probability assignment. Let $M_P$ be a Turing machine which on input $N$ runs quickly and outputs a probability $M_P(N)$, which represents the probability assigned to $\phi_N$. For each sentence $\phi$, let the increasing sequence of indices $n^\phi_k$ be defined by $\phi_{n^\phi_k}=\phi$ and $n^\phi_k<n^\phi_{k+1}$.

If for every sentence $\phi$, we have $\lim_{k\to\infty} M_P(n^\phi_k)=P(\phi)$, then we say $M_P$ computably approximates $P$ and $P$ is computably approximable.
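To connect this with the more familiar two-argument notion of computable approximability, here is one way to see the correspondence, sketched under the assumption that every sentence appears at infinitely many indices of the enumeration; the helpers `sentence` and `approx_P` are placeholders for this illustration, not part of the ALU framework:

```python
# A minimal sketch: packaging a standard two-argument computable approximation
# of P as an ALU-style single-input machine M_P.  The ALU running-time bound
# T(N) is ignored here.

def make_M_P(sentence, approx_P):
    """sentence(N): the N-th sentence in the enumeration, with every sentence
    appearing at infinitely many indices.
    approx_P(phi, k): a computable approximation with
    approx_P(phi, k) -> P(phi) as k -> infinity."""
    def M_P(N):
        phi = sentence(N)
        # Use the input N itself as the approximation stage.  Along the
        # indices n^phi_k at which phi appears we have N -> infinity, so
        # M_P(n^phi_k) = approx_P(phi, n^phi_k) -> P(phi), which is exactly
        # the computable-approximation condition above.
        return approx_P(phi, N)
    return M_P
```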

Now, given your favorite computably approximable coherent probability assignment $P$, we can ask the following question:

Is it possible for a machine $M$ to simultaneously get the correct limiting probability for all irreducible patterns, and converge to the probability $P(\phi)$ on every sentence $\phi$?
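Written out with the notation above (this is just a restatement, with $S$ ranging over irreducible patterns and $p$ denoting the probability of the pattern), the question asks whether a single machine $M$ can satisfy both

$$\lim_{\substack{N\to\infty\\ N\in S}} M(N) = p \quad\text{for every irreducible pattern } S \text{ with probability } p,$$

and

$$\lim_{k\to\infty} M(n^\phi_k) = P(\phi) \quad\text{for every sentence } \phi.$$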

Note that BenfordLearner gets the correct probability whenever $\phi$ is provable or disprovable, because then $\phi$ is itself an irreducible pattern with probability 1 or 0. However, when $\phi$ is independent, it is not clear whether or not the probabilities assigned to $\phi$ even converge.

Getting the correct probability on irreducible patterns is only affected by the probabilities assigned to provable or disprovable sentences, while further getting the correct probabilities with respect to $P$ is only affected by the probabilities assigned to independent sentences. It is therefore feasible to satisfy both conditions at once, but we cannot easily combine the two approaches, because there is no way to effectively determine which sentences are independent.

Using the ALU framework, we can also generalize notions from the theory of random logical extensions. Just to give an example, given a machine $M$ and a simple increasing infinite sequence $m_k$ such that $\phi_{m_k}$ and $\phi_{m_{k+1}}$ are provably equivalent for all $k$, one can ask whether or not $M(m_k)$ always converges. This is a generalization of the requirement that for a coherent $P$, $P(\phi)=P(\psi)$ whenever $\phi$ and $\psi$ are provably equivalent.
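In symbols, under the reading above (where all the $\phi_{m_k}$ are provably equivalent), the generalized condition is that

$$\lim_{k\to\infty} M(m_k) \text{ exists whenever } \vdash \phi_{m_k} \leftrightarrow \phi_{m_{k+1}} \text{ for all } k,$$

mirroring the coherence requirement that $P(\phi)=P(\psi)$ whenever $\vdash \phi \leftrightarrow \psi$.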

There are some choices to be made about how to generalize the conditions for coherence, and it is not clear to me right now whether those choices matter, or what the best way to define them is. However, it seems that using the ALU framework, we can define a notion of "uniform coherence," which requires not only that the limits of the probabilities of individual sentences form a coherent assignment, but also that the limiting process respects relations between sentences. I will post more about this after further investigation.

2 comments

Does $n^\phi_k$ refer to the index of $\phi$? I'm pretty sure I know what you mean, but just checking.

That's right. Or, it was right until just now, when I fixed the notation. Thanks.