This post is part of the Asymptotic Logical Uncertainty series. Here, I will explain how to talk about other parts of logical uncertainty in the framework of Asymptotic Logical Uncertainty (ALU).
Much work has been done in Logical Uncertainty on the question of how to choose a random complete logical extension of a given incomplete theory. This is equivalent to the question of how to choose a coherent probability assignment as described in section 3 here.
One of the most important properties of a coherent probability assignment is that the probabilities be computably approximable. We can define computable approximability in terms of the machines used in ALU.
Let $P$ be a coherent probability assignment. Let $M$ be a Turing machine which on input $N$ runs quickly and outputs a probability $M(N)$, which represents the probability assigned to $\phi_N$. For each sentence $\psi$, let $\{s_n\}$ be the sequence of indices defined by $\phi_{s_0}=\psi$ and $\phi_{s_{n+1}}=\neg\neg\phi_{s_n}$.
If for every $\psi$ we have $\lim_{n\to\infty}M(s_n)=P(\psi)$, then we say that $M$ computably approximates $P$ and that $P$ is computably approximable.
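As a purely illustrative sketch (not part of the framework), here is how one might watch $M$ along the double-negation sequence of a sentence in code; `M` and `sentence_index` are hypothetical stand-ins for the machine and for the enumeration $\phi_N$:

```python
# Illustration only: `sentence_index` maps a sentence to its index N in the
# enumeration phi_N, and `M` maps an index to a probability in [0, 1].

def double_negation_indices(psi, steps, sentence_index):
    """Yield s_0, ..., s_{steps-1}, where phi_{s_0} = psi and
    phi_{s_{n+1}} is the double negation of phi_{s_n}."""
    sentence = psi
    for _ in range(steps):
        yield sentence_index(sentence)
        sentence = ("not", ("not", sentence))  # syntactic double negation

def approximation_trace(M, psi, steps, sentence_index):
    """Return [M(s_0), ..., M(s_{steps-1})].  If M computably approximates P,
    this sequence converges to P(psi) as steps grows."""
    return [M(s) for s in double_negation_indices(psi, steps, sentence_index)]
```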
Now, given your favorite computably approximable coherent probability assignment $P$, we can ask the following question:
Is it possible for a machine $M$ to simultaneously get the correct limiting probability on all irreducible patterns, and converge to $P(\psi)$ on the double-negation sequence $\{s_n\}$ of every sentence $\psi$?
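To fix intuitions, the two desiderata can be written side by side as follows; this is my paraphrase, reusing the irreducible-pattern terminology from earlier posts in the series rather than anything stated verbatim in this post:

```latex
% My paraphrase of the two conditions on M (not verbatim from the series):
\begin{itemize}
  \item (Irreducible patterns) For every irreducible pattern $\{s_n\}$ with
        probability $p$: $\lim_{n\to\infty} M(s_n) = p$.
  \item (Approximation) For every sentence $\psi$, with $\{s_n\}$ its
        double-negation sequence: $\lim_{n\to\infty} M(s_n) = P(\psi)$.
\end{itemize}
```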
Note that BenfordLearner gets the correct probability whenever $\psi$ is provable or disprovable, because in that case $\{s_n\}$ is itself an irreducible pattern with probability 1 or 0, respectively. However, when $\psi$ is independent, it is not clear whether the probabilities assigned along $\{s_n\}$ even converge.
Getting the correct probability on irreducible patterns only constrains the probabilities assigned to provable or disprovable sentences, while the further requirement of converging to $P(\psi)$ along $\{s_n\}$ only constrains the probabilities assigned to independent sentences. It therefore seems feasible to satisfy both conditions at once, but we cannot easily combine two machines that separately satisfy them, because there is no computable way to determine which sentences are independent.
Using the ALU framework, we can also generalize notions from the theory of random logical extensions. Just to give an example: given a machine $M$ and a simple increasing infinite sequence $\{s_n\}$ such that $\phi_{s_n}\leftrightarrow\phi_{s_{n+1}}$ for all $n$, one can ask whether or not $M(s_n)$ always converges. This is a generalization of the requirement that for coherent $P$, $P(\phi)=P(\psi)$ whenever $\phi\leftrightarrow\psi$.
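For illustration, here is a minimal sketch of how one might probe such a sequence for convergence of $M$, again with hypothetical stand-ins `M` and `index_sequence`:

```python
# Illustration only: `index_sequence` maps n to s_n, an increasing sequence of
# indices of pairwise equivalent sentences, and `M` maps an index to a
# probability.  If M(s_n) converges, the oscillation of any late window of
# the sequence shrinks toward 0.

def window_oscillation(M, index_sequence, start, width):
    """Max minus min of M(s_n) over the window n = start, ..., start + width - 1."""
    values = [M(index_sequence(n)) for n in range(start, start + width)]
    return max(values) - min(values)
```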
There are some choices to be made about how to generalize the conditions for coherence, and it is not yet clear to me whether those choices matter or what the best way to make them is. However, it seems that using the ALU framework we can define a notion of "uniform coherence," which requires not only that the limiting probabilities of individual sentences be coherent, but also that the limiting process respect relations between sentences. I will post more about this after further investigation.