This post is a long answer to this comment by cousin_it:
Logical uncertainty is weird because it doesn't exactly obey the rules of probability. You can't have a consistent probability assignment that says axioms are 100% true but the millionth digit of pi has a 50% chance of being odd.
I'd like to attempt to formally define logical uncertainty in terms of probability. I don't know whether the result is novel or useful, but here it is.
Let X be a finite set of true statements of some formal system F extending propositional calculus, like Peano Arithmetic. X is supposed to represent a set of logical/mathematical beliefs of some finite reasoning agent.
Given any X, we can define its "Obvious Logical Closure" OLC(X): the infinite set of statements producible from X by applying the rules and axioms of propositional calculus. An important property of OLC(X) is that it is decidable: for any statement S, an algorithm can determine whether S is true (S∈OLC(X)), false ("~S"∈OLC(X)), or uncertain (neither).
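Since OLC(X) is just the propositional closure of X, membership can be decided by treating every non-propositional subformula as an atom and checking tautological consequence with truth tables. Here is a minimal sketch; the tuple encoding and the function names (`entails`, `classify`) are my own, not from the post:

```python
from itertools import product

# Formulas as nested tuples: ('atom', name), ('not', f),
# ('and', f, g), ('or', f, g), ('implies', f, g).

def atoms(f):
    """Collect the atom names occurring in a formula."""
    if f[0] == 'atom':
        return {f[1]}
    return set().union(*(atoms(g) for g in f[1:]))

def ev(f, v):
    """Evaluate formula f under truth assignment v (dict: atom -> bool)."""
    op = f[0]
    if op == 'atom':    return v[f[1]]
    if op == 'not':     return not ev(f[1], v)
    if op == 'and':     return ev(f[1], v) and ev(f[2], v)
    if op == 'or':      return ev(f[1], v) or ev(f[2], v)
    if op == 'implies': return (not ev(f[1], v)) or ev(f[2], v)
    raise ValueError(op)

def entails(X, S):
    """True iff S holds in every truth assignment satisfying all of X,
    i.e. S is a propositional consequence of X."""
    names = sorted(set().union(atoms(S), *(atoms(f) for f in X)))
    for bits in product([False, True], repeat=len(names)):
        v = dict(zip(names, bits))
        if all(ev(f, v) for f in X) and not ev(S, v):
            return False
    return True

def classify(X, S):
    """'true' if S is in OLC(X), 'false' if ~S is in OLC(X), else 'uncertain'."""
    if entails(X, S):          return 'true'
    if entails(X, ('not', S)): return 'false'
    return 'uncertain'
```

For example, with `A = ('atom', 'A')` and `B = ('atom', 'B')`, `classify([A, ('implies', A, B)], B)` returns `'true'`, while a fresh atom not mentioned in X comes back `'uncertain'`. The enumeration is exponential in the number of atoms, but it is finite, which is all the decidability claim needs.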
We can now define the "conditional" probability P(*|X) as a function from {the statements of F} to [0,1] satisfying the axioms:
Axiom 1: Known true statements have probability 1:
P(S|X)=1 iff S∈OLC(X)
Axiom 2: The probability of a disjunction of mutually exclusive statements is equal to the sum of their probabilities:
"~(A∧B)"∈OLC(X) implies P("A∨B"|X) = P(A|X) + P(B|X)
From these axioms we can get all the expected behavior of the probabilities:
P("~S"|X) = 1 - P(S|X)
P(S|X)=0 iff "~S"∈OLC(X)
0 < P(S|X) < 1 iff S∉OLC(X) and "~S"∉OLC(X)
"A=>B"∈OLC(X) implies P(A|X)≤P(B|X)
"A<=>B"∈OLC(X) implies P(A|X)=P(B|X)
etc.
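For instance, the first of these follows in two steps, using the fact that both ~(S∧~S) and S∨~S are propositional tautologies and hence lie in OLC(X) for any X:

```latex
\lnot(S \land \lnot S) \in OLC(X)
  \;\Rightarrow\; P(S \lor \lnot S \mid X) = P(S \mid X) + P(\lnot S \mid X)
  \quad\text{(Axiom 2)}
\\
(S \lor \lnot S) \in OLC(X)
  \;\Rightarrow\; P(S \lor \lnot S \mid X) = 1
  \quad\text{(Axiom 1)}
\\
\therefore\; P(\lnot S \mid X) = 1 - P(S \mid X)
```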
This is still insufficient to calculate an actual probability value for any uncertain statement. Additional principles are required. For example, the Consistency Desideratum of Jaynes: "equivalent states of knowledge must be represented by the same probability values".
Definition: two statements A and B are indistinguishable relative to X iff there exists an isomorphism between OLC(X∪{A}) and OLC(X∪{B}) that is the identity on X and maps A to B.
[An isomorphism here is a one-to-one function f preserving all logical operations: f(A∨B)=f(A)∨f(B), f(~A)=~f(A), etc.]
Axiom 3: If A and B are indistinguishable relative to X, then P(A|X) = P(B|X).
Proposition: Let X be the set of statements representing my current mathematical knowledge, translated into F. Then the statements "millionth digit of PI is odd" and "millionth digit of PI is even" are indistinguishable relative to X.
Corollary: P(millionth digit of PI is odd | my current mathematical knowledge) = 1/2.
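As I read it, the corollary combines Axiom 3 with the fact that X knows the two statements to be mutually exclusive and exhaustive (a digit is odd or even, but not both). A sketch of the full step, with A = "millionth digit of PI is odd" and B = "millionth digit of PI is even":

```latex
P(A \mid X) + P(B \mid X) = P(A \lor B \mid X) = 1
  \quad\text{(Axioms 2 and 1)}
\\
P(A \mid X) = P(B \mid X)
  \quad\text{(Axiom 3, via the proposition)}
\\
\therefore\; P(A \mid X) = P(B \mid X) = \tfrac{1}{2}
```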
I agree with what you're trying to do, but I don't think this proposed construction does it. There are a lot of really complicated statements of propositional calculus which turn out to be either tautologically true or tautologically false, and I'd like to be able to speak of uncertainty of those statements as well.
Constructions like this (or like fuzzy logic) don't capture a principle I take to be self-evident when discussing bounded agents: new deductions don't instantly propagate globally. If I've deduced A and also deduced A=>B, I may not yet have deduced B. (All the more so when we make complicated examples.)
I don't think the construction actually requires instant propagation. It requires a certain calculation to be made when you wish to assign a probability to a particular statement, and this calculation provably terminates.
In your example, you are free to have X contain "A" and "A=>B" and not contain "B", as long as you don't assign a probability to B. When you wish to do so, you have to do the calculation, which will find that B∈OLC(X), and so will assign P(B)=1. Assigning any other value would indeed be inconsistent under any reasonable definition of probability: if you know that A=>B, you must have P(A)≤P(B), and if P(A)=1, then P(B) must also be 1.
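For this specific example, the deferred check can be spelled out directly. A toy sketch in Python (the encoding is mine): nothing is propagated when X is written down; only when P(B|X) is queried do we enumerate the truth assignments satisfying X and observe that B holds in all of them.

```python
from itertools import product

# X contains "A" and "A => B" but not "B"; no eager deduction happens.
def satisfies_X(a, b):
    return a and ((not a) or b)   # "A" holds, and "A => B" holds

# The finite check, run only when P(B|X) is requested:
models = [(a, b) for a, b in product([False, True], repeat=2)
          if satisfies_X(a, b)]

# B holds in every model of X, so B is in OLC(X) and Axiom 1 forces P(B|X)=1.
b_forced_true = all(b for (a, b) in models)
print(b_forced_true)  # True
```

The only model of X is (A=True, B=True), so any assignment other than P(B|X)=1 would contradict Axiom 1.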