# Bayesian probability theory as extended logic -- a new result

I have a new paper that strengthens the case for strong Bayesianism, a.k.a. One Magisterium Bayes. The paper is entitled "From propositional logic to plausible reasoning: a uniqueness theorem." (The preceding link will be good for a few weeks, after which only the preprint version will be available for free. I couldn't come up with the $2500 that Elsevier makes you pay to make your paper open-access.)

Some background: E. T. Jaynes took the position that (Bayesian) probability theory is an extension of propositional logic to handle degrees of certainty -- and appealed to Cox's Theorem to argue that probability theory is the **only** viable such extension, "the unique consistent rules for conducting inference (i.e. plausible reasoning) of any kind." This position is sometimes called *strong Bayesianism*. In a nutshell, frequentist statistics is fine for reasoning about frequencies of repeated events, but that's a very narrow class of questions; most of the time when researchers appeal to statistics, they want to know what they can conclude with what degree of certainty, and that is an *epistemic* question for which Bayesian statistics is the right tool, according to Cox's Theorem.

You can find a "guided tour" of Cox's Theorem here (see "Constructing a logic of plausible inference"). Here's a very brief summary. We write *A | X* for "the reasonable credibility" (plausibility) of proposition *A* when *X* is known to be true. Here *X* represents whatever information we have available. We are **not** at this point assuming that *A | X* is any sort of probability. A system of plausible reasoning is a set of rules for evaluating *A | X*. Cox proposed a handful of intuitively-appealing, qualitative requirements for any system of plausible reasoning, and showed that these requirements imply that any such system is just probability theory in disguise. That is, there necessarily exists an order-preserving isomorphism between plausibilities and probabilities such that *A | X*, after mapping from plausibilities to probabilities, respects the laws of probability.

Here is one (simplified and not 100% accurate) version of the assumptions required to obtain Cox's result:

- *A | X* is a real number.
- *(A | X) = (B | X)* whenever *A* and *B* are logically equivalent; furthermore, *(A | X) ≤ (B | X)* if *B* is a tautology (an expression that is logically true, such as *(a or not a)*).
- We can obtain *(not A | X)* from *A | X* via some non-increasing function *S*. That is, *(not A | X) = S(A | X)*.
- We can obtain *(A and B | X)* from *(B | X)* and *(A | B and X)* via some continuous function *F* that is strictly increasing in both arguments: *(A and B | X) = F((A | B and X), (B | X))*.
- The set of triples *(x, y, z)* such that *x = (A | X)*, *y = (B | A and X)*, and *z = (C | A and B and X)* for some proposition *A*, proposition *B*, proposition *C*, and state of information *X*, is dense. Loosely speaking, this means that if you give me any *(x', y', z')* in the appropriate range, I can find an *(x, y, z)* of the above form that is arbitrarily close to *(x', y', z')*.
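To make the second and third requirements concrete, here is a sketch of my own (not from Cox or the paper) showing that ordinary probability satisfies them with the standard choices *S(p) = 1 − p* and *F(x, y) = x·y*:

```python
# Illustration: standard probability supplies a non-increasing S and a
# continuous, strictly increasing F, as Cox's requirements demand.
def S(p):
    """Negation rule: P(not A | X) = 1 - P(A | X)."""
    return 1 - p

def F(x, y):
    """Conjunction rule: P(A and B | X) = P(A | B and X) * P(B | X)."""
    return x * y

# S is non-increasing:
assert S(0.2) >= S(0.7)
# F is strictly increasing in each argument (on positive values):
assert F(0.6, 0.5) > F(0.4, 0.5) and F(0.6, 0.5) > F(0.6, 0.3)
# Example: P(A | B and X) = 0.5 and P(B | X) = 0.4 give P(A and B | X) = 0.2
assert abs(F(0.5, 0.4) - 0.2) < 1e-12
```

Cox's theorem then shows that, up to an order-preserving rescaling, these are essentially the *only* choices consistent with the qualitative requirements.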

My paper instead requires that certain properties *already true of propositional logic* continue to be true in our extended logic for plausible reasoning. Here are the alternative requirements:

- If *X* and *Y* are logically equivalent, and *A* and *B* are logically equivalent assuming *X*, then *(A | X) = (B | Y)*.
- We may define a new propositional symbol *s* without affecting the plausibility of any proposition that does not mention that symbol. Specifically, if *s* is a propositional symbol not appearing in *A*, *X*, or *E*, then *(A | X) = (A | (s ↔ E) and X)*.
- Adding irrelevant background information does not alter plausibilities. Specifically, if *Y* is a satisfiable propositional formula that uses no propositional symbol occurring in *A* or *X*, then *(A | X) = (A | Y and X)*.
- The implication ordering is preserved: if *A → B* is a logical consequence of *X*, but *B → A* is not, then *(A | X)* < *(B | X)*; that is, *A* is strictly less plausible than *B*, assuming *X*.

Note that we **do not** assume that *A | X* is a real number. Item 4 above assumes only that there is some *partial* ordering on plausibility values: in *some* cases we can say that one plausibility is greater than another.

We require *X* to be a propositional formula: all the background knowledge to which we have access is expressed in the form of logical statements. So, for example, if your background information is that you are tossing a six-sided die, you could express this by letting *s1* mean "the die comes up 1," *s2* mean "the die comes up 2," and so on; your background information *X* would then be a logical formula stating that exactly one of *s1*, ..., *s6* is true, that is,

*(s1 or s2 or s3 or s4 or s5 or s6) and*

*not (s1 and s2) and not (s1 and s3) and not (s1 and s4) and*

*not (s1 and s5) and not (s1 and s6) and not (s2 and s3) and*

*not (s2 and s4) and not (s2 and s5) and not (s2 and s6) and*

*not (s3 and s4) and not (s3 and s5) and not (s3 and s6) and*

*not (s4 and s5) and not (s4 and s6) and not (s5 and s6).*
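As a quick check of this example (my own illustration, not taken from the paper), we can enumerate the 2^6 truth assignments for *s1*, ..., *s6*, keep the rows where *X* ("exactly one of *s1*, ..., *s6* is true") holds, and count:

```python
from itertools import product

# Enumerate the truth table over s1..s6 and keep the rows satisfying X.
rows = [vals for vals in product([False, True], repeat=6)
        if sum(vals) == 1]              # X: exactly one of s1..s6 is true
n = len(rows)                           # rows where X holds
m = sum(1 for vals in rows if vals[0])  # rows where s1 and X both hold
print(f"P(s1 | X) = {m}/{n}")           # -> P(s1 | X) = 1/6
```

The count recovers the intuitive answer of 1/6 for each face, with no "principle of indifference" assumed anywhere.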

Now consider a truth table with one row for each possible assignment of truth values to the propositional symbols appearing in *A* or *X*. Let *n* be the number of rows in this table for which *X* evaluates true. Let *m* be the number of rows in this table for which both *A* and *X* evaluate true. If *P* is the function that maps plausibilities to probabilities, then *P(A | X) = m / n*.

For example, suppose that *a* and *b* are atomic propositions (not decomposable in terms of more primitive propositions), and suppose that we know only that at least one of them is true; what then is the probability that *a* is true? Start by enumerating all possible combinations of truth values for *a* and *b*:

1. *a* false, *b* false: *(a or b)* is false, *a* is false.
2. *a* false, *b* true: *(a or b)* is true, *a* is false.
3. *a* true, *b* false: *(a or b)* is true, *a* is true.
4. *a* true, *b* true: *(a or b)* is true, *a* is true.

In 3 of these cases (2, 3, and 4) *(a or b)* is true, and in 2 of those cases (3 and 4) *a* is also true. Therefore,

*P(a | a or b) = 2/3.*
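The same count can be verified by brute force (a sketch): enumerate the four rows of the truth table and apply *P(A | X) = m / n* with *X = (a or b)* and *A = a*.

```python
from itertools import product

# Count truth-table rows: n = rows where X holds, m = rows where A and X hold.
n = m = 0
for a, b in product([False, True], repeat=2):
    if a or b:      # X = (a or b) is true in this row
        n += 1
        if a:       # A = a is also true in this row
            m += 1
print(f"P(a | a or b) = {m}/{n}")   # -> P(a | a or b) = 2/3
```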

Compare this to the classical definition of probability, as Laplace stated it:

> The probability of an event is the ratio of the number of cases favorable to it, to the number of possible cases, when there is nothing to make us believe that one case should occur rather than any other, so that these cases are, for us, equally possible.

In our setting the "possible cases" are just the rows of the truth table satisfying the background information *X*. We can simply drop the problematic phrase "these cases are, for us, equally possible." The phrase "there is nothing to make us believe that one case should occur rather than any other" means that we possess no additional information that, if added to *X*, would expand by differing multiplicities the rows of the truth table for which *X* evaluates true.

## Comments (39)

I'm working on extending probability to predicate calculus and your work will be very precious, thanks!

If you haven't already, I would suggest you read Carnap's book, The Logical Foundations of Probability (there's a PDF of it somewhere online). As I recall, he ran into some issues with universally quantified statements -- they end up having zero probability in his system.

Cox's probability is essentially probability defined on a Boolean algebra (the Lindenbaum-Tarski algebra of propositional logic).

Kolmogorov's probability is probability defined on a sigma-complete Boolean algebra.

If I can show that quantifiers are related to sigma-completeness (quantifiers are adjunctions in the proper pair of categories, but I've yet to look into that), then I can probably lift the equivalence via the Loomis-Sikorski theorem back to the original algebras, and get exactly when a Cox probability can be safely extended to predicate logic.

That's the dream, anyway.

I'd be interested in reading what you come up with once you're ready to share it.

One thing you might consider is whether sigma-completeness is really necessary, or whether a weaker concept will do. One can argue that, from the perspective of constructing a logical system, only *computable* countable unions are of interest, rather than arbitrary countable unions.

I don't think that changes much about the core argument. Chapman wrote in Probability theory does not extend logic:

My response to Chapman is here: http://bayesium.com/wp-content/uploads/2017/07/commentary-on-meaningness.pdf

I do not know enough about logic to be able to evaluate the argument. But from the Outside View, I am inclined to be skeptical about David Chapman:

DAVID CHAPMAN

"Describing myself as a Buddhist, engineer, scientist, and businessman (...) and as a pop spiritual philosopher"

Web-book in progress: Meaningness

Tagline: Better ways of thinking, feeling, and acting—around problems of meaning and meaninglessness; self and society; ethics, purpose, and value.

EDWIN THOMPSON JAYNES

Professor of Physics at Washington University

Most cited works:

Information theory and statistical mechanics - 10K citations

Probability theory: The logic of science - 5K citations

The tone of David Chapman's refutation:

E. T. Jaynes (...) was completely confused about the relationship between probability theory and logic. (...) He got confused by the word “Aristotelian”—or more exactly by the word “non-Aristotelian.” (...) Jaynes is just saying “I don’t understand this, so it must all be nonsense.”

Something you are not taking into account is that Chapman was born a lot later. Any undergraduate physicist can tell you where Newton went wrong.

I think difference in date of birth (1922 vs ~1960) is less important than difference of date of publication (2003 vs ~2015).

On the Outside View, is criticism 12 years after publication more likely to be valid than criticism levelled immediately? I do not know. On one hand, science generally improves over time. On the other hand, if a particular work get the first criticism after many years, it could mean that the work is of higher quality.

So you don't know enough logic to evaluate Chapman's argument? Do you know enough logic to understand Yudkowsky's argument, then?

No, I do not know what Yudkowsky's argument is. Truth be told, I probably would be able to evaluate the arguments, but I have not considered it important. Should I look into it?

I care about whether "The Outside View" works as a technique for evaluating such controversies.

From the outside view, David Chapman is an MIT PhD who published papers on artificial intelligence.

From the outside view, I think AI credentials qualify a person more than physics credentials.

Thank you for pointing this out. I did not do my background check far enough back in time. This substantially weakens my case.

I am still inclined to be skeptical, and I have found another red flag. As far as I can tell, E. T. Jaynes is generally very highly regarded, and the only person who is critical of his book is David Chapman. This is just from doing a couple of searches on the Internet.

There are many people studying logic and probability. I would expect some of them would find it worthwhile to comment on this topic if they agreed with David Chapman.

Chapman doesn't criticise Jaynes directly, he criticises what he calls Pop Bayesianism.

I should clarify that I am referring to the section David Chapman calls "Historical appendix: Where did the confusion come from?". I read it as a criticism of both Jaynes and his book.

I don't think it's a good sign for a book if there isn't anybody to be found that criticizes it.

ksvanhorn's response that defends Jaynes still grants:

I think the view that Eliezer argues is that you can basically do all relevant reasoning with Bayes, not that you can't reason well about the properties of mathematical models with Bayes.

FWIW Loads of people criticise Jaynes' book all the time.

It's still a bad argument to judge a book based on the fact that one is unable to find criticism.

Could you post a link to a criticism similar to David Chapman?

The primary criticism I could find was the errata. From the Outside View, the errata looks like a number of mathematically minded people found it to be worth their time to submit corrections. If they had thought that E. T. Jaynes was hopelessly confused, they would not have submitted corrections of this kind.

I can't link to a criticism that makes the same points as Chapman, but my favourite criticism of Jaynes is the paper "Jaynes's maximum entropy prescription and probability theory" by Friedman and Shimony, criticising the MAXENT rule. It's behind a paywall, but there's an (actually much better) description of the same result in Section 5 of "The constraint rule of the maximum entropy principle" by Uffink. (It actually came out before PT:TLOS was published, but Jaynes' description of MAXENT doesn't change so the criticism still applies).

Yes! From the Outside View, this is exactly what I would expect substantial, well-researched criticism to look like. Appears very scientific, contains plenty of references, is peer-reviewed and published in "Journal of Statistical Physics" and has 29 citations.

Friedman and Shimony's criticism of MAXENT is in stark contrast to David Chapman's criticism of "Probability Theory".

FWIW, I think that David Chapman's criticism is correct as far as it goes, but I don't think that it's very damning. Propositional logic is indeed a "logic," and it's worthwhile enough for probability theory to extend it. Trying to look at predicate logic probabilistically would be interesting, but it's not necessary.

Chapman wasn't even attempting to write an original paper, and in fact points out early on that he is repeating well known (outside LW) facts.

I think it is a good sign for a *mathematics* book that there isn't anybody to be found that criticizes it *except people with far inferior credentials*.

Chapman on Twitter about the original post:

I've seen that article before, but can't quite understand it. Is there really a use for mixed sentences like "the probability that the probability that all ravens are black is 0.5 is 0.5"? It seems like both quantifiers and meta-probabilities are unnecessary, I can say all I want just by having a prior over states of the world with all its ravens. Relationships among multiple objects get folded into that as well.

Sure, but you can't actually hold the probability vector over all states with ravens. So you move up a level and summarize that set of probabilities to a smaller (and less precise) set.

All uncertainty is map, not territory. Anytime you are using probability, you're acknowledging that you're a limited calculator that cannot hold the complete state of the universe. If you could, you wouldn't need probability, you'd actually know the thing.

Meta-models are useful when specific models get cumbersome. Likewise meta-probability.

You don't need meta-probability to compress priors. For example, a uniform prior on [0,1] talks about an uncountable set of events, but its description is tiny and doesn't use meta-probabilities.

And it's a special case.

No, you can't. Not in practice.

It's the same deal as with AIXI -- quite omnipotent in theory, can't do much of anything in reality. Take a *real-life* problem and show me your prior over all the states of the world.

All priors are over the state of the world, just coarse-grained :-) So any practical application of Bayesian statistics should suffice for your request.

Any practical application does not give me an opportunity to "say all I want just by having a prior over states of the world" because it doesn't involve such a prior. A practical application sets out a model with some parameters and invites me to specify (preferably in a neat analytical form) the prior for these parameters.

How can you "have" an infinitely complex prior?

It doesn't have to be infinitely complex. Let's say there are only ten ravens and ten crows, each of which can be black or white. Chapman says I can't talk about them using probability theory because there are two kinds of objects, so I need meta-probabilities and quantifiers and whatnot. But I don't need any of that stuff, it's enough to have a prior over possible worlds, which would be finite and rather small.
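A sketch of that point (my own illustration): with 10 ravens, each black or white, a "prior over possible worlds" is just a distribution over 2^10 worlds, small enough to enumerate outright. (Crows can be omitted here, since a question about ravens alone marginalizes them out under a uniform prior.)

```python
from fractions import Fraction
from itertools import product

# Uniform prior over all 2**10 raven-color worlds (1 = black, 0 = white).
worlds = list(product([0, 1], repeat=10))
evidence = [w for w in worlds if any(w)]     # condition: at least one black raven
favorable = [w for w in evidence if all(w)]  # event: all ravens are black
p = Fraction(len(favorable), len(evidence))
print(p)  # -> 1/1023
```

No quantifiers or meta-probabilities appear; conditioning is just counting worlds.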

Only you need to keep switching priors to deal with one finite and small problem after another. Whatever that is, it is not *strong* Bayes.

That amounts to saying that Bayes works in finite, restricted cases, which no one is disputing. The thing is that your scheme doesn't work in the general case.

Suggest the paper be listed on library genesis. Or whichever service you choose.

Why is #4 above "less than" and not "less than or equal to"?

::thinks a bit::

What this is saying is, if there are logically possible worlds where A is false and B is true, but no logically possible worlds where A is true and B is false, then A is strictly less likely than B - that all logically possible worlds have nonzero probability. This is a pretty strong assumption...

Epistemic probabilities / plausibilities are not properties of the external world; they are properties of the information you have available. Recall that the premise X contains all the information you have available to assess plausibilities. If X does not rule out a possible world, what basis do you have for assigning it 0 probability? Put another way, how do you get to 100% confidence that this possible world is in fact impossible, when you have no information to rule it out?