Seems to be a typo here:
(s1 or s2 or s3 or s5 or s6) and not (s1 and s2) and not (s1 and s3) and not (s1 and s4) and not (s1 and s5) and not (s1 and s6) and not (s2 and s3) and not (s2 and s4) and not (s2 and s5) and not (s2 and s6) and not (s3 and s4) and not (s3 and s5) and not (s3 and s6) and not (s4 and s5) and not (s4 and s6) and not (s5 and s6).
I think you mean to add "or s4" on the first line:
(s1 or s2 or s3 or s4 or s5 or s6) and not (s1 and s2) and not (s1 and s3) and not (s1 and s4) and not (s1 and s5) and not (s1 and s6) and not (s2 and s3) and not (s2 and s4) and not (s2 and s5) and not (s2 and s6) and not (s3 and s4) and not (s3 and s5) and not (s3 and s6) and not (s4 and s5) and not (s4 and s6) and not (s5 and s6).
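The intended constraint is "exactly one of s1..s6 is true": at least one disjunct holds, and no two hold together. A quick brute-force check (a sketch, not anyone's actual code) confirms the formula matches that reading on all assignments:

```python
from itertools import product

def exactly_one(*s):
    # "at least one holds" and "no pair holds together"
    at_least_one = any(s)
    no_pair = all(not (s[i] and s[j])
                  for i in range(len(s)) for j in range(i + 1, len(s)))
    return at_least_one and no_pair

# Exhaustive check against a direct count over all 2^6 assignments
for bits in product([False, True], repeat=6):
    assert exactly_one(*bits) == (sum(bits) == 1)
print("formula matches 'exactly one' on all 64 assignments")
```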
I'm working on extending probability to predicate calculus and your work will be very precious, thanks!
If you haven't already, I would suggest you read Carnap's book, The Logical Foundations of Probability (there's a PDF of it somewhere online). As I recall, he ran into some issues with universally quantified statements -- they end up having zero probability in his system.
As I recall, he ran into some issues with universally quantified statements -- they end up having zero probability in his system.
Cox's probability is essentially probability defined on a Boolean algebra (the Lindenbaum-Tarski algebra of propositional logic).
Kolmogorov's probability is probability defined on a sigma-complete Boolean algebra.
If I can show that quantifiers are related to sigma-completeness (quantifiers are adjunctions in the proper pair of categories, but I've yet to look into that), then I can probably lift the equivalence via the Loomis-Sikorski theorem back to the original algebras, and get exactly when a Cox's probability can be safely extended to predicate logic.
That's the dream, anyway.
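A toy illustration (my own, not part of the program sketched above) of why quantifiers pull in sigma-completeness: an existential quantifier over a countable family of events behaves like a countable union, whose probability is a limit of finite approximations. For independent fair coin flips, "there exists an n such that flip n is heads" has probability 1 - lim_N (1-p)^N:

```python
# P(exists n <= N with heads) = 1 - (1-p)^N for independent flips;
# the quantified event is the countable union, reached only in the limit.
p = 0.5
probs = [1 - (1 - p) ** N for N in (1, 2, 10, 50)]
print(probs)  # increasing, approaching 1
```

Finite Boolean operations never reach the limiting value exactly, which is the gap sigma-completeness closes.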
I'd be interested in reading what you come up with once you're ready to share it.
One thing you might consider is whether sigma-completeness is really necessary, or whether a weaker concept will do. One can argue that, from the perspective of constructing a logical system, only computable countable unions are of interest, rather than arbitrary countable unions.
I don't think that changes much about the core argument. Chapman wrote in "Probability theory does not extend logic":
Probability theory can be viewed as an extension of propositional calculus. Propositional calculus is described as “a logic,” for historical reasons, but it is not what is usually meant by “logic.”
[...]
Probability theory by itself cannot express relationships among multiple objects, as predicate calculus (i.e. “logic”) can. The two systems are typically combined in scientific practice.
My response to Chapman is here: http://bayesium.com/wp-content/uploads/2017/07/commentary-on-meaningness.pdf .
I do not know enough about logic to be able to evaluate the argument. But from the Outside View, I am inclined to be skeptical about David Chapman:
DAVID CHAPMAN
"Describing myself as a Buddhist, engineer, scientist, and businessman (...) and as a pop spiritual philosopher"
Web-book in progress: Meaningness
Tagline: Better ways of thinking, feeling, and acting—around problems of meaning and meaninglessness; self and society; ethics, purpose, and value.
EDWIN THOMPSON JAYNES
Professor of Physics at Washington University
Most cited works:
Information theory and statistical mechanics - 10K citations
Probability theory: The logic of science - 5K citations
The tone of David Chapman's refutation:
E. T. Jaynes (...) was completely confused about the relationship between probability theory and logic. (...) He got confused by the word “Aristotelian”—or more exactly by the word “non-Aristotelian.” (...) Jaynes is just saying “I don’t understand this, so it must all be nonsense.”
Something you are not taking into account is that Chapman was born a lot later. Any undergraduate physicist can tell you where Newton went wrong.
I think difference in date of birth (1922 vs ~1960) is less important than difference of date of publication (2003 vs ~2015).
On the Outside View, is criticism 12 years after publication more likely to be valid than criticism levelled immediately? I do not know. On one hand, science generally improves over time. On the other hand, if a particular work gets its first criticism only after many years, it could mean that the work is of higher quality.
From the outside view, David Chapman is an MIT PhD who published papers on artificial intelligence.
From the outside view, I think AI credentials qualify a person more than physics credentials.
Thank you for pointing this out. I did not do my background check far enough back in time. This substantially weakens my case.
I am still inclined to be skeptical, and I have found another red flag. As far as I can tell, E. T. Jaynes is generally very highly regarded, and the only person who is critical of his book is David Chapman. This is just from doing a couple of searches on the Internet.
There are many people studying logic and probability. I would expect some of them would find it worthwhile to comment on this topic if they agreed with David Chapman.
As far as I can tell, E. T. Jaynes is generally very highly regarded, and the only person who is critical of his book is David Chapman.
I don't think it's a good sign for a book if there isn't anybody to be found that criticizes it.
ksvanhorn's response that defends Jaynes still grants:
I agree with Chapman that probability theory does not extend the predicate calculus. I had thought this too obvious to mention, but perhaps it needs emphasizing for people who haven’t studied mathematical logic. Jaynes, in particular, was not versed in mathematical logic, so when he wrote about “probability theory as extended logic” he failed to properly identify which logic it extended.
[...]
My view is that the role of the predicate calculus in rationality is in model building. It gives us the tools to create mathematical models of various aspects of our world, and to reason about the properties of these models. The predicate calculus is indispensable for doing mathematics.
I think the view that Eliezer argues is that you can basically do all relevant reasoning with Bayes, not that you can't reason well about the properties of mathematical models with Bayes.
It's still a bad argument to judge a book based on the fact that one is unable to find criticism.
Could you post a link to a criticism similar to David Chapman?
The primary criticism I could find was the errata. From the Outside View, the errata looks like a number of mathematically minded people found it to be worth their time to submit corrections. If they had thought that E. T. Jaynes was hopelessly confused, they would not have submitted corrections of this kind.
I can't link to a criticism that makes the same points as Chapman, but my favourite criticism of Jaynes is the paper "Jaynes's maximum entropy prescription and probability theory" by Friedman and Shimony, criticising the MAXENT rule. It's behind a paywall, but there's an (actually much better) description of the same result in Section 5 of "The constraint rule of the maximum entropy principle" by Uffink. (It actually came out before PT:TLOS was published, but Jaynes' description of MAXENT doesn't change so the criticism still applies).
Yes! From the Outside View, this is exactly what I would expect substantial, well-researched criticism to look like. Appears very scientific, contains plenty of references, is peer-reviewed and published in "Journal of Statistical Physics" and has 29 citations.
Friedman and Shimony's criticism of MAXENT is in stark contrast to David Chapman's criticism of "Probability Theory".
FWIW I think that David Chapman's criticism is correct as far as it goes, but I don't think that it's very damning. Propositional logic is indeed a "logic" and it's worthwhile enough for probability theory to extend it. Trying to look at predicate logic probabilistically would be interesting, but it's not necessary.
Chapman wasn't even attempting to write an original paper, and in fact points out early on that he is repeating well known (outside LW) facts.
I don't think it's a good sign for a book if there isn't anybody to be found that criticizes it.
I think it is a good sign for a Mathematics book that there isn't anybody to be found that criticizes it except people with far inferior credentials.
As far as I can tell, E. T. Jaynes is generally very highly regarded, and the only person who is critical of his book is David Chapman.
Chapman doesn't criticise Jaynes directly, he criticises what he calls Pop Bayesianism.
I should clarify that I am referring to the section David Chapman calls: "Historical appendix: Where did the confusion come from?". I read it as a criticism of both Jaynes and his book.
I do not know enough about logic to be able to evaluate the argument.
Chapman's argument? Do you know enough logic to understand Yudkowsky's argument, then?
No, I do not know what Yudkowsky's argument is. Truth be told, I probably would be able to evaluate the arguments, but I have not considered it important. Should I look into it?
I care about whether "The Outside View" works as a technique for evaluating such controversies.
Chapman on Twitter about the original post:
Not relevant to the propositional vs predicate issue I wrote about, but looks like an interesting alternative approach to Cox’s result.
I've seen that article before, but can't quite understand it. Is there really a use for mixed sentences like "the probability that the probability that all ravens are black is 0.5 is 0.5"? It seems like both quantifiers and meta-probabilities are unnecessary; I can say all I want just by having a prior over states of the world with all its ravens. Relationships among multiple objects get folded into that as well.
Sure, but you can't actually hold the probability vector over all states with ravens. So you move up a level and summarize that set of probabilities to a smaller (and less precise) set.
All uncertainty is map, not territory. Anytime you are using probability, you're acknowledging that you're a limited calculator that cannot hold the complete state of the universe. If you could, you wouldn't need probability, you'd actually know the thing.
Meta-models are useful when specific models get cumbersome. Likewise meta-probability.
It doesn't have to be infinitely complex. Let's say there are only ten ravens and ten crows, each of which can be black or white. Chapman says I can't talk about them using probability theory because there are two kinds of objects, so I need meta-probabilities and quantifiers and whatnot. But I don't need any of that stuff, it's enough to have a prior over possible worlds, which would be finite and rather small.
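The ten-ravens-and-ten-crows setup is small enough to write down directly. As a sketch (my own construction, not the commenter's): a possible world is a 20-tuple of booleans, and both universally quantified and relational statements become ordinary events over the uniform prior:

```python
from itertools import product
from fractions import Fraction

N_RAVENS, N_CROWS = 10, 10

def prob(event):
    # probability of an event under the uniform prior over all 2^20 worlds
    total = hits = 0
    for world in product([False, True], repeat=N_RAVENS + N_CROWS):
        ravens, crows = world[:N_RAVENS], world[N_RAVENS:]
        total += 1
        hits += event(ravens, crows)
    return Fraction(hits, total)

# A universally quantified statement is just an event over worlds:
p_all_black = prob(lambda r, c: all(r))
print(p_all_black)  # 1/1024
# So is a statement relating the two kinds of objects:
p_more = prob(lambda r, c: sum(r) > sum(c))
print(p_more)
```

No meta-probabilities or quantifier machinery appear anywhere; the quantifiers are absorbed into the event definitions.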
Only, you need to keep switching priors to deal with one finite and small problem after another. Whatever that is, it is not strong Bayes.
That amounts to saying that Bayes works in finite, restricted cases, which no one is disputing. The thing is that your scheme doesn't work in the general case.
I can say all I want just by having a prior over states of the world with all its ravens
No, you can't. Not in practice.
It's the same deal as with AIXI -- quite omnipotent in theory, can't do much of anything in reality. Take a real-life problem and show me your prior over all the states of the world.
All priors are over the state of the world, just coarse-grained :-) So any practical application of Bayesian statistics should suffice for your request.
So any practical application of Bayesian statistics should suffice for your request.
Any practical application does not give me an opportunity to "say all I want just by having a prior over states of the world" because it doesn't involve such a prior. A practical application sets out a model with some parameters and invites me to specify (preferably in a neat analytical form) the prior for these parameters.
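The shape being described, a parametric model plus a prior over its parameters rather than a prior over raw world-states, can be illustrated with a hypothetical beta-binomial example (the names and numbers here are mine, chosen for illustration):

```python
# Model: coin with unknown bias theta; prior: Beta(a, b) over theta.
# Conjugacy gives the posterior in neat analytical form:
# Beta(a, b) prior + (heads, tails) data -> Beta(a + heads, b + tails).
def beta_binomial_posterior(a, b, heads, tails):
    return a + heads, b + tails

a, b = beta_binomial_posterior(1, 1, heads=7, tails=3)
posterior_mean = a / (a + b)   # (1 + 7) / (1 + 7 + 1 + 3) = 8/12
print(posterior_mean)
```

The prior here is over a single parameter theta, not over states of the world; that is the contrast the comment is drawing.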
Why is #4 above "less than" and not "less than or equal to"?
::thinks a bit::
What this is saying is, if there are logically possible worlds where A is false and B is true, but no logically possible worlds where A is true and B is false, then A is strictly less likely than B - that all logically possible worlds have nonzero probability. This is a pretty strong assumption...
Epistemic probabilities / plausibilities are not properties of the external world; they are properties of the information you have available. Recall that the premise X contains all the information you have available to assess plausibilities. If X does not rule out a possible world, what basis do you have for assigning it 0 probability? Put another way, how do you get to 100% confidence that this possible world is in fact impossible, when you have no information to rule it out?
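A toy model of this point (my own, using made-up worlds): if every world not ruled out by X gets nonzero probability, then "A's worlds are a strict subset of B's worlds" forces P(A) < P(B) strictly, which is exactly assumption #4:

```python
from fractions import Fraction

worlds = ["w1", "w2", "w3"]
prior = {w: Fraction(1, 3) for w in worlds}   # every possible world nonzero

A = {"w1"}            # A true only in w1
B = {"w1", "w2"}      # B true wherever A is, plus in w2

def P(event):
    return sum(prior[w] for w in event)

assert A < B          # strict subset: no world has A true and B false
print(P(A), P(B))     # 1/3 2/3 -- strictly less, not merely <=
```

If w2 were instead assigned probability 0, we would get P(A) = P(B), which is why the strict inequality encodes "no possible world is ruled out for free".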
I have a new paper that strengthens the case for strong Bayesianism, a.k.a. One Magisterium Bayes. The paper is entitled "From propositional logic to plausible reasoning: a uniqueness theorem." (The preceding link will be good for a few weeks, after which only the preprint version will be available for free. I couldn't come up with the $2500 that Elsevier makes you pay to make your paper open-access.)
Some background: E. T. Jaynes took the position that (Bayesian) probability theory is an extension of propositional logic to handle degrees of certainty -- and appealed to Cox's Theorem to argue that probability theory is the only viable such extension, "the unique consistent rules for conducting inference (i.e. plausible reasoning) of any kind." This position is sometimes called strong Bayesianism. In a nutshell, frequentist statistics is fine for reasoning about frequencies of repeated events, but that's a very narrow class of questions; most of the time when researchers appeal to statistics, they want to know what they can conclude with what degree of certainty, and that is an epistemic question for which Bayesian statistics is the right tool, according to Cox's Theorem.
You can find a "guided tour" of Cox's Theorem here (see "Constructing a logic of plausible inference"). Here's a very brief summary. We write A | X for "the reasonable credibility" (plausibility) of proposition A when X is known to be true. Here X represents whatever information we have available. We are not at this point assuming that A | X is any sort of probability. A system of plausible reasoning is a set of rules for evaluating A | X. Cox proposed a handful of intuitively-appealing, qualitative requirements for any system of plausible reasoning, and showed that these requirements imply that any such system is just probability theory in disguise. That is, there necessarily exists an order-preserving isomorphism between plausibilities and probabilities such that A | X, after mapping from plausibilities to probabilities, respects the laws of probability.
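To make the "order-preserving isomorphism" concrete, here is a hedged illustration (not from the paper): suppose a plausibility scale that happens to be odds, o = p/(1-p). The monotone map o -> o/(1+o) recovers probabilities, so a calculus stated in odds is probability theory under a re-labelling of the scale:

```python
# Odds are one plausibility scale; the map below is order-preserving
# and sends them to probabilities in [0, 1].
def odds_to_prob(o):
    return o / (1 + o)

odds = [0.25, 1.0, 3.0]
probs = [odds_to_prob(o) for o in odds]
print(probs)                     # [0.2, 0.5, 0.75]
assert probs == sorted(probs)    # order of plausibilities is preserved
```

Cox's theorem says much more than this one example, of course; the point is only what "probability theory in disguise" means.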
Here is one (simplified and not 100% accurate) version of the assumptions required to obtain Cox's result: