Thanks for the feedback. I agree that reductio ad absurdum is the weakest of the examples I gave, but let me try to justify it anyway: if X is a fully general counterargument, then we can use it to argue against true statements as well as false ones. So applying X without any additional justification would lead to patently false conclusions, and therefore (by reductio ad absurdum) X is not a valid form of reasoning. Perhaps this is not the best word for it, but it resembles a pervasive idea in mathematics: when formulating possible approaches to proving a theorem, a key criterion is whether those approaches can distinguish the theorem from similar statements that are known or suspected to be false.
ETA: And yes, I agree that specific examples are good!
I'm also confused by a couple of minor points here:
The paper asks for a "probability distribution over models of L". For many languages L, though, the models of L form a proper class. Does this cause measure-theoretic difficulties? It seems like this might force mu to be zero on all sufficiently large models (otherwise you could do some sort of transfinite induction to get sets of arbitrarily large measure), but I'm not very good at crazy set-theory stuff.
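For what it's worth, one reading that would sidestep the proper-class issue (this is my guess, not something the paper states) is to identify models up to elementary equivalence, i.e. to put the distribution on complete consistent theories rather than on models. For countable L these form a set, in fact a closed subset of Cantor space:

```latex
S \;=\; \{\, T \subseteq \mathrm{Sent}(L) \;:\; T \text{ is complete and consistent} \,\}
\;\subseteq\; 2^{\mathrm{Sent}(L)}
```

On that reading, "probability distribution over models" just means a Borel probability measure on S, and no set-theoretic trouble arises. But maybe the authors intend something stronger than this.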
At one point the authors state "We would like P(forall phi in L' <blah>)". I thought we were in a first-order language and therefore couldn't quantify over propositions?
It's not immediately clear to me that this actually constructs a measure on the set of theories: that is, if S is the set of all complete consistent theories, it's not clear to me that the mu we construct via the martingale argument satisfies mu(S) = 1 (or even mu(S) != 0). Mightn't additivity break when we take the limit and pass from a finite bag of axioms to a whole theory?
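To spell out the worry (the phi_i and S_n here are just my notation, not the paper's): enumerate the sentences of L as phi_1, phi_2, ..., and let S_n be the theories whose verdicts on the first n sentences are jointly consistent. Each stage of the construction gives mu(S_n) = 1, and what I'd want is

```latex
\mu(S) \;=\; \mu\Bigl(\,\bigcap_{n=1}^{\infty} S_n \Bigr)
\;\stackrel{?}{=}\; \lim_{n \to \infty} \mu(S_n) \;=\; 1
```

The middle step is continuity from above, which is exactly what countable additivity would buy. So my question is really whether the construction yields a genuinely countably additive mu on the generated sigma-algebra, or only a finitely additive assignment on the cylinder sets.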