"Mathematical logic is the science of algorithm evaluating algorithms."

Do you think that this is an overly generalizing, far-fetched proposition or an almost trivial statement? Wait, don't cast your vote before the end of this short essay!

It is hard to dispute that logic is the science of drawing correct conclusions. It studies theoretically falsifiable rules that lead to derivations which are verifiable in a finite number of mechanical steps, even by machines.

Let's dig a bit deeper, starting with the "drawing correct conclusions" part. It implies that logic deals both with abstract rules ("drawing") and with their meaning ("conclusions").

Logic is not just about the mindless following of certain rules (that's algebra :P); its conclusions must have truth values that refer to some "model". Take, for example, De Morgan's law:

not (a and b) = (not a) or (not b).

It can be checked for each of the four possible substitutions of boolean values: a = false, b = false; a = false, b = true; .... If we agree upon the standard meaning of the logical not, or, and and operators, then we must conclude that De Morgan's rule is perfect. On the other hand, the similar-looking rule

not (a and b) = (not a) and (not b)

can easily be refuted by evaluating it on the counterexample a = false, b = true.
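To make the "finite number of mechanical steps" fully concrete, here is a minimal Python sketch (my own illustration, not part of the original argument) that verifies De Morgan's law on all four substitutions and refutes the broken variant on the counterexample:

```python
from itertools import product

# Exhaustively check De Morgan's law over all four boolean substitutions.
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))  # holds in every case

# The similar-looking but wrong rule is refuted by a single counterexample.
a, b = False, True
print((not (a and b)) == ((not a) and (not b)))  # prints False
```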

Generally: in any useful mathematical system, logical conclusions should work in some interesting model.

However, in general, total verifiability is way too much to ask. As Karl Popper pointed out, one must often be satisfied with the falsifiability of scientific statements as a criterion. For example, the following logical rule

not (for each x: F(x)) <=> exists x: not F(x)

is impossible to check for every formula F. Statements that are not directly checkable include all those where the set of all possible substitutions is (potentially) infinite.

This observation can be formalized by saying that a mapping from the abstract to the concrete is required. Made precise, this is exactly what logicians study: the connection between axiom systems and their models.

But wait a minute: isn't there something fishy here? How can the process of formalization itself be formalized? Isn't this some kind of circular reasoning? In fact, it is deeply circular, on several levels. The most popular way of dealing with this Gordian knot is simply to cut it by using some kind of naive set theory in which the topmost level of arguments is carried out.

This may be good enough for educational purposes, but the following questions should always be asked: What is the basis of those top-level rules? Could there be any mistake there? Falsifiability always implies an (at least theoretical) possibility that our rules are wrong, even at the topmost level. Does using a meta-level set theory mean that, at least there, there is some unquestionable rule we have to accept as God-given?

Fortunately, the falsifiability of axioms has another implication: only a simple, discrete, and finite process is required to refute them: an axiom or rule is either falsified or not. Checking counterexamples is like experimental physics: any violation must be observable and reproducible. There are no fuzzy, continuous measurements here, only discrete manipulations. If no mistakes were made and some counterexample is found, then one of the involved logical rules or axioms had to be wrong.

Let's squint our eyes a bit and look at the whole topic from a different perspective. In the traditional view, axiom systems are considered to be sets of rules that allow for drawing conclusions. This can also be rephrased as: axiom systems can be cast into programs that take chains of arguments as input and test them for correctness.

This seems good enough for the formal rules, but what about the semantics (their meaning)?

In order to define the semantics, there needs to be a map to something else that is formally checkable and ruled by symbol manipulation, which is just information processing again. Following that path, we end up with the answer: a logical system is a program that checks that certain logical statements hold for the behavior of another program (the model).

This is just the first simplification. We will see how the notions of "check", "logical statement", and "holds" can also be dropped and replaced by something more generic and natural, but first let's get concrete and look at the two most basic examples:

  1. Logical Formulas: The model is the set of all logical formulas given in terms of the binary and, or and the unary not functions. The axiom system consists of a few logical rules like the commutativity, associativity and distributivity of and and or, the De Morgan laws, as well as the double-negation rule not (not a) = a. (The exact choice of the axiom system is somewhat arbitrary and is not really important here.) This traditional description can be turned into: the model is a program that takes boolean formulas as input and evaluates them on given (input) substitutions. The axiom system can be turned into a program that, given a chain of derivations of equalities of boolean formulas, checks that each step is a rewriting according to one of the predetermined axioms, "proving" the equality of the formulas at the beginning and the end of the chain. Note that given two supposedly equal boolean formulas ("equality proven using the axioms"), a straightforward loop around the model could check that those formulas are really equivalent, and therefore our anticipated semantic relationship between the axiom system and its model is clearly falsifiable. (A minimal sketch of this loop follows the list.)
  2. Natural numbers: Our model is the set of all arithmetic expressions using +, *, - on natural numbers, predicates using < and = on arithmetic expressions, and any logical combination of such predicates. For the axiom system, we can choose the set of Peano axioms. Again, we can turn the model into a program by evaluating any valid formula in the model. The axiom system can again be turned into a program that checks the correctness of chains of logical derivations. Although we cannot verify the correctness of every Peano formula in the model by substituting each possible value, we can still have an infinite loop that arrives at every substitution within a finite number of steps. That is: falsifiability still holds.
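As a concrete illustration of the "straightforward loop around the model" from example 1, here is a hedged Python sketch; representing formulas as plain Python functions is my own choice, not something prescribed above:

```python
from itertools import product

def equivalent(f, g, num_vars):
    """Loop around the model: evaluate both formulas on every substitution.

    f and g stand for boolean formulas, encoded here simply as Python
    functions of num_vars boolean arguments.  Any equality that the axiom
    system "proved" is falsifiable: a violation would show up at some
    concrete substitution.
    """
    for values in product([False, True], repeat=num_vars):
        if f(*values) != g(*values):
            return False, values  # counterexample found
    return True, None

# De Morgan's law passes the check; the broken variant is refuted.
print(equivalent(lambda a, b: not (a and b),
                 lambda a, b: (not a) or (not b), 2))   # (True, None)
print(equivalent(lambda a, b: not (a and b),
                 lambda a, b: (not a) and (not b), 2))  # (False, (False, True))
```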

The above two examples can easily be generalized to the statement: "A logical system is a program that checks that certain kinds of logical statements can be derived for the behavior of another program (the model)."

Let us simplify this a bit further. We can replace the checking part altogether by noticing that, given a statement, the axiom system's checker program can loop over all possible chains of derivations for the statement and for its negation. If that program stops, then the logical correctness of the statement (or of its negation) is established; otherwise the statement can neither be proven nor refuted from those axioms. (That is: it is independent of those axioms.)
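Here is a hedged Python sketch of that search; all_derivations and checks_derivation are hypothetical stand-ins, since no concrete proof system is fixed in this essay:

```python
def decide(statement, negation, all_derivations, checks_derivation):
    """Search for a derivation of the statement or of its negation.

    all_derivations() is assumed to enumerate every finite chain of
    derivations exactly once; checks_derivation(chain, s) is assumed to
    verify mechanically that the chain derives s from the axioms.  If the
    statement is independent of the axioms, this loop never halts.
    """
    for chain in all_derivations():
        if checks_derivation(chain, statement):
            return True   # the statement was proved
        if checks_derivation(chain, negation):
            return False  # the negation was proved
```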

Therefore, we end up just saying: "A logical system is a program that correctly evaluates whether a certain logical statement holds for the behavior of another program (whenever the evaluator program halts)."

Unfortunately, we still have the relatively fuzzy "logical statement" term in our description. Is this necessary?

In fact, quantifiers in logical statements can easily be replaced by loops around the evaluating program that check the corresponding substitutions. Functions and relations can be resolved similarly. So we can extend the model program from a simple substitution method to one searching for some solution, by adding suitable loops around it. The main problem is that those loops may be infinite. Still, they always loop over a countable set: whenever there is a matching substitution, the search program will find it. We have at least falsifiability, again. For example, the statement of Fermat's Last Theorem is equivalent to the statement that the program searching for a solution of its equation never stops.
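For instance, a minimal Python sketch of such a search loop for Fermat's equation (my own illustration; the particular enumeration order is one arbitrary choice among many):

```python
from itertools import count

def fermat_search():
    """Search for a counterexample a**n + b**n == c**n with n > 2.

    The tuples (a, b, c, n) are enumerated by an increasing bound, so every
    tuple is reached after finitely many steps.  Fermat's Last Theorem is
    equivalent to the claim that this loop never returns.
    """
    for bound in count(3):
        for n in range(3, bound + 1):
            for a in range(1, bound + 1):
                for b in range(1, bound + 1):
                    for c in range(1, bound + 1):
                        if a ** n + b ** n == c ** n:
                            return (a, b, c, n)
```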

In short: the statement "logical statement S holds for a program P" can always be replaced by either "program P' stops" or "program P' does not stop" (where P' is a suitable program using P as a subroutine, depending on the logical statement). That is, we finally arrive at our original statement:

"Mathematical logic is the science of algorithm evaluating algorithms [with the purpose making predictions on their (stopping) behavior.]"

Simple enough, isn't it? But can this be argued backward? Can the stopping problem always be recast as a model-theoretic problem on some model? In fact, it can. Logic is powerful, and the semantics of the workings of programs is easily axiomatized. There really is a relatively straightforward one-to-one correspondence between model theory and algorithms taking programs as arguments to predict their (stopping) behavior.

Still, what can be gained by taking such an algorithmic view?

First of all, it has a remarkable symmetry that is not explicitly apparent from the traditional viewpoint: it is much less important which program is the model and which is the "predictor". Prediction goes both ways: the roles of the programs are mostly interchangeable. The distinction between concrete and abstract vanishes.

Another point is the conceptual simplicity: the need for assuming a meta-system vanishes. We treat algorithmic behavior as the single source of everything and look for symmetric correlations between the behavior of programs, instead of postulating higher and higher levels of meta-theories.

Also, the algorithmic view has quite a bit of simplifying power due to its generality:

Turing's halting theorem is conceptually very simple. (It seems almost too simple to be interesting.) Goedel's theorem, on the other hand, looks more technical and involved. Still, by the above correspondence, Turing's halting theorem is basically just a more general version of Goedel's theorem. By the correspondence between the algorithmic and the logical view, Turing's theorem can be translated to: every sufficiently general axiom system (one corresponding to a Turing-complete language) has at least one undecidable statement (an input program for which the checking program does not stop). The only technically involved part of Goedel's theorem is to check that its corresponding program is Turing-complete. However, with the right goal in mind, that is not hard to check at all.

11 comments

You are looking for essences at every turn, asking what certain things "really are", but the world doesn't work like that. There are often multiple explanations (interpretations), none of them "more real" than the others. And in math that is more apparent than elsewhere, so morphisms of every kind between all kinds of structures get introduced to describe particular explanations: relations between different things that show in what way the properties of one thing tell something about the other.

Exactly. There are enough connections within mathematics that almost any field can be seen as an abstruse development of almost any other field, and indeed, most mathematicians have the belief that their field is the most truly fundamental one. What we just saw here is a computer scientist looking to suborn mathematical logic as a special case; naturally, the opposite interpretation is just as valid.

You seem to be limiting yourself to countably infinite models when you say you can (possibly infinitely) loop over all the items in checking quantifications.

I like your theme/intuition (aesthetically; in terms of it bearing fruit, I don't see it yet), but the meat of this post is riddled with unclear (to me) statements. It would help if we could see a more formal version.

I expected the reaction with the countably infinite models, but I did not expect it to be the first. ;)

I wanted to get into that in the write-up, but I had to stop at some point. The argument is that in order to have scientific theories, we need falsifiability, which means that we always necessarily deal with a discrete projection of the physical world. On the other hand, so far every discrete manifestation of a physical system has seemed to be modellable by Turing machines. (This assumption is called the Church thesis.) If you add these two, then you arrive at the above conclusion.

Another reason it is not a problem is that every (first-order) axiom system has a countable model, by the Löwenheim-Skolem theorem. (Yes, even the theory of the real numbers has a countable model, which is also known as the Skolem paradox.)

Actually, I don't think that the technical content of the write-up is novel; it is probably something that was already clear to Turing, Church, Goedel and von Neumann in the 1940s and 50s. IMO, the takeaway is a certain, more pragmatic way of thinking about logic in the age of information processing, instead of sticking to an outdated intuition. Another takeaway is the explicit recognition that the domains of mathematical logic and AI are much more directly connected than it would naively seem.

I know that material, but couldn't figure out how your informal descriptions map to it in every case. (When following a proof, I like to make sure I understand every step, and fix problems before proceeding). If this isn't intended as new, then I won't sweat it.

The overall message is not really new technically, but its philosophical implications are somewhat surprising, even for mathematicians. In general, looking at the same thing from different angles helps one acquire a more thorough understanding, even if it does not necessarily provide a clear short-term benefit.

A few years ago, I chatted with a few very good mathematicians who were not aware of the (relatively straightforward) equivalence between the theorems of Turing and Goedel, but they could see it within a few minutes and had no problem grasping the inherent, natural connection between logical systems and program-evaluating algorithms.

I think that there is quite a bit of mysticism and misconception regarding mathematical logic, set theory, etc. among the general public. I think that it is valuable to put them in a context that clarifies their real, inherently algorithmic nature. It may also be helpful for cleaning up one's own world view.

I am sorry if my account was unclear at points, and I am glad to provide clarification on any of them.

An admirable response.

Here's the first place you lost me:

In order to define the semantics, there needs to be a map to something else that is formally checkable and ruled by symbol manipulation, which is just information processing again. Following that path, we end up with the answer: a logical system is a program that checks that certain logical statements hold for the behavior of another program (the model).

This isn't precise enough for me to agree that it's true. Is it a claim? A new definition?

Next:

For the axiom system, we can choose the set of Peano axioms.

I took "axiom system" to mean a set of axioms and rules for deducing their consequences (I know you probably mean the usual first-order logic deduction rules).

Although we cannot verify the correctness of every Peano formula in the model by substituting each possible value, we can still have an infinite loop that arrives at every substitution within a finite number of steps.

What do you mean? Are you talking about some process other than the proof-checking program?

I'll wait for clarifications on those points before proceeding.

In order to define the semantics,... This isn't precise enough for me to agree that it's true. Is it a claim? A new definition?

First: sorry for the bad grammar! Let me start with rephrasing the first sentence a bit more clearly:

"In order to define semantics, we need to define a map between the logic to the model ...."

It is correct that this description constrains semantics to maps between symbolically checkable systems. Physicists may not agree with this view and could say: "For me, semantics is a mapping from a formal system to a physical system, which could be continuous or for which the Church thesis does not hold."

Model theoreticians, however, define their notion of semantics between logical systems only. Therefore I think I am in pretty good agreement with their nomenclature.

The message is that even if we consider a continuous system mathematically (for example, the real numbers), only those of its aspects matter which can be captured by discrete information-processing methods. What I argue here is that this implicit projection to the discrete is best made explicit by viewing it from an algorithmic point of view.

Although we cannot verify the correctness ... What do you mean? Are you talking about some process other than the proof-checking program?

It is more like model checking: given a statement like "For each x: F(x)", since your input domain is countable, you can just loop over all substitutions. This program will never stop if the statement is true, but you can at least refute the statement in a finite number of steps if it is false. This is why I consider falsifiability important.

I agree that there is also a catch: this only works directly for simple expressions with nothing but "for each" quantifiers, not for more complex logical expressions like the twin prime conjecture, which can be cast as:

foreach x: exists y > x: prime(y) and prime(y+2)

Still, this expression can be cast into the form "T(x) halts for every input x", where T is the program that searches for the next pair of twin primes both bigger than x.
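Concretely, such a T could look like this minimal sketch (naive trial division is just one possible choice):

```python
def T(x):
    """Search for the next twin prime pair with both members greater than x.

    The twin prime conjecture is equivalent to: T halts for every input x.
    """
    def is_prime(k):
        if k < 2:
            return False
        d = 2
        while d * d <= k:
            if k % d == 0:
                return False
            d += 1
        return True

    p = x + 1
    while True:
        if is_prime(p) and is_prime(p + 2):
            return (p, p + 2)
        p += 1
```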

But what about longer sequences of quantifiers, like

f = foreach a: exists b: foreach c: exists d: .... F(a,b,c,d,...)

Can this still be cast into the form "T(x) halts for every input x"? If it could not, then we would need to add another layer of logic around the stopping of the Turing machine, which would defeat our original purpose.

In fact, there is a neat trick to avoid that: you can define a program T'(n) which takes a single natural number and checks the validity of (not f) for every substitution of total length < n. Then f is equivalent to the statement: T'(n) halts for every natural number n.

Which means we manage to hide the cascade of quantifiers within the algorithm T'. Cool, huh?

[This comment is no longer endorsed by its author]

Logic is not just about the mindless following of certain rules (that's algebra :P); its conclusions must have truth values that refer to some "model".

Seems like it would make exactly as much sense to say the opposite; what use is algebra if the quantities in your equation don't refer to some model?

Of course, if you're exploring algebra for its own ends, you can just leave everything abstract... but then again, you can also do that with logic.

I kept waiting for the rationality-related punchline for this post, but it never came.
