
Comment author: Jonathan_Graehl 23 February 2012 05:50:58AM 2 points [-]

An admirable response.

Here's the first place you lost me:

In order to define the semantics, there need to be map to something else formally checkable, ruled by symbolics, which is just information processing, again. Following that path, we end up with with the answer: A logical system is a program that checks that certain logical statements hold for the behavior of another program (model).

This isn't precise enough for me to agree that it's true. Is it a claim? A new definition?


For the axiom system, we can choose the set of Peano axioms.

I took "axiom system" to mean a set of axioms and rules for deducing their consequences (I know you probably mean the usual first-order logic deduction rules).

Although we can not check verify the correctness of every Peano formula in the model by substituting each possible value, we still can have an infinite loop where we could arrive at every substitution within a finite amount of steps.

What do you mean? Are you talking about some process other than the proof-checking program?

I'll wait for clarifications on those points before proceeding.

Comment author: Christian_Szegedy 24 February 2012 02:47:37AM *  0 points [-]

In order to define the semantics,... This isn't precise enough for me to agree that it's true. Is it a claim? A new definition?

First: sorry for the bad grammar! Let me start with rephrasing the first sentence a bit more clearly:

"In order to define semantics, we need to define a map between the logic to the model ...."

It is correct that this description constrains semantics to maps between symbolically checkable systems. Physicists may not agree with this view and could say: "For me, semantics is a mapping from a formal system to a physical system that could be continuous or to which the Church thesis does not hold."

Model theoreticians, however, define their notion of semantics between logical systems only. Therefore I think I am in pretty good agreement with their nomenclature.

The message is that even if we consider a continuous system mathematically (for example, the real numbers), only those of its aspects matter that can be captured by discrete information-processing methods. What I argue here is that this implicit projection to the discrete is best made explicit by viewing it from an algorithmic point of view.

Although we can not check verify the correctness ... .. What do you mean? Are you talking about something other process than the proof checking program?

It is more model checking: Given a statement like "For each x: F(x)", since your input domain is countable, you can just loop over all substitutions. This program will never stop if the statement is true, but you can at least refute the statement in a finite number of steps if it is false. This is why I consider falsifiability important.
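To make that loop concrete, here is a minimal Python sketch of the refutation loop (the predicate and domain below are illustrative choices, not anything from the discussion itself):

```python
from itertools import count

def refute_universal(F, domain=None):
    """Loop over a countable domain looking for a counterexample to
    'for each x: F(x)'.  If the statement is true, this loop never
    stops; if it is false, it refutes the statement in finitely
    many steps."""
    for x in count() if domain is None else domain:
        if not F(x):
            return x  # counterexample found: statement refuted

# A false universal claim is refuted quickly:
print(refute_universal(lambda x: x * x >= 2 * x))  # finds x = 1
```

If F happens to hold on the whole domain, the call simply never returns, which is exactly the falsifiability-without-verifiability asymmetry described above.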

I agree that there is also a caveat: this works directly only for simple expressions with universal quantifiers alone, not for more complex logical expressions like the twin prime conjecture, which can be cast as:

foreach n: exists y: y > n and prime(y) and prime(y+2)

Still, this expression can be cast into the form "T(n) halts for every input n", where T is the program that searches for the next pair of twin primes both bigger than n.
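As an illustration, the program T can be sketched in a few lines of Python (the trial-division primality test is just the simplest choice):

```python
def is_prime(m):
    """Trial-division primality test -- the simplest correct choice."""
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m ** 0.5) + 1))

def T(n):
    """Search for the next twin prime pair (y, y+2) with y > n.
    The twin prime conjecture says exactly: T(n) halts for every n."""
    y = n + 1
    while not (is_prime(y) and is_prime(y + 2)):
        y += 1
    return y, y + 2

print(T(10))  # (11, 13)
```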

But what about longer sequences of quantifiers, like

f = foreach a: exists b: foreach c: exists d: .... F(a,b,c,d,...)

Can this still be cast into the form "T(x) halts for every input x"? If it could not, we would need to add another layer of logic around the halting of Turing machines, which would defeat our original purpose.

In fact, there is a neat trick to avoid that: you can define a program T'(n) which takes a single natural number and checks the validity of f for every substitution of total length < n. Then f is equivalent to the statement: T'(n) halts for every natural number n.

Which means we managed to hide the cascade of quantifiers inside the algorithm T'. Cool, huh?
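A sketch of the construction for the two-quantifier case (assuming the bound n applies to the outer universal variable while the inner witness search stays unbounded; this reading of T' is an assumption on my part):

```python
def T_prime(n, F):
    """Bounded check of 'foreach a: exists b: F(a, b)': for every
    a < n there must be a witness b, searched for without bound.
    If the full statement f is true, T_prime halts for every n;
    if f fails at some a, the witness search for that a never
    terminates -- so 'T_prime halts for all n' is equivalent to f."""
    for a in range(n):
        b = 0
        while not F(a, b):  # unbounded witness search
            b += 1
    # reaching this point means every a < n has a witness

# f: "for each a there is a larger even number b" -- true, so
# T_prime halts for every n:
T_prime(5, lambda a, b: b > a and b % 2 == 0)
```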

Comment author: Jonathan_Graehl 23 February 2012 03:01:54AM 0 points [-]

I know that material, but couldn't figure out how your informal descriptions map to it in every case. (When following a proof, I like to make sure I understand every step, and fix problems before proceeding). If this isn't intended as new, then I won't sweat it.

Comment author: Christian_Szegedy 23 February 2012 03:53:40AM *  3 points [-]

The overall message is not really new technically, but its philosophical implications are somewhat surprising even for mathematicians. In general, looking at the same thing from different angles helps to acquire a more thorough understanding, even if it does not necessarily provide a clear short-term benefit.

A few years ago, I chatted with a few very good mathematicians who were not aware of the (relatively straightforward) equivalence between the theorems of Turing and Goedel, but they could see it within a few minutes and had no problem grasping the inherent, natural connection between logical systems and program-evaluating algorithms.

I think that there is quite a bit of mysticism and misconception regarding mathematical logic, set theory, etc. among the general public. I think that it is valuable to put them in a context that clarifies their real, inherently algorithmic nature. It may also be helpful for cleaning up one's own world view.

I am sorry if my account was unclear at points, and I am glad to provide clarification on any of them.

Comment author: Jonathan_Graehl 22 February 2012 07:42:14PM 3 points [-]

You seem to be limiting yourself to countably infinite models when you say you can (possibly infinitely) loop over all the items in checking quantifications.

I like your theme/intuition (aesthetically; in terms of it bearing fruit, I don't see it yet), but the meat of this post is riddled with unclear (to me) statements. It would help if we could see a more formal version.

Comment author: Christian_Szegedy 23 February 2012 02:01:29AM *  1 point [-]

I expected the reaction with the countably infinite models, but I did not expect it to be the first. ;)

I wanted to get into that in the write-up, but I had to stop at some point. The argument is that in order to have scientific theories, we need falsifiability, which means that we always necessarily deal with a discrete projection of the physical world. On the other hand, so far every discrete manifestation of a physical system has seemed to be modellable by Turing machines. (This assumption is called the Church thesis.) If you add these two, then you arrive at the above conclusion.

Another reason for it not being a problem is that every (first-order) axiom system has a countable model by the Löwenheim–Skolem theorem. (Yes, even the theory of the real numbers has a countable model, which is also known as the Skolem paradox.)

Actually, I don't think that the technical content of the write-up is novel; it is probably something that was already clear to Turing, Church, Goedel and von Neumann in the 1940s and '50s. IMO, the takeaway is a certain, more pragmatic way of thinking about logic in the age of information processing, instead of sticking to an outdated intuition. Also the explicit recognition that the domains of mathematical logic and AI are much more directly connected than it would naively seem.

Logic: the science of algorithm evaluating algorithms

6 Christian_Szegedy 22 February 2012 06:13PM

"Mathematical logic is the science of algorithm evaluating algorithms."

Do you think that this is an overly general, far-fetched proposition or an almost trivial statement? Wait, don't cast your vote before the end of this short essay!

It is hard to dispute that logic is the science of drawing correct conclusions. It studies theoretically falsifiable rules that lead to derivations which are verifiable in a finite number of mechanical steps, even by machines.

Let's dig a bit deeper by focusing on the "drawing correct conclusions" part first. It implies that logic deals both with abstract rules: "drawing", and their meaning: "conclusions".

Logic is not just about the mindless following of certain rules (that's algebra :P); its conclusions must have truth values that refer to some "model". Take for example De Morgan's law:

not (a and b) = (not a) or (not b).

It can be checked for all four possible substitutions of boolean values: a = false, b = false; a = false, b = true; .... If we agree upon the standard meaning of the logical not, or and and operators, then we must conclude that De Morgan's rule always holds. On the other hand, the similar-looking rule

not (a and b) = (not a) and (not b)

can easily be refuted by evaluating it on the counterexample a = false, b = true.
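Both checks are small enough to run mechanically; a quick Python sketch:

```python
from itertools import product

def de_morgan_ok(a, b):
    """De Morgan's law, checked at one substitution."""
    return (not (a and b)) == ((not a) or (not b))

def wrong_rule_ok(a, b):
    """The similar-looking but invalid rule."""
    return (not (a and b)) == ((not a) and (not b))

# Exhaustively check all four substitutions: De Morgan's law holds.
assert all(de_morgan_ok(a, b) for a, b in product([False, True], repeat=2))

# The wrong rule fails on concrete counterexamples.
counterexamples = [(a, b) for a, b in product([False, True], repeat=2)
                   if not wrong_rule_ok(a, b)]
print(counterexamples)  # [(False, True), (True, False)]
```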

Generally: in any useful mathematical system, logical conclusions should work in some interesting model.

However, in general, total verifiability is way too much to ask. As Karl Popper pointed out, one must often be satisfied with the falsifiability of scientific statements as a criterion. For example, the following logical rule

not (for each x: F(x)) <=> exists x: not F(x)

is impossible to check for every formula F. Statements that are not directly checkable include all those where the set of possible substitutions is (potentially) infinite.
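On a finite domain, by contrast, the duality is mechanically checkable; a tiny illustrative sketch (the domain and predicate are arbitrary choices):

```python
# On a finite domain, the quantifier duality can be checked by brute force:
domain = range(10)
F = lambda x: x < 7

lhs = not all(F(x) for x in domain)   # not (for each x: F(x))
rhs = any(not F(x) for x in domain)   # exists x: not F(x)
assert lhs == rhs                     # both True here: x = 7 falsifies F
print(lhs, rhs)  # True True
```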

This observation could be formalized by saying that a mapping from the abstract to the concrete is required. This thinking can be made precise by formalizing further: logicians study the connection between axiom systems and their models.

But wait a minute: isn't there something fishy here? How could the process of formalization itself be formalized? Isn't this some kind of circular reasoning? In fact, it is deeply circular on several levels. The most popular way of dealing with this Gordian knot is simply to cut it using some kind of naive set theory in which the topmost level of arguments is concluded.

This may be good enough for educational purposes, but the following questions should always be asked: What is the basis of those top-level rules? Could there be any mistake there? Falsifiability always implies an (at least theoretical) possibility of our rules being wrong, even at the topmost level. Does using a meta-level set theory mean that there is some unquestionable rule we have to accept as God-given, at least there?

Fortunately, the falsifiability of axioms has another implication: it requires only a simple, discrete, and finite process to refute them: an axiom or rule is either falsified or not. Checking counterexamples is like experimental physics: any violation must be observable and reproducible. There are no fuzzy, continuous measurements here, only discrete manipulations. If no mistakes were made and some counterexample is found, then one of the involved logical rules or axioms had to be wrong.

Let's squint our eyes a bit and look at the whole topic from a different perspective. In the traditional view, axiom systems are considered to be sets of rules that allow for drawing conclusions. This can also be rephrased as: axiom systems can be cast into programs that take chains of arguments as parameters and test them for correctness.

This seems good enough for the formal rules, but what about the semantics (their meaning)?

In order to define the semantics, there needs to be a map to something else that is formally checkable, governed by symbolic rules, which is again just information processing. Following that path, we end up with the answer: A logical system is a program that checks that certain logical statements hold for the behavior of another program (the model).

This is just the first simplification, and we will see how the notions of "check", "logical statement", and "holds" can also be dropped and replaced by something more generic and natural. But first, let's get concrete and look at the two most basic examples:

  1. Logical formulas: The model is the set of all logical formulas given in terms of the binary and, or and the not function. The axiom system consists of a few logical rules like the commutativity, associativity and distributivity of and and or, the De Morgan laws, as well as the rule not (not a) = a. (The exact choice of the axiom system is somewhat arbitrary and is not really important here.) This traditional description can be turned into: The model is a program that takes a boolean formula as input and evaluates it on given (input) substitutions. The axiom system can be turned into a program that, given a chain of derivations of equalities of boolean formulas, checks that each step is a rewriting according to one of the predetermined axioms, "proving" the equality of the formulas at the beginning and end of the chain. Note that given two supposedly equal boolean formulas ("equality proven using the axioms"), a straightforward loop around the model can check that those formulas are really equivalent, and therefore our anticipated semantic relationship between the axiom system and its model is clearly falsifiable.
  2. Natural numbers: Our model is the set of all arithmetic expressions using +, *, - on natural numbers, predicates using < and = on arithmetic expressions, and any logical combination of predicates. For the axiom system, we can choose the set of Peano axioms. Again, we can turn the model into a program by evaluating any valid formula in the model. The axiom system can again be turned into a program that checks the correctness of logical chains of derivations. Although we cannot verify the correctness of every Peano formula in the model by substituting each possible value, we can still set up an infinite loop that arrives at every substitution within a finite number of steps. That is: falsifiability still holds.
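The "infinite loop that arrives at every substitution in finitely many steps" can be sketched by dovetailing over tuples of naturals ordered by their total sum (a standard enumeration, used here purely as an illustration):

```python
from itertools import count, islice

def splits(total, k):
    """All k-tuples of natural numbers summing to `total`."""
    if k == 1:
        yield (total,)
    else:
        for head in range(total + 1):
            for rest in splits(total - head, k - 1):
                yield (head,) + rest

def substitutions(arity):
    """Enumerate all tuples of naturals, dovetailing by total sum,
    so every substitution is reached after finitely many steps."""
    for total in count():
        yield from splits(total, arity)

print(list(islice(substitutions(2), 6)))
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

Wrapping any decidable predicate-check inside this enumeration gives exactly the falsification loop described in example 2.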

The above two examples can easily be generalized to: "A logical system is a program that checks that certain kinds of logical statements can be derived for the behavior of another program (the model)."

Let us simplify this a bit further. We can replace the checking part altogether by noticing that, given a statement, the axiom-system checker program can loop over all possible chains of derivations for the statement and its negation. If that program stops, then the logical correctness of the statement (or of its negation) has been established; otherwise it can neither be proven nor refuted from those axioms. (That is: it is independent of those axioms.)
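A sketch of that enumeration in Python, with a deliberately toy "axiom system" standing in for a real proof checker (the encoding of proofs as numbers is purely illustrative):

```python
from itertools import count

def decide(statement, negation, is_proof_of, proofs=None):
    """Enumerate candidate derivations; halt as soon as one proves
    the statement or its negation.  If the statement is independent
    of the axioms, this loop never halts -- matching the text."""
    for p in count() if proofs is None else proofs:
        if is_proof_of(p, statement):
            return ("provable", p)
        if is_proof_of(p, negation):
            return ("refutable", p)

# Toy 'axiom system': proof p establishes ('even', k) iff 2*p == k,
# and ('odd', k) iff 2*p + 1 == k.  (Purely illustrative.)
def is_proof_of(p, s):
    tag, k = s
    return (2 * p == k) if tag == 'even' else (2 * p + 1 == k)

print(decide(('even', 6), ('odd', 6), is_proof_of))  # ('provable', 3)
```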

Therefore, we end up just saying: "A logical system is a program that correctly evaluates whether a certain logical statement holds for the behavior of another program (whenever the evaluator program halts)."

Unfortunately, we still have the relatively fuzzy "logical statement" term in our description. Is this necessary?

In fact, quantifiers in logical statements can easily be replaced by loops around the evaluating program that check the corresponding substitutions. Functions and relations can be resolved similarly. So we can extend the model program from a simple substitution method to one searching for a solution, by adding suitable loops around it. The main problem is that those loops may be infinite. Still, they always loop over a countable set: whenever there is a matching substitution, the search program will find it. We have at least falsifiability, again. For example, the statement of Fermat's Last Theorem is equivalent to the statement that the program searching for a counterexample never stops.
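The Fermat search program can be sketched directly; to keep the sketch testable it takes the minimal exponent as a parameter, so that with exponent 2 it halts on a Pythagorean triple:

```python
from itertools import count

def search_counterexample(min_exp):
    """Dovetail over all (x, y, z, n) with n >= min_exp, looking for
    x**n + y**n == z**n.  With min_exp = 3 this is the Fermat search:
    FLT is equivalent to the claim that it never halts."""
    for s in count(2):                 # growing size bound (dovetailing)
        for x in range(1, s):
            for y in range(1, s):
                for z in range(1, s):
                    for n in range(min_exp, s):
                        if x ** n + y ** n == z ** n:
                            return x, y, z, n

# Sanity check with exponent 2: the search halts on a Pythagorean triple.
print(search_counterexample(2))  # (3, 4, 5, 2)
```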

In short: the statement "logical statement S holds for program P" can always be replaced by either "program P' stops" or "program P' does not stop" (where P' is a suitable program using P as a subroutine, depending on the logical statement). That is, we finally arrive at our original statement:

"Mathematical logic is the science of algorithm evaluating algorithms [with the purpose making predictions on their (stopping) behavior.]"

Simple enough, isn't it? But can this be argued backward? Can the stopping problem always be re-cast as a model-theoretic problem on some model? In fact, it can. Logic is powerful, and the semantics of the workings of a program is easily axiomatized. There really is a relatively straightforward one-to-one correspondence between model theory and algorithms taking programs as arguments to predict their (stopping) behavior.

Still, what can be gained by having such an algorithmic view?

First of all, it has a remarkable symmetry that is not explicitly apparent from the traditional viewpoint: it is much less important which program is the model and which is the "predictor". Prediction goes both ways: the roles of the programs are mostly interchangeable. The distinction between concrete and abstract vanishes.

Another point is the conceptual simplicity: the need for assuming a meta-system vanishes. We treat algorithmic behavior as the single source of everything and look for symmetric correlations between the behavior of programs, instead of postulating higher and higher levels of meta-theories.

Also, the algorithmic view has quite a bit of simplifying power due to its generality:

Turing's halting theorem is conceptually very simple. (It seems almost too simple to be interesting.) Goedel's theorem, on the other hand, looks more technical and involved. Still, by the above correspondence, Turing's halting theorem is basically just a more general version of Goedel's theorem. By the correspondence between the algorithmic and logical views, Turing's theorem can be translated to: every generic enough axiom system (corresponding to a Turing-complete language) has at least one undecidable statement (an input program for which the checking program does not stop). The only technically involved part of Goedel's theorem is to check that its corresponding program is Turing complete. However, with the right goal in mind, this is not hard to check at all.

Comment author: Christian_Szegedy 15 September 2010 06:51:28AM 6 points [-]

The object of life is not to be on the side of the majority, but to escape finding oneself in the ranks of the insane.

Marcus Aurelius

Comment author: rwallace 03 September 2010 12:57:16PM 5 points [-]

There are schools that teach Go intensively from an early age, so that a 10-year-old student from one of those schools is already far better than a casual player like me will ever be, and it just keeps going up from there. People don't seem to get tired of it.

Every time I contemplate that, I wish all the talent thus spent, could be spent instead on schools providing similarly intensive teaching in something useful like science and engineering. What could be accomplished if you taught a few thousand smart kids to be dan-grade scientists by age 10 and kept going from there? I think it would be worth finding out.

Comment author: Christian_Szegedy 08 September 2010 07:08:36AM *  2 points [-]

I agree with you. I also think that there are several reasons for that:

First, competitive games (intellectual or physical sports) are easier to select and train for, since the objective function is much clearer.

The other reason is more cultural: if you train your child for something more useful like science or mathematics, then people will say: "Poor kid, are you trying to make a freak out of him? Why can't he have a childhood like anyone else?" Traditionally, there is much less opposition to music, art or sports training. Perhaps they are viewed as "fun activities."

Thirdly, it also seems that academic success is a function of more variables: communication skills, motivation, perspective, taste, wisdom, luck, etc. So early training will yield much less of a head start than in a more constrained area like sports or music, where it is almost mandatory for success (age 10, or even 6, is almost too late to begin seriously in some of those areas).

In response to Something's Wrong
Comment author: Christian_Szegedy 07 September 2010 06:26:45PM *  2 points [-]

I found the most condensed essence (also parody) of religious arguments for fatalism in Greg Egan's Permutation City:

Even though I know God makes no difference. And if God is the reason for everything, then God includes the urge to use the word God. So whenever I gain some strength, or comfort, or meaning, from that urge, then God is the source of that strength, that comfort, that meaning. And if God - while making no difference - helps me to accept what's going to happen to me, why should that make you sad?

Logically irrefutable, but utterly vacuous...

Comment author: simplicio 03 September 2010 05:22:13PM 4 points [-]

Russell is not just saying that beliefs should be proportional to evidence (if anyone on LW disagrees with THAT, I'll be shocked); he's saying that if that were done, it would eliminate most of the world's problems.

If he had said 'many' instead of 'most,' it would be a great quote. Unfortunately there is a huge class of problems that, although they may eventually be solved by rational methods, are not solved just by being rational. Turning everyone rational overnight doesn't automatically cure death, for example. Nor does it remedy the partiality of human utility functions, or cure psychopaths of their psychopathy... et cetera.

Comment author: Christian_Szegedy 03 September 2010 06:05:47PM *  3 points [-]

You should not take the statement too literally: look at it in its historical context. Probably the biggest problems in Russell's time were wars caused by nationalism and unfair resource allocation due to bad (idealistic/traditionalist) policies. Average life expectancy was around 40-50 years. I don't think anyone considered e.g. mortality a problem that can or should be solved. (Neither do over 95% of people today.) The population was much smaller. The Earth was also in a much more pristine state than today.

Times have changed. We have more technical issues today, since we can address more issues with technology; plus we are on a general trajectory which is ecologically unsustainable if we don't manage to invent and use the right technologies quickly. I think this is the fundamental mistake traditional ecological movements are making: there is no turning back. Either we manage to progress rapidly enough to counteract what we broke (and will inevitably break), or our civilization collapses. There is no stopping or turning back; we have already bet our future. Being reasonable would have worked 100 years ago; today we must be very smart as well.

Comment author: Jayson_Virissimo 03 September 2010 02:11:26AM 1 point [-]

I am very uncertain about the truth of the proposition, so I would like to hear arguments in favor of or against it to develop a more informed opinion.

Comment author: Christian_Szegedy 03 September 2010 06:07:20AM 1 point [-]

I just find it a bit circular that you want evidence for the assertion that assertions need evidence.

Comment author: LucasSloan 02 September 2010 09:35:42PM 0 points [-]

Do you think they would be back by 7 PM on the 6th?

Comment author: Christian_Szegedy 02 September 2010 11:23:47PM 0 points [-]

I'd prefer Sunday or Saturday (9/5 would work for me.)
