In this post, I'll try to put cousin_it's bounded version of Gödel's theorem on a rigorous footing. This will be logic-heavy, not programming-heavy. The key unproved assumptions are listed at the bottom of the post.

Notes on terminology:

  • T is our deductive system.
  • ⊥ is a canonical contradiction in T. ¬A is defined to be A→⊥.
  • T ⊢ A means that T proves the proposition A.
  • T ⊢j A means that T proves A in a proof requiring no more than j symbols.
  • □ A is the statement "there exists a number that is an encoding in Gödel numbering of a valid proof in T of A".
  • □n A is the statement "there exists a number that is an encoding in Gödel numbering of a valid proof in T of A, with no more than n symbols in the proof".

The first needed fact is that there is a computable function g such that if T ⊢n A, then T ⊢g(n) □n A. In informal language, this means that if T can prove A in n symbols, it can prove that it can do so, in g(n) symbols. This g will be the critical piece of the infrastructure.
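To make the role of g concrete, here is a minimal sketch in Python, under toy assumptions: proofs are strings, `check` stands in for T's primitive recursive verifier, and the trace format is entirely hypothetical. The idea is that the transcript of running the verifier on a proof of length n is itself (once encoded in T) a proof of □n A, and its length depends only on n.

```python
# A toy sketch, not T itself: `check`, `trace_check`, and the trace
# format are hypothetical names, purely for illustration.

def check(p: str, a: str) -> bool:
    """Stand-in for T's primitive recursive proof verifier."""
    return p.startswith("(") and p.endswith(")")  # placeholder test

def trace_check(p: str, a: str) -> list[str]:
    """Record every elementary step of the verifier. Suitably encoded
    in T, this transcript is a proof of "there is a proof of `a` with
    no more than n symbols", for n = len(p)."""
    trace = [f"symbol {i} of p is {ch!r}" for i, ch in enumerate(p)]
    trace.append(f"verifier accepts: {check(p, a)}")
    return trace

def g(n: int) -> int:
    """Computable bound on the transcript length as a function of n
    alone - computable because the verifier's running time is."""
    return n + 1  # in this toy: one line per symbol, plus the verdict
```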

Now, these arguments work with general definitions of proofs, but I like to put exact bounds where I can and move uncertainty to a single place. So I will put some restrictions on the language we use, and on what counts as a proof. First, I will assume the symbols "(", ")", "j", and "=" are part of the language - "(" and ")" as parentheses, "j" as a free variable, and "=" as equality.

I will require that proofs start and end with a parenthesis. A proof need not state what statement it is proving! As long as there is a computable process that can check that "p is a proof of A", p need not mention A. This means that I can make the following assumptions (a toy sketch of these rules follows the list):

  • If p is a proof of A→B and q a proof of A, then (pq) is a proof of B (modus ponens).
  • If p is a proof of A→B and q a proof of ¬B, then (pq) is a proof of ¬A (modus tollens).
  • A proof that A↔B will be treated as a valid proof of A→B and B→A also.
  • Well-formed sentences with free variables can also have proofs (as in "j=0 or j>0" and "j>3 → j²>9").
  • If A and B have free variable j, and p is a proof that A→B, then (p(j=n)) is a proof that A(n)→B(n).
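Here is the promised sketch of the combination rules in Python, under the same toy assumptions as before; `Proof`, `modus_ponens`, and `modus_tollens` are hypothetical names, and the statement-matching convention is mine, not T's. The point is the length bookkeeping: combining two proofs always costs exactly two extra symbols.

```python
# A toy model of the concatenation rules above, not a real verifier.
from dataclasses import dataclass

@dataclass
class Proof:
    text: str    # the proof string, wrapped in parentheses
    proves: str  # the statement it establishes

def modus_ponens(p: Proof, q: Proof, a: str, b: str) -> Proof:
    """From p : A -> B and q : A, build (pq) : B."""
    assert p.proves == f"({a})->({b})" and q.proves == a
    return Proof(f"({p.text}{q.text})", b)

def modus_tollens(p: Proof, q: Proof, a: str, b: str) -> Proof:
    """From p : A -> B and q : not B, build (pq) : not A."""
    assert p.proves == f"({a})->({b})" and q.proves == f"({b})->_|_"
    return Proof(f"({p.text}{q.text})", f"({a})->_|_")

# The bookkeeping used throughout the post:
# len("(" + p + q + ")") == len(p) + len(q) + 2.
```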

For a number n, l(n) is defined as the minimum number of symbols in the language of T needed to define n; generally l(n) is proportional to log(n).
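For concreteness, here is one hypothetical numeral scheme under which l(n) grows like log(n): write n using zero, a successor s, and a doubling function d (none of these symbols were fixed above, so this is only an illustration of the growth rate).

```python
def numeral(n: int) -> str:
    """A term denoting n, using hypothetical symbols 0, s (successor)
    and d (doubling); the term has O(log n) symbols."""
    if n == 0:
        return "0"
    if n % 2 == 0:
        return f"d({numeral(n // 2)})"  # d(x) = 2x
    return f"s({numeral(n - 1)})"       # s(x) = x + 1

def l(n: int) -> int:
    """Symbol count of the defining term: proportional to log(n)."""
    return len(numeral(n))
```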

R will be defined using the diagonal lemma, with one extra free variable j, as:

  • T ⊢a   R ↔ (□j R → ⊥)

In other words, R(n) claims that there is no proof of R(n) in n symbols or fewer. Then:

  • T ⊢   □j R → □j+a+2 (□j R → ⊥)

Here T simply starts with a proof of length j of R, adds the preceding proof of length a that R implies (□j R → ⊥), plus two extra parentheses, and thus generates a proof of length j+a+2 of (□j R → ⊥). This is an algorithmic process whose validity T can establish.
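In the toy model from earlier, this step is a single call to modus_ponens, and the length claim can be checked directly (the proof contents here are placeholder characters, purely for the symbol count):

```python
# Reusing Proof and modus_ponens from the earlier sketch (hypothetical
# toy, not T itself). e stands for the length-a diagonal-lemma proof,
# r for the length-j proof of R.
j_len, a_len = 40, 10  # illustrative sizes of the two proofs
e = Proof("(" + "e" * (a_len - 2) + ")", "(R)->((Box_j R)->_|_)")
r = Proof("(" + "r" * (j_len - 2) + ")", "R")
step = modus_ponens(e, r, "R", "(Box_j R)->_|_")
assert len(step.text) == j_len + a_len + 2  # the j + a + 2 of the text
```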

  • T ⊢   □j+a+2 ((□j R) → ⊥)    →    (□g(j) (□j R) → (□g(j)+j+a+4 ⊥))

This implies that if we can prove in j+a+2 symbols that a certain expression (here (□j R)) implies ⊥, then a proof of that expression in any specific number of symbols (here g(j)) will give us a proof of ⊥ in g(j)+j+a+4 symbols. That longer proof will simply be the proof of (□j R) → ⊥ (length j+a+2), followed by the proof of □j R (length g(j)), with starting and terminating parentheses (two more symbols). That whole process was an algorithm, and T can show that it works.
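The symbol count for this step, written out:

```latex
\underbrace{(j + a + 2)}_{\text{proof of } (\Box_j R) \to \bot}
+ \underbrace{g(j)}_{\text{proof of } \Box_j R}
+ \underbrace{2}_{\text{brackets}}
= g(j) + j + a + 4
```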

We can put this together with the previous theorem, to get:

  • T ⊢    □j R →  (□g(j) (□j R) → (□g(j)+j+a+4 ⊥))

Now we must use:

  • T ⊢   □j R → □g(j) (□j R)

This is just the formalisation, inside T, of the opening fact about g: a proof of length j can be checked, and the check turned into a proof of □j R, in g(j) symbols. Putting that together with the previous expression gives:

  • T ⊢b   □j R →  □g(j)+j+a+4 ⊥

Here b is some fixed constant that, critically, doesn't depend on n (because we haven't even seen n yet!). Now we are finally ready to substitute n for j.

  • T ⊢b+l(n)+6   □n R(n) →  □g(n)+n+a+4 ⊥

This is simply the proof above, followed by the expression (j=n), of 4+l(n) symbols, and two more surrounding brackets. Now we add the critical assumption: that there exists a proof of length c that there is no proof of contradiction in T, up to length d:

  • T ⊢c   (□d ⊥) → ⊥

For the argument to work, we will require c ≤ n-(b+l(n)+6)-(a+l(n)+10) and d = g(n)+n+a+4. Then putting the above two statements together (I seem to say that a lot), and remembering that A → ⊥ is the definition of ¬A and that the proof system allows direct modus tollens at the cost of two extra brackets,

  • T ⊢n-a-l(n)-8   □n R(n) → ⊥
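This is also where the earlier requirement on c comes from: the modus tollens step costs the two proof lengths plus two brackets, and we need the result to fit the budget just used.

```latex
\underbrace{(b + l(n) + 6)}_{\Box_n R(n) \to \Box_d \bot}
+ \underbrace{c}_{\Box_d \bot \to \bot}
+ \underbrace{2}_{\text{brackets}}
\le n - a - l(n) - 8
\iff
c \le n - (b + l(n) + 6) - (a + l(n) + 10)
```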

Now, we can take the original diagonal lemma definition of R (provable in a symbols), and add in j=n, giving

  • T ⊢a+l(n)+6   R(n) ↔ (□n R(n) → ⊥)

Again combining the two, we finally reach our goal:

  • T ⊢n   R(n)
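The final count works out exactly, reading the biconditional as a proof of (□n R(n) → ⊥) → R(n) and applying modus ponens:

```latex
\underbrace{(a + l(n) + 6)}_{(\Box_n R(n) \to \bot) \to R(n)}
+ \underbrace{(n - a - l(n) - 8)}_{\Box_n R(n) \to \bot}
+ \underbrace{2}_{\text{brackets}}
= n
```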

From now on, keeping track of how many symbols T requires is less vital, but we'll do it anyway to see how many symbols it takes for the contradiction to emerge. As before, T can prove in g(n) symbols that it can prove R(n) in n symbols:

  • T ⊢g(n)   □n R(n)

Combining this line with the one where we proved ¬□n R(n) (equivalently, □n R(n) → ⊥), we get a direct modus ponens:

  • T ⊢g(n)+n-a-l(n)-6   ⊥

I.e. T proves a contradiction. And an exhausted QED.
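For the record, that last modus ponens counts out as:

```latex
\underbrace{g(n)}_{\Box_n R(n)}
+ \underbrace{(n - a - l(n) - 8)}_{\Box_n R(n) \to \bot}
+ \underbrace{2}_{\text{brackets}}
= g(n) + n - a - l(n) - 6
```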

So, the critical assumption that blows everything up was that there was a proof, of length less than n-(b+l(n)+6)-(a+l(n)+10), that there is no proof of contradiction in T of length up to g(n)+n+a+4. Neglecting the constant terms, this means that there is a proof of length n-2l(n) (with l(n) of order log(n)) of lack of contradictions up to length g(n)+n.

So how fast does the g(n) term grow? This measures how long it takes T to parse a proof of length n, checking validity. I would hazard a guess that g(n) is no worse than quadratic in n, and possibly of order n·log(n) or linear in n. It depends partially on the efficiency of the proofs we allow. If our proofs can call any previously proved statement in the whole of the proof structure, the complexity of checking may be n² or n·log(n). If our proofs are less efficient and longer - only using modus ponens or tollens on adjacent expressions bracketed together, and hence needing a lot of pointless repetition - then the parser has an easier job, and can probably check the proof in a linear number of steps.
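To illustrate the linear case, here is a hedged sketch of a one-pass checker for the purely local format; the name is hypothetical, and the actual rule application is elided - this only shows the single-pass, stack-based structure that makes linear time plausible.

```python
def check_local_proof(p: str) -> bool:
    """One left-to-right pass, O(len(p)) stack operations: plausible
    when rules only ever combine adjacent bracketed subproofs."""
    stack: list[str] = []
    for ch in p:
        if ch == "(":
            stack.append(ch)
        elif ch == ")":
            if not stack:
                return False
            # a real checker would here apply modus ponens/tollens
            # to the two adjacent subproofs being closed off
            stack.pop()
    return not stack  # accept iff brackets balance
```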

Assumptions still needing fully rigorous proofs:

  • Counting the number of symbols in Gödel numbering of a proof is a primitive recursive operation.
  • There is a computable function g such that if T ⊢n A, then T ⊢g(n) □n A, and T can prove this.
  • Diagonalisation works with an extra free variable.
  • My definition of what constitutes a proof is not problematic.
Comments

bogus:

Are these results actually novel? Proof complexity is a fairly well-established field in logic, and related results definitely exist, starting with Gödel's own Speedup theorem (which informally states that "simple" theorems exist with very long proofs, and that such proofs can be shortened by shifting to a more powerful system.)

[anonymous]:

I've been searching google scholar and haven't found these exact results. Most of the related papers measure proof length in terms of steps instead of symbols.

Exceptions which I'm still reviewing are

P. Pudlák, "On the length of proofs of finitistic consistency statements in first order theories": www.math.cas.cz/~pudlak/fin-con.pdf

R. Parikh, "Existence and feasibility in arithmetic", which I can't find, and

S. Buss, "Bounded Arithmetic, Proof Complexity and Two Papers of Parikh": www.math.ucsd.edu/~sbuss/ResearchWeb/parikh/paper.pdf , which reviews the previous Parikh paper and mentions some derived work.

Reply: Your first link seems to be almost exactly what I wanted, and the bounded Löb's theorem should be a simple corollary. Thanks! This really simplifies the task of explaining Löbian cooperation =)

Reply: Cool, thanks for the links!