
Bounded versions of Gödel's and Löb's theorems

Post author: cousin_it, 27 June 2012 06:28PM

While writing up decision theory results, I had to work out bounded versions of Gödel's second incompleteness theorem and Löb's theorem. Posting the tentative proofs here to let people find any major errors before adding formality. Any comments are welcome!

Bounded Gödel's theorem

Informal meaning: if a theory T has a short proof that any proof of T's inconsistency must be long, then T is inconsistent.

Formal statement: there exists a total computable function f such that for any effectively generated theory T that includes Peano arithmetic, and for any integer n, if T proves in n symbols that "T can't prove a falsehood in f(n) symbols", then T is inconsistent.
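In symbols (writing T ⊢ₖ φ, a shorthand not used in the post itself, for "T proves φ within k symbols"), the statement can be rendered as:

```latex
% T \vdash_k \varphi abbreviates "T proves \varphi within k symbols".
\exists f \ \text{total computable} \ \
\forall T \supseteq \mathrm{PA} \ \ \forall n \in \mathbb{N}:
\quad
T \vdash_n \big(\neg\, T \vdash_{f(n)} 0{=}1\big)
\;\Longrightarrow\;
T \ \text{is inconsistent}.
```

Here "proving a falsehood" is formalized, as usual, as proving 0=1.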

Proof

If the theory T takes more than n symbols to describe, the condition of the theorem can't hold, so let's assume T is smaller than that.

Let's take g(n)>>n and f(n)>>exp(g(n)). For any integer n, let's define a program R that enumerates all proofs in the theory T up to length g(n), looking for either a proof that R prints something (in which case R immediately halts without printing anything), or a proof that R doesn't print anything (in which case R prints something and halts). If no proof is found, R halts without printing anything.
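The control flow of R can be sketched as follows. This is only a schematic: the proof search is stubbed out with a hypothetical `find_proof` function (here it always fails, so R halts silently, matching the consistent case), and a real construction would enumerate actual T-proofs up to g(n) symbols and would need a quine-style trick (Kleene's recursion theorem) for R to refer to its own source.

```python
def find_proof(statement, max_symbols):
    """Hypothetical proof search: enumerate all T-proofs of at most
    max_symbols symbols and return one proving `statement`, or None.
    Stubbed out here; a real version needs a proof checker for T."""
    return None  # stand-in: no proof found

def R(g_n):
    # Search, up to g(n) symbols, for a T-proof about R's own behavior.
    # (The real R obtains a description of itself via quining.)
    if find_proof("R prints something", g_n) is not None:
        return None            # proved to print -> halt WITHOUT printing
    if find_proof("R prints nothing", g_n) is not None:
        return "something"     # proved silent -> print and halt
    return None                # no proof found -> halt without printing
```

The point of the two branches is that whichever statement R finds a proof of, R does the opposite, so any proof R finds makes the proved statement false.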

Assume that the theory T is consistent. Note that if the program R finds some proof, then it makes the proved statement false. Also note that the theory T can completely simulate the execution of the program R in exp(g(n)) symbols. Therefore if the program R finds any proof, the resulting contradiction in the theory T can be "exposed" using exp(g(n)) symbols. Since f(n)>>exp(g(n)), and the theory T proves in n symbols that "T can't prove a falsehood in f(n) symbols", we see that T proves in slightly more than n symbols that the program R won't find any proofs. Therefore the theory T proves in slightly more than n symbols that the program R won't print anything. Since g(n)>>n, the program R will find that proof and print something, making the theory T inconsistent. QED.
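The length accounting behind the contradiction can be summarized as follows (a sketch; the bounds absorb the constant and linear overheads mentioned above):

```latex
\underbrace{\text{proof found by } R}_{\le\, g(n)}
\;+\;
\underbrace{\text{simulation of } R \text{ inside } T}_{\le\, \exp(g(n))}
\;<\; f(n)
\quad\Longrightarrow\quad
T \text{ proves } 0{=}1 \text{ in fewer than } f(n) \text{ symbols,}
```

which, combined inside T with the n-symbol proof that no such falsehood exists, lets T rule out R finding any proof at all.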

(The main idea of the proof is taken from Ron Maimon, originally due to Rosser, and adapted to the bounded case.)

Bounded Löb's theorem

Informal meaning: if having a bounded-size proof of statement S would imply the truth of S, and that fact has a short proof, then S is provable.

Formal statement: there exists a total computable function f such that for any effectively generated theory T that includes Peano arithmetic, any statement S in that theory, and any integer n, if T proves in n symbols that "if T proves S in f(n) symbols, then S is true", then T proves S.
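In symbols (again writing T ⊢ₖ S, a shorthand not used in the post, for "T proves S within k symbols"):

```latex
\exists f \ \text{total computable} \ \
\forall T \supseteq \mathrm{PA} \ \ \forall S \ \ \forall n \in \mathbb{N}:
\quad
T \vdash_n \big((T \vdash_{f(n)} S) \to S\big)
\;\Longrightarrow\;
T \vdash S.
```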

Proof

Taking the contrapositive, the theory T proves in n symbols that "if the statement S is false, then T doesn't prove S in f(n) symbols". Therefore the theory T+¬S proves in n symbols that "T+¬S doesn't prove a falsehood in f(n) symbols" (I'm glossing over the tiny changes to n and f(n) here). Taking the function f from the bounded Gödel's theorem, we obtain that the theory T+¬S is inconsistent. That means T proves S. QED.
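The reduction can be written schematically (a sketch; ≈ marks the "tiny changes" to n and f(n) that the post glosses over, and ⊢ₖ means "proves within k symbols"):

```latex
T \vdash_n \big(\neg S \to \neg(T \vdash_{f(n)} S)\big)
\;\Longrightarrow\;
(T+\neg S) \vdash_{\approx n}
  \neg\big((T+\neg S) \vdash_{\approx f(n)} 0{=}1\big)
\;\Longrightarrow\;
T+\neg S \ \text{inconsistent}
\;\Longrightarrow\;
T \vdash S.
```

The middle step uses the deduction theorem: a proof of 0=1 in T+¬S of length k converts to a T-proof of S of length roughly k, so "T doesn't prove S in f(n) symbols" transfers to "T+¬S doesn't prove a falsehood in about f(n) symbols".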

(The main idea of the proof is taken from Scott Aaronson, originally due to Kripke or Kreisel.)

As a sanity check, the regular versions of the theorems seem to be easy corollaries of the bounded versions. But of course there could still be errors...

Comments (21)

Comment author: [deleted] 27 June 2012 07:53:21PM 4 points

When I get some time I'll sit down and read these more carefully. However, before that time comes I wanted to congratulate you on tackling these things!

<3

EDIT 1: Remarks on Bounded Goedel's theorem

1) It is unclear (to me, who has not studied such things in several years) where the proof uses the fact that T includes PA. Maybe it is necessary for the construction of R?

Other than that, I rewrote the proof as explicitly as I could (supplying some missing details and the like) and found no issues.

Comment author: cousin_it 28 June 2012 07:07:58AM 2 points

Thanks!

It is unclear (to me, who has not studied such things in several years) where the proof uses the fact that T includes PA. Maybe it is necessary for the construction of R?

Saying that T includes PA is shorthand for the usual Gödelian requirement that T includes the basic arithmetical truths necessary to formalize the bits of common-sense reasoning used in the proof, like simulating the execution of R.

Comment author: [deleted] 28 June 2012 08:11:20AM 2 points

I figured as much; after all, in the standard proof one only needs PA around long enough to construct Goedel numbers.

Comment author: twanvl 27 June 2012 08:34:57PM 7 points

Some comments

Also note that the theory T can completely simulate the execution of the program R in exp(g(n)) symbols. Therefore if the program R finds any proof, the resulting contradiction in the theory T can be "exposed" using exp(g(n)) symbols.

I find this statement too vague, what does simulation mean? Does it mean that T can prove in exp(g(n)) symbols that "R() == x" where x = R()? On what do you base the claim that you need exp(g(n)) symbols? I assume the intent is to bound the number of steps in the execution of R?

there exists a total computable function f such that .... For any integer n, let's define a program R ...

For this to work, the Godel number of R needs to be a computable function of n.

Let's take g(n)>>n

Why do you require g(n) to be much larger than n? How much larger does it need to be? Why not take g(n)>n, or even g(n)=n?

Assume that the theory T is consistent. Note that if the program R finds some proof, then it makes the proved statement false, therefore R won't find any proofs.

Don't you need soundness for that instead of consistency?

I am now trying to formalize the proof, to aid my own understanding.

Comment author: cousin_it 27 June 2012 08:44:24PM 6 points

Thanks for the very detailed comments, as usual!

I find this statement too vague, what does simulation mean? Does it mean that T can prove in exp(g(n)) symbols that "R() == x" where x = R()? On what do you base the claim that you need exp(g(n)) symbols? I assume the intent is to bound the number of steps in the execution of R?

Yes, that's the intent. R checks exp(g(n)) proofs, and checking each proof is less than exponential, so it all gets folded up into the exponential.

For this to work, the Godel number of R needs to be a computable function of n.

Are you sure? It seems to me that R is just something that gets used in the proof, only f(n) and g(n) need to depend on n... In any case, R depends on T, not just n. Of course the size and runtime of R need to be bounded by functions of n, and they are.

Why do you require g(n) to be much larger than n? How much larger does it need to be? Why not take g(n)>n, or even g(n)=n?

Because there are also things like going from "R won't find proofs" to "R won't print anything", which may have constant or linear costs, but not more.

Don't you need soundness for that instead of consistency?

For that statement, yes. But it's not really used in the proof, so I removed it =) Thanks!

For the next statement I only need consistency, because I build a contradiction between two proofs in T (the proof that gets found by R, and the proof by simulation of what R ends up doing).

Comment author: Stuart_Armstrong 28 June 2012 03:22:41PM 2 points

We should be able to make the bounds smaller... Let a be small (in comparison to n), and let P be the proof of length n that "T can't prove a falsehood in f(n) symbols" is true.

Then define R as above, except it searches through all proofs beginning with P, of length n+a. Then T can completely simulate the execution of R in slightly over (n+a)exp(a) symbols. Then the same reasoning would establish that we only need to take f(n) >> (n+a)exp(a) in order for the reasoning to work.

Not sure how large a should be. It might be a constant, but more likely it grows logarithmically with n (as it probably has to include the definition of n itself). Which would mean f(n) >> n^2.

If this argument works, we might be able to get even better bounds restrictions - maybe we restrict R to look at proofs starting with P, which then include the definition of n somewhere after. Or maybe R just looks at a single proof - the actual proof that R printing anything is impossible? That would reduce to f(n) >> n.

Comment author: cousin_it 28 June 2012 03:36:37PM 2 points

Or maybe R just looks at a single proof - the actual proof that R printing anything is impossible?

Yeah, if you want to make the bounds smaller, something like that might be the way to go. Paul Christiano described an explicit Loebian construction along these lines in an email on Apr 1, maybe you could repurpose that? I'm tired right now, so I guess I'll stick with the overkill bound :-)

Comment author: [deleted] 28 June 2012 04:37:44AM 2 points

This comment was stupid. The symbol papermachine referred to was T ⊢_n P, or maybe [n]P.

Comment author: [deleted] 28 June 2012 05:09:52AM 2 points

Heh, I used the same symbol to mean "T proves in n symbols P".

Comment author: cousin_it 28 June 2012 08:42:17AM 1 point

By the way, how's your work on Löb's theorem going? I'd really like to see something.

Comment author: Kindly 28 June 2012 12:00:32AM 2 points

Since f(n)>>exp(g(n)), and the theory T proves in n symbols that "T can't prove a falsehood in f(n) symbols", we see that T proves in slightly more than n symbols that the program R won't find any proofs.

Could you elaborate on this? What is the slightly-more-than-n-symbols-long proof?

Comment author: Zetetic 28 June 2012 01:01:44AM 2 points

As I understand it, because T proves in n symbols that "T can't prove a falsehood in f(n) symbols", taking the specification of R (program length) we could do a formal verification proof that R will not find any proofs, as R only finds a proof if T can prove a falsehood within g(n) < exp(g(n)) << f(n) symbols. So I'm guessing that the slightly-more-than-n-symbols-long proof is on the order of:

n + Length(proof in T that R won't print with the starting true statement that "T can't prove a falsehood in f(n) symbols")

This would vary some with the length of R and with the choice of T.

Comment author: cousin_it 28 June 2012 12:27:12AM 2 points

"If R found a proof, then two proofs would exist: a proof of length g(n) found by R, and a proof by simulation of length exp(g(n)) that R does the opposite thing. Together they would yield a proof of falsehood in less than f(n) symbols. But I know that T can't prove a falsehood in less than f(n) symbols. That's a contradiction, so R won't find a proof."

That reasoning is quite short. The description of R is bounded by the description of T, which is not longer than n symbols. The proof that "T can't prove a falsehood in f(n) symbols" has n symbols by assumption. So overall it should be no more than linear in n. Does that make sense, or have I missed something?

Comment author: Kindly 28 June 2012 12:49:47AM 1 point

How do we know what the proof by simulation does in exp(g(n)) symbols, if the length of our argument is linear in n?

Comment author: cousin_it 28 June 2012 01:16:51AM 1 point

Exhibiting the complete proof by simulation is not necessary, I think. It's enough to prove that it exists, that its length is bounded, and that it must contradict the proof found by R. All these statements have simple proofs by inspection of R.

Comment author: AlexMennen 27 June 2012 11:44:54PM 2 points

Bounded Löb's theorem seems to provide an easier way to do what you did in AI Cooperation in Practice. Or are they essentially the same thing?

Comment author: cousin_it 28 June 2012 12:28:41AM 2 points

Yeah, this post is groundwork for a better version of "AI cooperation in practice". See here for more. The original version had a much more complicated proof of a similar theorem, following the steps of Löb's original proof (aka Eliezer's picture proof) instead of Kripke's argument. I didn't really understand Kripke's argument back then.

Comment author: Stuart_Armstrong 28 June 2012 12:14:43PM 1 point

I like it (apart from the use of T and S to denote very different things). EDIT: and R!

One question: any idea how long it would take T to prove S under this reasoning?

Comment author: cousin_it 28 June 2012 12:33:34PM 0 points

I think the proof would be about g(n) symbols long, which is basically linear in n. Since R finds a proof shorter than g(n), exhibiting that proof yields a slightly longer proof that R stops early (because R either finds the exhibited proof, or stops even earlier). Then it's easy to deduce that T+¬S is inconsistent, which can be converted into a proof of S from T.

Comment author: AustinLorenz 11 June 2013 01:14:39AM 0 points

I'm not sure if this post is obsolete, but it was linked to by a recent post, so I will provide feedback.

The translation to the formal version goes from "if a theory T has a short proof that any proof of T's inconsistency must be long, then T is inconsistent" to "if T proves in n symbols that 'T can't prove a falsehood in f(n) symbols', then T is inconsistent."

But inconsistency is not the same as proving a falsehood. Assuming that PA is consistent, the theory PA+~Con(PA) is consistent, since PA cannot prove its own consistency, but this consistent theory proves the false theorem ~Con(PA).

Also, there exists no truth predicate Tru(n) within any consistent extension of PA--otherwise, we could formulate a liar paradox. Hence the notion of a "falsehood" is not expressible in our formal system.

In the proof, we define a program R which enumerates the theorems of the theory T that can be proved with g(n) symbols or less. Such a program must describe the formal system T to utilize its axioms and rules of inference. Assume that T is describable using t symbols. Then the program R requires log g(n) bits to describe the halting condition and t bits to describe T, so we can write the program R using around log g(n) + t bits.

This means that when R is searching for proofs about its behavior of length less than g(n), some of the g(n) symbols will have to describe R itself using log g(n) + t bits. This means that if R does not find a proof, it has only established that there is no proof in T that R halts or not of g(n) - (log g(n) + t) symbols. There may well exist a proof in T that R halts or not of g(n) symbols.

Now T is supposed to prove in n symbols that "T can't prove a falsehood in f(n) symbols," but n << g(n) and exp(g(n)) << f(n). This implies that log f(n) >> n, which means that n symbols do not suffice to describe a number of size f(n). Thus it is impossible to prove that "T can't prove a falsehood in f(n) symbols" using n, or slightly more than n, symbols.

Comment author: cousin_it 11 June 2013 11:08:30AM 0 points

By "proving a falsehood" I meant proving that 0=1. That's how people usually formalize something like Con(PA).