The Gödel incompleteness theorems are notorious for being surprising. I mean, they did put a hard stop to Hilbert's programme of completely, thoroughly formalizing mathematics.

And most laypeople find them very difficult to wrap their heads around.

It is, however, my belief that nothing should ever be unintuitive. That intuition, like conscious beliefs, can be trained. The territory is never weird, but your brain certainly can be.

My goal is not to give an intuitive explanation of the actual proof, nor even the actual statements, but to give you intuitive evidence that the statements make sense. The core idea is actually from E. T. Jaynes' Probability Theory; I'm only fleshing it out.

First, let's look at the statement of the second theorem, as cited on Wikipedia:

Assume F is a consistent formalized system which contains elementary arithmetic. Then F ⊬ Cons(F).

That is, F does not prove its own consistency.

Let's unwrap some of the definitions.

By "formalized system", we mean a bunch of statements that you could (assuming the Church-Turing thesis) fully enumerate, either with a computer, or, with a bit of patience, pencil and paper. Mostly you have a bunch of starting axioms and a few rules to produce more statements from them, which means you can just apply all of them in turn to produce all of the statements.
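To make the enumeration picture concrete, here is a toy sketch in Python (the system, its axiom, and its rules are made up purely for illustration): start from the axioms, apply every rule to every known statement in turn, and each derivable statement eventually shows up.

```python
from collections import deque

# A made-up toy formal system: one axiom "I", two rewrite rules.
# Breadth-first application of the rules enumerates every theorem.
def enumerate_theorems(limit=10):
    axioms = ["I"]
    rules = [lambda s: s + "U",   # rule 1: append "U"
             lambda s: s + s]     # rule 2: double the string
    seen = set(axioms)
    queue = deque(axioms)
    theorems = []
    while queue and len(theorems) < limit:
        s = queue.popleft()
        theorems.append(s)
        for rule in rules:
            t = rule(s)
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return theorems

print(enumerate_theorems())
```

Any real proof system is messier, but the shape is the same: a mechanical procedure that, given enough patience, lists every provable statement.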

By a "consistent" system, we mean one that's not completely insane. It doesn't believe anything contradictory. This has wider implications. In fact, if your system proves just one contradiction, it actually proves everything! Why? Because if you once tell a lie, the truth is ever after your enemy.
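The step from "one contradiction" to "everything" is the principle of explosion: from P you may infer "P or Q" for any Q whatsoever, and combined with not-P, disjunctive syllogism leaves you with Q. In truth-table terms, the implication (P and not P) → Q holds under every assignment, which this little check confirms (the function name is mine, not standard):

```python
# Principle of explosion: (P and not P) -> Q is a tautology.
# Step 1 (disjunction introduction): from P, infer "P or Q".
# Step 2 (disjunctive syllogism): from "P or Q" and "not P", infer Q.
def explosion_holds(p: bool, q: bool) -> bool:
    contradiction = p and (not p)     # can never be True
    return (not contradiction) or q   # the implication contradiction -> Q

# Check every truth assignment: the implication never fails.
assert all(explosion_holds(p, q) for p in (True, False) for q in (True, False))
print("explosion verified on all assignments")
```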

And so, suppose we have a system -- no, a guy, standing in front of you, who is currently telling you that he is not, in fact, insane. Perhaps he tells you that he believes in the latest conspiracy theory surrounding Ghislaine Maxwell and Wayfair, and that you must also believe in it, since he trusts his own epistemic process (he believes that he is not, in fact, insane).

Naturally, you would be skeptical. An insane person can very well not realize that they are insane. Perhaps the guy has a habit of believing in absolutely everything, even contradictory things (he's a pretty inconsistent dude), in which case of course he would in particular believe that he's sane, though if you press him he might also tell you that he's lost his marbles (perhaps by digging too deep into a certain paranormal imageboard).

Well, now you understand Gödel's second incompleteness theorem. Since an inconsistent formal system would prove its own consistency anyway, the fact that F ⊢ Cons(F) would not be evidence that F is in fact consistent.

EDIT: I realize that this is not, directly, evidence that F cannot in fact prove its own consistency. However, this argument should fully dissolve the question, as it then becomes completely meaningless for F to say it believes in itself. Still, if F is sufficiently self-aware, it should recognize that a model of itself should not be trusted either, and so it cannot prove its own consistency.


Now, let's look at the statement of the first theorem, again as cited on Wikipedia:

Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F.

This is rather simple, now that we have the second incompleteness theorem.

First off, since the system F can perform arithmetic, it can do anything a computer can do, or a mathematician with pencil, paper, and too much time on their hands (an infinite amount, in principle). And since we assumed that F can be worked out by a computer, it can work itself out.
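Self-reference via mechanical symbol manipulation is exactly the trick behind a quine: a program whose output is a complete description of itself. A minimal Python sketch:

```python
# A quine: the program's data is a template, and formatting the template
# with its own repr() reproduces the full source text.
s = 's = %r\nprint(s %% s)'
program = s % s   # substitute the template into itself
print(program)    # prints the two-line program verbatim
```

This is the same diagonal trick Gödel uses: via an encoding, a formula can end up talking about its own code.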

You know what else is capable of self-reference? You, dear reader.

Well, suppose then that you want to be sane, that you want to trust your epistemic process. Then, naturally you can't believe that you're insane. And yet you can't believe that you're sane either (who would believe you after they've learned about the second incompleteness theorem? Have some modesty!), for if you trust yourself too much, you risk becoming complacent. And thus you are doomed to forever doubt whether or not you are sane.

And therefore, you can neither believe nor disbelieve in your sanity.

For the system F, obviously it can't disprove its own consistency, since we do expect its model of itself to be accurate. And yet we just saw above that it can't prove its own consistency either, and so the statement of its own consistency is internally undecidable.


The general strategy above has been to cross the subject-object distinction. The "subject" bit is the part of your brain which deals with people, and the "object" bit the part which deals with objects and machines.

People often (claim to) use only one of them depending on the task. But why not make them join forces?

We find it amusing when mathematical objects are framed as being people -- however, when people are framed as being machines, it is generally perceived to be absolutely horrifying. I say, let yourself be more often both amused and horrified.

6 comments

I am probably confused about this, but it seems to me like you made a switch between "a system cannot prove its own consistency" and "a system could prove its own consistency, but we shouldn't trust it anyway for outside-of-argument reasons".

You are correct. Maybe I should have made that clearer.

My interpretation of the impossibility is that the formal system is self-aware enough to recognize that no one would believe it anyway (it can make a model of itself, and recognizes that it wouldn't even believe it if it claimed to be consistent).

By the same logic, shouldn't we distrust it even if it proves "2 + 2 = 4"? I mean, its sanity is already suspect, and we know that insane systems could have a wrong opinion on all kinds of math problems, so there is no reason to trust this specific proof.

And I feel pretty sure that Gödel didn't have "also, no formalized system can prove that 2+2=4" in mind.

Related question: what about systems proving other systems to be correct? Is that simply an equivalent of saying "I 100% believe this other guy to be sane (although ignorant about some facts of the world that are known to me, such as his sanity)"? Which for the external observer would simply mean "if A is sane, B is sane, too".

But we can totally prove it to be consistent, though, from the outside. Its sanity isn't necessarily suspect, only its own claim of sanity.

If someone tells you something, you don't take it at face value, you first verify that the thought process used to generate it was reliable.

I thought Gödel's incompleteness theorem was about statements that are "beyond truth and untruth" such as:

This statement is false.

or its more sophisticated non-self-referential cousin:

Let S be "Let S be *; the statement you get when in S you replace the asterisk by another copy of S in quote marks is false."; the statement you get when in S you replace the asterisk by another copy of S in quote marks is false.

in the sense that there are perfectly good reasons why a system could not classify any of them as true or false without running into contradiction. (Let's call them "trick statements". Furthermore, it is impossible to distinguish algorithmically which statements are trick statements, so even allowing the system to answer "no comment" on trick statements would still not solve the problem.)
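(The "non-self-referential cousin" above is Quine's paradox, and the substitution it describes can actually be carried out mechanically. A small Python sketch, showing that the statement it refers to is the sentence itself:)

```python
# Quine's paradox: the sentence never names itself, yet the substitution
# it describes reproduces it exactly.
S = ('Let S be *; the statement you get when in S you replace the '
     'asterisk by another copy of S in quote marks is false.')

# The full sentence: replace the asterisk in S with S in quote marks.
sentence = S.replace('*', '"' + S + '"')

# The statement the sentence describes is built by the very same
# replacement -- so the sentence asserts its own falsity.
referent = S.replace('*', '"' + S + '"')
assert sentence == referent
print(sentence)
```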

Maybe there is a relation between this and what you wrote, but I don't intuitively see it.

There are two theorems. You're correct that the first theorem (that there is an unprovable truth) is generally proved by constructing a sort of liar's paradox, and then the second is proved by repeating the proof of the first internally.

However, I chose to take the reverse route for a more epistemological flavour.