I'm not sure if this post is very on-topic for LW, but we have many folks who understand Haskell and many folks who are interested in Löb's theorem (see e.g. Eliezer's picture proof), so I thought why not post it here? If no one likes it, I can always just move it to my own blog.

A few days ago I stumbled across a post by Dan Piponi, claiming to show a Haskell implementation of something similar to Löb's theorem. Unfortunately his code had a couple of flaws. It was circular and relied on Haskell's laziness, and it used an assumption that doesn't actually hold in logic (see the second comment by Ashley Yakeley there). So I started to wonder, what would it take to code up an actual proof? Wikipedia spells out the steps very nicely, so it seemed to be just a matter of programming.

Well, it turned out to be harder than I thought.

One problem is that Haskell has no type-level lambdas, which are the most obvious way (by Curry-Howard) to represent formulas with propositional variables. These are very useful for proving stuff in general, and Löb's theorem uses them to build fixpoints by the diagonal lemma.
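
To make that concrete, here's a tiny illustration (the names are my own; Provable and P stand in for the types declared in the code below):

data Provable a   -- stand-in for the Provable type declared below
data P            -- stand-in for the sentence P declared below

-- The one-variable formula used by the diagonal lemma in this proof is
-- roughly "phi(x) = Provable x -> P". A named type synonym is as close as
-- Haskell gets to the type-level lambda "\x -> Provable x -> P":
type Phi x = Provable x -> P

-- But a type synonym must always be fully applied, so Phi cannot be passed,
-- unapplied, to a generic fixpoint former; that is why the code below
-- postulates the one particular fixpoint Psi instead.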

The other problem is that Haskell is Turing complete, which means it can't really be used for proof checking, because a non-terminating program can be viewed as the proof of any sentence. Several people have told me that Agda or Idris might be better choices in this regard. Ultimately I decided to use Haskell after all, because that way the post will be understandable to a wider audience. It's easy enough to convince yourself by looking at the code that it is in fact total, and to transliterate it into a total language if needed. (That way you can also use the nice type-level lambdas and fixpoints, instead of just postulating one particular fixpoint as I did in Haskell.)
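
To see the problem in miniature (this snippet is only an illustration, not part of the proof): a looping definition type-checks at any type, so under Curry-Howard it "proves" even a proposition that should have no proofs.

-- An empty type: under Curry-Howard, a proposition with no proofs.
data False

-- This compiles anyway, because non-termination inhabits every Haskell
-- type; a real proof checker has to reject such definitions, which is
-- why totality matters.
bogus :: False
bogus = bogus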

But the biggest problem for me was that the Web didn't seem to have any good explanations for the thing I wanted to do! At first it seems like modal proofs and Haskell-like languages should be a match made in heaven, but in reality it's full of subtle issues that no one has written down, as far as I know. So I'd like this post to serve as a reference, an example approach that avoids all difficulties and just works.

LW user lmm has helped me a lot with understanding the issues involved, and wrote a candidate implementation in Scala. The good folks on /r/haskell were also very helpful, especially Samuel Gélineau who suggested a nice partial implementation in Agda, which I then converted into the Haskell version below.

To play with it online, you can copy the whole bunch of code, then go to CompileOnline and paste it in the edit box on the left, replacing what's already there. Then click "Compile & Execute" in the top left. If it compiles without errors, that means everything is right with the world, so you can change something and try again. (I hate people who write about programming and don't make it easy to try out their code!) Here we go:

main = return ()
-- Assumptions
data Theorem a
logic1 = undefined :: Theorem (a -> b) -> Theorem a -> Theorem b
logic2 = undefined :: Theorem (a -> b) -> Theorem (b -> c) -> Theorem (a -> c)
logic3 = undefined :: Theorem (a -> b -> c) -> Theorem (a -> b) -> Theorem (a -> c)
data Provable a
rule1 = undefined :: Theorem a -> Theorem (Provable a)
rule2 = undefined :: Theorem (Provable a -> Provable (Provable a))
rule3 = undefined :: Theorem (Provable (a -> b) -> Provable a -> Provable b)
data P
premise = undefined :: Theorem (Provable P -> P)
data Psi
psi1 = undefined :: Theorem (Psi -> (Provable Psi -> P))
psi2 = undefined :: Theorem ((Provable Psi -> P) -> Psi)
-- Proof
step3 :: Theorem (Psi -> Provable Psi -> P)
step3 = psi1

step4 :: Theorem (Provable (Psi -> Provable Psi -> P))
step4 = rule1 step3

step5 :: Theorem (Provable Psi -> Provable (Provable Psi -> P))
step5 = logic1 rule3 step4

step6 :: Theorem (Provable (Provable Psi -> P) -> Provable (Provable Psi) -> Provable P)
step6 = rule3

step7 :: Theorem (Provable Psi -> Provable (Provable Psi) -> Provable P)
step7 = logic2 step5 step6

step8 :: Theorem (Provable Psi -> Provable (Provable Psi))
step8 = rule2

step9 :: Theorem (Provable Psi -> Provable P)
step9 = logic3 step7 step8

step10 :: Theorem (Provable Psi -> P)
step10 = logic2 step9 premise

step11 :: Theorem ((Provable Psi -> P) -> Psi)
step11 = psi2

step12 :: Theorem Psi
step12 = logic1 step11 step10

step13 :: Theorem (Provable Psi)
step13 = rule1 step12

step14 :: Theorem P
step14 = logic1 step10 step13
-- All the steps squished together
lemma :: Theorem (Provable Psi -> P)
lemma = logic2 (logic3 (logic2 (logic1 rule3 (rule1 psi1)) rule3) rule2) premise

theorem :: Theorem P
theorem = logic1 lemma (rule1 (logic1 psi2 lemma))

To make sense of the code, you should interpret the type constructor Theorem as the symbol ⊢ from the Wikipedia proof, and Provable as the symbol ☐. All the assumptions have value "undefined" because we don't care about their computational content, only their types. The assumptions logic1..3 give just enough propositional logic for the proof to work, while rule1..3 are direct translations of the three rules from Wikipedia. The assumptions psi1 and psi2 describe the specific fixpoint used in the proof, because adding general fixpoint machinery would make the code much more complicated. The types P and Psi, of course, correspond to sentences P and Ψ, and "premise" is the premise of the whole theorem, that is, ⊢(☐P→P). The conclusion ⊢P can be seen in the type of step14.
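
For instance, step4 = rule1 step3 is just rule 1 instantiated at the sentence Ψ→(□Ψ→P). Spelled out with the definitions above (step4' is only a restatement for illustration):

-- rule1 at a = (Psi -> Provable Psi -> P) has the type
--   Theorem (Psi -> Provable Psi -> P)
--     -> Theorem (Provable (Psi -> Provable Psi -> P))
-- so from ⊢(Ψ → (□Ψ → P)) we conclude ⊢ □(Ψ → (□Ψ → P)):
step4' :: Theorem (Provable (Psi -> Provable Psi -> P))
step4' = rule1 step3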

As for the "squished" version, I guess I wrote it just to satisfy my refactoring urge. I don't recommend anyone to try reading that, except maybe to marvel at the complexity :-)

EDIT: in addition to the previous Reddit thread, there's now a new Reddit thread about this post.


Hmm... so, interestingly enough, the proof is actually much simpler than both yours and the one on Wikipedia when written up in Agda.

Interestingly enough though, just the notion of provability being equivalent to proof doesn't actually lead to an inconsistency in Agda - the positivity checker prevents the declaration of a Löb sentence. But, if you admit the sentence, everything breaks in about the way you'd expect.

I've uploaded it as a gist here

EDIT 1: So I seem to have misunderstood Löb's theorem. Going to redo it and update the gist when I think it makes more sense.

EDIT 2: Updated the gist, should actually contain a valid proof of Löb's theorem now.

When you make a data declaration like this, you are essentially postulating (A → □ A) and (□ A → A), for any formula A. This essentially removes the distinction between A (the formula A is "true") and □ A (the formula A is "provable"), meaning you are not working inside the modal logic you think you are working in. We made the same mistake in the /r/haskell thread at first.

It's an easy mistake to make, because the first rule of inference for provability logic is

⊢ A
------
⊢ □ A

So it looks like (A → □ A) should be a theorem, but notice that the left-hand side of the turnstile is empty. This means that the rule is only valid when there are no assumptions in context; or equivalently in a programming language, when there are no variables in scope. Since Agda doesn't know how to talk about which Agda variables are in scope, we cannot implement this rule as an Agda function, which would be available whether the scope is empty or not.

A few examples would probably help. A valid use of inference rule 1 would look like this:

----- variable rule
A ⊢ A
--------- deduction rule
⊢ (A → A)
----------- inference rule 1
⊢ □ (A → A)

Here, we have used inference rule 1 and a few other common rules to prove that the trivial implication (A → A) is "provable". By comparison, here is what happens when we try to use the same technique to prove that all "true" statements are "provable", i.e. (A → □ A). (I put "true" and "provable" in quotes in order to distinguish "provable according to the map" from provable in the territory. Provability logic is a map we use to understand provability.)

----- variable rule
A ⊢ A
------- inference rule 1 (invalid, the context is not empty)
A ⊢ □ A
--------- deduction rule
⊢ A → □ A

As you can see, the proof is invalid. And that's a good thing, too, because if we could prove that all "true" statements were "provable", then our provability logic would have allowed us to prove something which we know to be false, so the logic would have been a poor model of real-world provability.

Anyway, the conclusion of all this is that we should not represent inference rule 1 by an Agda function from A to □ A, but by an object of type "Theorem (A ⇾ Provable A)", where "⇾" is a syntactic construct representing implication, not an Agda function.

I'm not really seeing the importance of the separate construct Theorem (A → B) vs (A → B). It seems to get manipulated in the exact same way in the code above. Is there some extra capability of functions that Theorem + postulates doesn't include?

Also, I think you're misunderstanding what a function A → B means in Agda. A → □ A doesn't mean "all true statements are provable", it means "all statements we can construct a proof term for are provable" - you don't get an element of type A to apply that to without a proof of A, so it's actually a pretty trivial implication. I'm tempted to say that the notion of provability is actually wholly unnecessary here because of the Curry-Howard Correspondence, and that the real issue is the construction of the fixpoint.

A → □ A doesn't mean "all true statements are provable" [...]

I agree! You are right, the formula "A → □ A" does not mean that all true statements are provable. It means that all "true" statements are "provable" :)

Discussing the math of provability is tricky, because there are many notions of truth and many notions of provability. There are true-in-the-real-world and provable-in-the-real-world, the notions we are trying to model. True-in-the-real-world does not imply provable-in-the-real-world, so ideally, we would like a model in which there are no rules to infer provable-in-the-model from true-in-the-model. Provability logic provides such a model: it uses "A" for true-in-the-model, and "□ A" for provable-in-the-model.

One extra level of complexity is that "provable-in-the-model" is not the same as "provable-in-the-real-world by following the rules of the model", which I'll abbreviate as "derivable", or "PL-derivable" when the model is Provability logic. The reason the two are different is that "□ A" only represents provability; the actual rules of the model don't care what meaning we assign to the symbols. If the rules were poorly chosen, it could very well be that a formula "□ A" was PL-derivable but that the formula "A" wasn't.

A → □ A [...] means "all statements we can construct a proof term for are provable"

Yes and no. Yes, at runtime, all the values we will actually receive will have been constructed by our callers. No, Agda implications do not always mean that if the left-hand side is constructible, so is the right-hand side. Implications of the form (A → ⊥), for example, mean that the left-hand side is not constructible. In that case, we never receive any value from our callers, and we implement (A → ⊥) by writing a constructive proof that all possible inputs are absurd.

But even if we ignore this corner case, it would only be justified to conclude that receiving an A as input implies that an Agda proof for A exists. Löb's theorem is not about Agda, but about a specific model of provability encoded by the rules of provability logic. If your proof uses (A → □ A) as an assumption (as opposed to ⊢ A → ⊢ □ A), then your proof is not a proof of Löb's theorem, because that theorem makes no such assumption.
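
In the Haskell encoding from the post, the two shapes look like this (a rough sketch; Theorem and Provable are repeated so the snippet stands alone, and necessitation is just another name for the post's rule1):

data Theorem a    -- ⊢ A
data Provable a   -- □ A

-- Allowed: the meta-level rule "from ⊢A infer ⊢□A". It can only be applied
-- to a finished, closed Theorem value, mirroring the empty context in the
-- inference rule.
necessitation :: Theorem a -> Theorem (Provable a)
necessitation = undefined

-- Not allowed: a host-language function from A itself to □A, or its
-- internalized form ⊢(A → □A). Neither holds in provability logic, so
-- they stay commented out:
-- badEv    :: a -> Provable a
-- badAxiom :: Theorem (a -> Provable a)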

EDIT 2: Updated the gist, should actually contain a valid proof of Löb's theorem now

Your latest version at the time of writing is revision 6, which uses a postulate

postulate
  ev : ∀ {A} → A → □ A

instead of the data declaration

data □_ (A : Set β) : Set (suc β) where
  ev : A → □ A

That is, you have removed the (□ A → A) direction, but kept the (A → □ A) direction. That's a bit better, but you're still using the forbidden assumption.

the real issue is the construction of the fixpoint

I agree, that part does look hard to implement.

Is there some extra capability of functions that Theorem + postulates doesn't include?

Oh yes, tons. Try constructing a value with type Theorem Integer :-) The world under Theorem is much more restricted.
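
Here's a small illustration of what I mean (reusing the post's empty Theorem declaration; the names are made up):

data Theorem a   -- as in the post: no constructors at all

plainValue :: Integer
plainValue = 42        -- ordinary Haskell values are easy to come by

-- But Theorem has no constructor, so the only Theorem values are the
-- postulated ones (logic1..3, rule1..3, psi1, psi2, premise) and whatever
-- you can combine them into -- and none of them mention Integer:
-- noSuchProof :: Theorem Integer
-- noSuchProof = ?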

A→□A doesn't mean "all true statements are provable", it means "all statements we can construct a proof term for are provable"

Well, not quite. The □ operator in the original proof doesn't mean provability from the premises of the theorem, it means provability in bare-bones PA, starting from nothing. We want to only use assumptions that are true under that interpretation of □, which means they must hold in provability logic (GL). And the assumption A→□A doesn't hold in GL.

The assumptions used in the original proof are ⊢A→⊢□A and ⊢(□A→□□A), both of which hold in GL. I think the simplest way to encode these is to use separate constructs for ⊢ and □, as I did in the post.

I don't completely understand how your code corresponds to a proof of Löb's theorem. Are you assuming that ⊢(☐A→A) holds for all sentences A? In Löb's theorem it's assumed to hold only for the sentence P.

Nice!

I had the impression that in some sense the existence of the fixpoint was the core of the theorem; unless I've misunderstood, the proof here simply assumes it's given. Is it in fact straightforward (albeit inconvenient) to add the necessary fixpoint-making machinery to the proof?

You can choose how deep you want to go:

1) Assume the existence of one particular fixpoint. That's what I've done in the post.

2) Assume the existence of fixpoints for all formulas with one propositional variable, and get (1) as an immediate corollary. That's not straightforward in Haskell because there are no type-level lambdas, but it's easy in Agda. (A rough Haskell sketch of what such a postulate could look like appears after this list.)

3) Actually spell out a proof of (2), which is known as the diagonal lemma. You could probably do that in Agda, but I haven't tried and haven't seen anyone try. Also it's not obvious which assumptions to use. Ideally we'd want a full account of Gödel numberings, but that would be a lot of work.
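
For the curious, here is a rough sketch of what postulating (2) could look like in the Haskell encoding, as promised above. The names Fix, fix1 and fix2 are mine, and Theorem, Provable and P are repeated so the snippet stands alone:

{-# LANGUAGE KindSignatures #-}

data Theorem a
data Provable a
data P

-- A postulated fixpoint for every one-variable formula in which the
-- variable occurs under the box, represented by a type constructor f:
-- Fix f is a sentence Psi with  ⊢ Psi ↔ f (Provable Psi).
data Fix (f :: * -> *)

fix1 :: Theorem (Fix f -> f (Provable (Fix f)))
fix1 = undefined

fix2 :: Theorem (f (Provable (Fix f)) -> Fix f)
fix2 = undefined

-- Taking f to be "\y -> y -> P" would recover exactly psi1 and psi2 from
-- the post (with Psi = Fix f), but that is where the missing type-level
-- lambdas bite: Haskell has no way to pass that formula to Fix unapplied,
-- short of wrapping it in a newtype, so the post postulates Psi directly.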