> Screw set theory. I live in the physical universe where when you run a Turing machine, and keep watching forever in the physical universe, you never experience a time where that Turing machine outputs a proof of the inconsistency of Peano Arithmetic. Furthermore, I live in a universe where space is actually composed of a real field and space is actually infinitely divisible and contains all the points between A and B, rather than space only containing a denumerable number of points whose existence can be proven from the first-order axiomatization of the real numbers. So to talk about physics - forget about mathematics - I've got to use second-order logic.

Supposing that physics is computable, this isn't necessarily true. For example, if physics is computable in type theory (or any language with non-standard models), then the non-standardness propagates not just to your physics, but to the individuals living in that physics who are trying to model it. Consider the non-standard model of the calculus of inductive constructions (approximately, Martin-Löf type theory) in which types are pointed sets, and functions are functions between sets which preserve the distinguished point. Then any representation of a person in space-time that you can construct in this model (of the computation running physics) will have a distinguished point. Since any function to or from such a person-located-in-space-time must preserve this point, I'm pretty sure no information can propagate from the person at the non-standard time to the person at any standard time. So even if you expect to observe the Turing machine halting whenever you find yourself in a non-standard model, you should also expect to be unable to tell that the model is non-standard except when you're actively noticing its non-standardness. Furthermore, since functions must preserve the point, you should expect (provably) that for all times T, the Turing machine has not halted at time T (because the function mapping times T to statements must map the non-standard point of T to the non-standard (and trivial) statement).
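
To make the pointed-set model concrete, here is a minimal Lean 4 sketch; the names `Pointed`, `PointedMap`, and `maps_pt_to_pt` are my own illustrative definitions, not standard library ones. The point is that every map is forced to send the distinguished point to the distinguished point, so nothing observable can leak out of it:

```lean
-- In the pointed-set model, each "type" is interpreted as a set with a
-- distinguished point, and each "function" as a point-preserving map.
structure Pointed where
  carrier : Type
  pt : carrier

structure PointedMap (A B : Pointed) where
  toFun : A.carrier → B.carrier
  preserves_pt : toFun A.pt = B.pt

-- Any map must send the distinguished ("non-standard") point to the
-- distinguished point, so no information propagates from it to any
-- other point of the codomain.
theorem maps_pt_to_pt {A B : Pointed} (f : PointedMap A B) :
    f.toFun A.pt = B.pt :=
  f.preserves_pt
```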

For a more eloquent explanation of how non-standard models are unavoidable, and often useful, see The Elements of an Inductive Type.

The univalence axiom has nothing to do with pinning down models (except perhaps that if you are talking about models internally, it says that you're pinning them down up to isomorphism rather than up to some notion of syntactic equality). Non-standard models are essential for getting an intuition about what is and isn't provable; most of the arguments that I've seen for unexpectedly interesting terms involve picking a non-standard model, finding a point in that model with unexpected properties (e.g., something that is false in the standard model), and then constructing that point. See, for example, this post, which constructs a nonstandard point on the circle, a point for which you can't exhibit a path to the basepoint of the circle. More generally, see The Elements of an Inductive Type for a good explanation of why non-standard models are inescapable.
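
For reference, the univalence axiom says that, for types $A$ and $B$ in a universe $\mathcal{U}$, the canonical map from identifications of types to equivalences of types is itself an equivalence; this is the standard formulation, stated here for context rather than taken from the comment above:

```latex
% Univalence: identifications between types correspond to equivalences,
% so internal statements about types are invariant under isomorphism.
(A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B)
```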

"An algebraic proof is a series of steps that you can tell are locally licensed because they maintain balanced weights", but it seems like an obvious direct specialization of "syntactic implication should preserve semantic implication"

Eliezer, I very much like your answer to the question of what makes a given proof valid, but I think your explanation of what proofs are is severely lacking. To quote Andrej Bauer in his answer to the question "Is rigour just a ritual that most mathematicians wish to get rid of if they could?", treating proofs as syntactic objects "puts logic where analysis used to be when mathematicians thought of functions as symbolic expressions, probably sometime before the 19th century." If the only important thing about the steps of a proof is that they preserve balanced weights (or semantic implication), then it shouldn't be important which steps you take, only that you take valid steps. Consequently, it should be either nonsensical or irrelevant to ask whether two proofs are the same; under this view, the only mathematically important property of a proof is its validity.

But this isn't the case. People care about whether or not proofs are new and original, which methods they use, and whether or not they help give a person intuition. Furthermore, it is common to prove a theorem and then refer to a constant constructed in that proof. (If the only important property of a proof were validity, this would be an inadmissible practice.) Finally, as we have only recently discovered, it is fruitful to interpret proofs of equality as paths in topological spaces! If a proof were just a series of steps which preserve semantic implication, we should be wary of models which only permit continuous preservation of semantic implication, but this interpretation is precisely the one which leads to homotopy type theory.
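
As a small illustration of the point about referring to constants constructed in proofs, here is a hedged Lean 4 sketch; the names `someEven` and `constantFromProof` are mine, purely for illustration. The proof of the existential carries a witness, and later definitions refer to that witness, so which proof you gave matters, not merely that it was valid:

```lean
-- A proof of "there is an even natural number" that carries its witness.
def someEven : { n : Nat // n % 2 = 0 } := ⟨4, rfl⟩

-- A later definition refers to the constant constructed in that proof;
-- a different proof (say, with witness 6) would yield a different constant.
def constantFromProof : Nat := someEven.val

#eval constantFromProof  -- 4
```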

So while I like your answer to the question "What exactly is the difference between a valid proof and an invalid one?", I think proof-relevant mathematics has a much better answer to the question you claimed to answer, "What exactly is a proof?"

(By the way, I highly recommend Andrej Bauer's answer to anyone interested in eloquent prose about why proof-relevant mathematics and homotopy type theory are interesting.)