Notice that the argument still works just as well if we could compute some g which had the property that g(n) < f(n) for only finitely many n, so this proves that in fact f must grow faster than any computable function!
Not quite. It is true that f must grow faster than any computable function, but what you've shown is just that there's no computable function growing at least as fast as f. It's possible for functions to have incomparable growth rates, by taking turns for which one is bigger infinitely many times.
In first-order logic (or any other complete logic) we have some guarantee that "every true statement is provable"
Not exactly. Every statement that's true of every model of the axioms is provable from the axioms. If you have a particular model in mind for your axioms, then there can be statements that are true in that model but not provable from the axioms. The trouble is just that the axioms can't fully specify the model.
Not quite. It is true that f must grow faster than any computable function, but what you've shown is just that there's no computable function growing at least as fast as f. It's possible for functions to have incomparable growth rates, by taking turns for which one is bigger infinitely many times.
True. I overlooked this for some reason, but it's not hard to give an argument for why f must grow faster than any computable function, so it's not a major problem. I've edited the post to fix this.
Not exactly. Every statement that's true of every model of the axioms is provable from the axioms. If you have a particular model in mind for your axioms, then there can be statements that are true in that model but not provable from the axioms. The trouble is just that the axioms can't fully specify the model.
That isn't different from what I said. Keep in mind that I tried to keep the post as simple as possible, and if I had to define the concept of a model for people to make sense of this statement, it wouldn't fit well with that approach.
I think Rice's theorem would also be relevant to mention in these notes. It states that no nontrivial extensional property of programs is decidable. Here "extensional" means purely based on the input/output relation, and not dependent on runtime, code, or similar.
It’s incredibly disconcerting for so many brilliant thinkers to accept and repeat the circular logic about the paradoxical anti-halt machine “g” — if you make some g which contains f, then I can make f such that it detects this and halts. If you make some g which invokes f, then I can make f which detects this and halts. By definition of the problem, “f” is the outer main function and paradoxical “g” is trapped in the closure of f which would mean f can control the process, not g. The whole basis for both Gödel Incompleteness Theorems and the Halting Problem is based on this idea we can make some paradox machine that does the opposite of whatever we think, without continuing to consider if such recursive properties might be detectable.
A more difficult case would be the machines which loop forever and never hit a prior (state, tape) tuple, but even in this case, I truly believe the difference is ultimately decidable because an n-state Turing machine which does eventually halt after many steps would necessarily do so by some form of countdown or count up to a limit. This periodicity would presumably be apparent in the frequency domain of the state transition matrix. Also, we might try to connect the (state, tape) tuples (which are enumerable) to the complex plane, and then we could look them up on the Mandelbrot set to see if they converge or diverge. Perhaps we could pick some natural threshold of closeness to the edge of the set where we would categorize machines as being more difficult to analyze. Seems like it would be a natural connection…
(disclaimer, I’m a programmer, not a computational complexity research scientist in this area really, so I am likely wrong, I just think we ought to be wayyyy more skeptical about the bold claims of Gödel and Turing!)
WDYT?
Perhaps the most difficult problem in computer science is to gain some understanding of what a program does without having to run it. This might seem surprising to you. Why can't we simply look at the source code or some other description of the program and "see" what it does?
If so, then consider the program whose pseudocode is the following: pick some enumeration of all tuples (a,b,c,n) of natural numbers with a,b,c ≥ 1 and n ≥ 3. Loop over this enumeration and at each step check if a^n + b^n = c^n or not. If this condition holds at any step then exit the loop, and if it doesn't then continue. This is a simple program and we might ask a simple question about it: does this program halt or not? In other words, if we run this program on a computer without any time or memory constraints, will the program run forever or will it stop at some point?
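For concreteness, here is one way the pseudocode above could be turned into an actual program (a Python sketch; the particular enumeration of the tuples is just one arbitrary choice):

```python
from itertools import count

def fermat_search():
    # Enumerate all tuples (a, b, c, n) with a, b, c >= 1 and n >= 3 by looping
    # over the total size s = a + b + c + n; every tuple shows up for some s.
    for s in count(6):  # the smallest possible size is 1 + 1 + 1 + 3 = 6
        for a in range(1, s):
            for b in range(1, s):
                for c in range(1, s):
                    n = s - a - b - c
                    if n < 3:
                        continue
                    if a**n + b**n == c**n:
                        return (a, b, c, n)  # exit the loop: a counterexample was found

fermat_search()  # halts if and only if Fermat's last theorem is false
```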
You might've noticed that asking this question about the behavior of this simple program is actually equivalent to Fermat's last theorem, which used to be one of the most famous open problems in mathematics before it was finally proven by Andrew Wiles with an extremely complicated argument. The question might therefore appear simple, but in fact even for this simple program it's very difficult to answer.
Asking whether a program with a given source code will halt or not is the subject of the famous halting problem in computer science. The halting problem is often formulated in terms of Turing machines, which are rudimentary computers that nevertheless contain enough complexity to simulate any other computer, but the substance is unchanged if we instead ask the question about programs written in any other programming language (throughout this notebook I'll be assuming these languages are Turing complete, which is a technical condition ensuring they are "expressive enough" for some of the constructions in the notebook). The above example might've motivated why the problem is indeed difficult, and in fact we have a proof that in general it's impossible to solve. The proof is an example of a diagonal argument and goes like this:
Suppose that given a source code S and an input I, we had a program S_h to compute the function h(S,I) which outputs 1 if the program S halts when run on input I and outputs 0 otherwise. We could then define a new program g taking S as an input: if h(S,S)=0 then g(S) returns 0 and otherwise g(S) loops forever. If h were computable then g would be as well, so there would be a program with some source code S_g which computes g. Now we have a contradiction: if g(S_g)=0 then this means h(S_g,S_g)=0 and therefore the program S_g doesn't halt when run on itself as an input, contradicting the fact that g does halt on the input S_g; and if g(S_g) instead loops forever then h(S_g,S_g)=1, so that S_g halts when run on input S_g. Regardless of which possibility is realized, g and the program S_g disagree on the input S_g, even though S_g is supposed to compute g. By contradiction our initial assumption must have been incorrect: there is simply no program S_g which computes the function g, and so no program S_h which computes h either.
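Here is the same argument as a short sketch in code; h below is the hypothetical halting decider we're assuming exists for the sake of contradiction, so nothing here can actually be implemented:

```python
def h(source, input_data):
    """Hypothetical: returns 1 if the program with this source code halts on
    input_data and 0 otherwise. The argument shows no such program can exist."""
    ...

def g(source):
    if h(source, source) == 0:
        return 0      # the program `source` doesn't halt on itself, so g halts
    while True:
        pass          # otherwise g loops forever

# If g were computable it would have some source code S_g, and then
#   g(S_g) halts  <=>  h(S_g, S_g) == 0  <=>  S_g (i.e. g itself) doesn't halt on S_g,
# which is a contradiction either way.
```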
Busy beavers
There is an interesting implication of the unsolvability of the halting problem. Suppose that given some programming language, we consider all valid programs in that language of a given length N which take no input. There are only finitely many such programs, and some of them halt while some of them don't. It's a trivial consequence of the finiteness of the number of these programs that among the ones that do halt, there will be a program that maximizes some metric of how long the program runs for: basic instructions executed on the processor, the number of times the program makes changes to memory, the maximal number of consecutive memory cells which have their values changed from the original blank one, et cetera. We can then define a function f(N) which takes N to this maximal number. Different choices of criteria and programming languages give different functions f, but the spirit of the construction is the same for all of them. Programs which are defined in this way are called busy beavers: they do the "most work possible" for a program of their length given the constraint that they must not run forever.
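As a sketch of the definition (not something that can actually be computed, since it relies on exactly the halting oracle we've just ruled out; all_programs, halts and steps_taken are hypothetical helpers):

```python
def f(N):
    # Hypothetical definition only: `halts` would have to solve the halting
    # problem, so this cannot actually be turned into a working program.
    best = 0
    for program in all_programs(N):        # finitely many programs of length N
        if halts(program):                 # keep only the ones that terminate
            best = max(best, steps_taken(program))  # or any other metric of "work done"
    return best
```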
Now, suppose that we work with any programming language you like and we fix some compiler for it and some processor architecture which is going to run the compiled program, and we define f(N) as the maximal number of steps a program of length N can take before halting. This is called a busy beaver function, and it's an immediate consequence of the unsolvability of the halting problem that it's uncomputable. Indeed, if we had a program to compute f, then given any program-input pair (S,I) we could decide if it halts or not by simply defining a new program T which feeds S with the input I, and then we could simulate T for f(length(T)) steps. If T halts at all then it halts in at most f(length(T)) steps, so if it hasn't halted by then we know it will never halt. We know that in fact there's no program which solves the halting problem, so this argument implies there's also no program which computes f. Notice that the argument still works just as well if we could compute some g which had the property that g(n)<f(n) for only finitely many n, so this proves that there is no computable function which grows faster than f.
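Here's a sketch of that argument in code, where compose, simulate_for and f are all hypothetical helpers assumed to exist for the sake of the argument:

```python
def halts(S, I):
    # Hypothetical: assumes a program computing the busy beaver-style function f.
    T = compose(S, I)              # a program that runs S on the fixed input I
    bound = f(len(T))              # no halting program of this length runs longer than this
    return simulate_for(T, bound)  # True iff T halts within `bound` steps
```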
In fact we can also show that f grows faster than any computable function as follows: if g is any computable function and a program computing it has length c, then we can define a program of length c + O(log N) which executes an empty loop g(N)^2 times (how?). This implies f(c + O(log N)) ≥ g(N)^2, and since for large enough N we have c + O(log N) ≤ N and f is nondecreasing (we can always pad a shorter program without changing its behavior), this gives f(N) ≥ g(N)^2, which is enough to show that f grows faster than g.
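A sketch of the kind of program the argument has in mind (the placeholder g here just stands in for an arbitrary computable function; only the hard-coded constant contributes the O(log N) part of the length):

```python
N = 1000                     # hard-coded constant: only O(log N) characters of source code

def g(n):                    # stands in for an arbitrary computable function,
    return n * n             # whose source code has some fixed length c

for _ in range(g(N) ** 2):   # an empty loop executed g(N)^2 times
    pass                     # so this program runs for at least g(N)^2 steps before halting
```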
These conclusions hold in general for all busy beaver functions, but in practice we want to eliminate the ambiguities in the definition coming from all the different choices we have to make, so we often define the busy beaver function S(N) or BB(N) by maximizing the number of shifts/steps taken by a two-symbol, N-state Turing machine which has an initially empty input tape. If you don't know what a Turing machine is, the specific definition is not all that important; what's important is that it's equivalent to any other "powerful enough computer" and we take it as a standard to make sure we're all talking about the same function BB.
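To make this concrete, here is a minimal simulator for two-symbol Turing machines that counts shifts, together with the known two-state champion machine, which takes the 6 steps corresponding to the value BB(2) = 6 mentioned below:

```python
def run(machine, max_steps=10**6):
    """Run a two-symbol Turing machine from state 'A' on a blank tape and return
    the number of shifts it makes before halting (None if it doesn't halt in time)."""
    tape, head, state, steps = {}, 0, "A", 0
    while state != "H":                   # 'H' is the halting state
        if steps >= max_steps:
            return None
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return steps

# The 2-state, 2-symbol busy beaver champion: it halts after exactly 6 shifts.
champion = {
    ("A", 0): (1, "R", "B"),
    ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"),
    ("B", 1): (1, "R", "H"),
}
print(run(champion))  # prints 6, i.e. BB(2) = 6
```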
There's something that might be even more disturbing about BB. Notice that for any well-defined quantity, we have a trivial (though very inefficient) procedure to try to compute it: we can simply pick some system of axioms, such as ZFC set theory, and loop over all proof candidates in this theory. If one such proof is valid and proves that BB(N)=X for X a natural number, say, then we halt and return X; otherwise we loop forever. We know that in fact BB is uncomputable, so this algorithm must fail, and (assuming ZFC doesn't prove false statements of this form) the only way it can fail is if there are inputs N for which ZFC set theory cannot prove any statement of the form BB(N)=X for X a natural number. In this case, we would say that ZFC cannot decide the value of BB(N). In first-order logic (or any other complete logic) we have some guarantee that "every true statement is provable" (more precisely, every statement that's true in all models of the axioms is provable), so the fact that no statement of this form can be proven means there's an actual ambiguity about the value of BB(N) across the models of the theory, whereas in logics which are not complete the theory is simply not strong enough to prove all of its implications. In either case our procedure to compute BB fails and we discover that in some sense these values are "too big" to be decided by an axiom system such as ZFC. Of course, the same idea would work for an arbitrary theory, not just ZFC.
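As a sketch, the trivial procedure looks like this, where all_proof_candidates and check_proof_of_BB_value are hypothetical helpers (the latter being a mechanical proof checker for ZFC):

```python
def compute_BB(N):
    # Enumerate every finite string and treat each one as a candidate proof.
    for candidate in all_proof_candidates():
        # Hypothetical checker: if `candidate` is a valid ZFC proof of a statement
        # of the form "BB(N) = X" for some natural number X, return that X.
        X = check_proof_of_BB_value(candidate, N)
        if X is not None:
            return X
    # If ZFC proves no statement of this form, the loop never terminates.
```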
We can then imagine that formal systems can be stratified by the largest value of the busy beaver function that they can decide. For example, we know that Peano arithmetic, which is a weaker system of axioms than ZFC set theory, can prove BB(2)=6, BB(3)=21 and BB(4)=107. We also know that at some point Peano arithmetic goes from being able to decide the values of BB to not being able to decide them. What is this threshold? In other words, what's the smallest N for which Peano arithmetic cannot decide the value of BB(N)? We don't know, but we know that for both ZFC and Peano arithmetic this N is at least 5 and at most 748 - it's been proven that if ZFC is in fact consistent (meaning that no contradiction can be derived from its axioms), it can't decide the value of BB(748). The idea of the proof is to simply construct a Turing machine that searches for an inconsistency in ZFC set theory and then make it as small as possible. By Gödel's second incompleteness theorem we know that if ZFC is consistent then it can't prove its own consistency, so it also can't prove that this Turing machine doesn't halt, and hence it can't decide the value of the busy beaver function at the number of states of this machine.
An interesting question is to compare Peano arithmetic to ZFC set theory through the lens of the busy beaver function. Theories which can decide larger values of the busy beaver function are stronger in how much they can prove about ordinary arithmetic, and we know that ZFC is in fact a stronger system of axioms than PA, so the question is how this is reflected in the values of BB they can decide. Unfortunately computing the exact threshold of decidability of the busy beaver function for either of these theories is likely an insurmountable challenge, but we can compare the best upper bounds we will be able to find. You can find the two questions about this here and here.
Bounded halting
Now that we've seen the general halting problem is impossible to solve, we can scale back our ambitions and ask for something weaker: instead of deciding whether a given program halts at all, what if we want to decide if it halts in a given number of steps?
This one is of course trivial to solve: we simply take the program and simulate it for the given number of steps to see if it halts or not. To make the question interesting we have to ask if there's a better algorithm than that, one that gives us an understanding of what the program is doing without us having to run the program. We can modify the problem like this to make it nontrivial: given a program of length N, decide whether there is some input on which it halts within N steps. Quantifying over all possible inputs doesn't actually make the problem hopeless to solve, since if the program halts in N steps on a given input it can only read O(N) bits of that input (meaning at most some constant independent of N times N), so with trial and error we could solve this problem in exponential time: we simply try all possible inputs of length O(N) and for each input we simulate the program for N steps to see if it halts or not. The exponential time is because there are 2^O(N) possible inputs of length O(N) - an exponential function of N. This is the bounded halting problem.
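A brute-force sketch of this exponential-time approach, where simulate_for is an assumed helper reporting whether a program halts on a given input within a given number of steps (and where, for simplicity, we only try inputs of length up to N rather than O(N)):

```python
from itertools import product

def bounded_halting_brute_force(program):
    """Does `program` halt within len(program) steps on some input?
    Exponential time: we try every input the program could possibly read."""
    N = len(program)
    for length in range(N + 1):
        for bits in product("01", repeat=length):
            if simulate_for(program, "".join(bits), N):  # assumed helper: halts within N steps?
                return True
    return False
```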
The question we could ask is whether the bounded halting problem can be solved efficiently or not. The bounded halting problem is in the complexity class NP: it's a problem that could be solved by a nondeterministic Turing machine in polynomial time. This is because if we're actually given a candidate input, we can check if the program runs for more than N steps on that input or not by simply simulating the program for N steps on that input, which takes only N steps: it's not just polynomial time, but in fact linear time in this case. The difficulty is that if we think of inputs as given to the program in binary, for example, then there are 2^O(N) possible inputs we have to check like this.
The famous P = NP problem of computer science is about problems of this nature: if we can check a candidate solution to a problem efficiently, in other words if the problem is in the complexity class NP, does that also mean we can solve the problem efficiently, in other words is the problem also in the complexity class P? If the answer is yes, then we can obviously solve the bounded halting problem efficiently.
It's also possible, though somewhat more difficult, to see that if we can solve the bounded halting problem in polynomial time then this would imply P = NP. To see this, imagine we have an arbitrary NP problem and a program M(input,solution) which verifies solutions to this problem for a given input in polynomial time, say bounded from above by a polynomial p(N) in the size N of the input. Now consider the program defined by S(solution)=0 if M(input,solution)=1 (in other words if the solution is valid for the given input) and S loops forever otherwise. S is a program of length polynomial in N and its runtime is bounded from above by p(N)+C for some constant C≥0. If the length of S is less than p(N)+C then we add some padding to increase its length without changing its functionality.
Now, if we have a polynomial time solution to the bounded halting problem, we can simply give S as an input to our solution and see if it halts within length(S) steps on some input or not. Since length(S) is polynomial in N this computation also takes polynomial time in N, and since the runtime of S on any input is at most p(N)+C ≤ length(S), S halts within length(S) steps on some input exactly when it halts at all, in other words exactly when the original problem has a solution. We've therefore constructed an algorithm for solving any NP problem in polynomial time given that we can solve the bounded halting problem. In this situation we would say that the bounded halting problem is NP-complete, which means it's among the hardest problems in the class NP: any other NP problem can be reduced to the bounded halting problem.
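Very roughly, the reduction might be sketched like this; M is the polynomial-time verifier for the NP problem, and we gloss over the details of how the verifier and the problem input get embedded into the source text of S:

```python
def make_S(M_source, problem_input, padding_length):
    """Build the source code of the program S described above: S reads a candidate
    solution, halts if the verifier accepts it, and loops forever otherwise.
    (Sketch only: we gloss over how the verifier's source is spliced in.)"""
    source = M_source + f"""
def S(solution):
    if M({problem_input!r}, solution) == 1:
        return 0          # a valid solution was given: halt
    while True:
        pass              # otherwise loop forever
"""
    return source + "#" * padding_length  # pad so that len(source) >= p(N) + C

# Feeding make_S(...) to a polynomial-time bounded halting solver would then tell us
# whether the original NP problem has a solution for this input.
```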
The P = NP problem is of course the most famous open problem in computer science and it's one of the Clay Institute's Millennium Prize Problems. This discussion likely clears up why that's the case: P = NP is the natural "bounded" analog of the halting problem, and a solution to it would imply that we can somehow efficiently gain understanding of what arbitrary programs are doing without having to simulate them on input data. This is most likely false, but if it were true it would revolutionize our understanding of how computer programs work. If the discussion so far has seemed too abstract, consider that finding short proofs of mathematical theorems, breaking most online cryptography and training neural networks efficiently are all problems in NP. If you could find a good enough solution to the bounded halting problem you could do all of these and more.
Richard Borcherds gives 10% odds that P = NP and Scott Aaronson gives 2% to 3% odds. The Metaculus community currently agrees with Aaronson and I also think that his odds are in the right ballpark. What do you think? You can see the community forecast on this question here.
Finally, there is a question about when P = NP is going to be settled. The problem was formally defined in 1971, so we could say that it has been open for around 50 years at this point. The inside view is that, as with the Riemann hypothesis and some of the other Millennium Prize Problems, there's no particular reason to believe we'll have a solution anytime soon. The difficulty is that P = NP is most likely false, and if it's false we have to prove a lower bound on the time complexity of all possible solutions to (say) the bounded halting problem. In general it's very difficult to prove nontrivial lower bounds on the time complexity of a problem and so there has been very little progress on actually disproving P = NP, and this state of affairs appears unlikely to change anytime soon. The community forecast for this is available here.
Conclusion
Variations on the halting problem are of great theoretical interest in computer science, and if we could solve them with good enough algorithms they would be of great practical interest as well. The basic question is, as stated at the beginning of the essay, whether we can gain some understanding of what computer programs are doing without having to simulate them. The original halting problem means we can't hope to do this in general, but even a little bit of understanding that's not sufficient to solve the halting problem could revolutionize the field of computer science.