Review

Today, in this sequence I'm making, I'll try to introduce some things I found interesting about computability and complexity theory, and in particular show how many problems can be solved by computers, in both a philosophical and practical sense. I will mostly not be reviewing the history of computability theory and complexity theory, except as a brief aside.

Along the way, I'll introduce some commentary of my own, so this will at least be somewhat interesting; it would be boring to only describe the subject without interacting with it.

I will mostly be using the terms computability and complexity theory for historical reasons, but note that these will not always correspond to decidable and practically solvable problems, and we must always pick a machine and constraints on that machine before we can talk about whether a problem is solvable at all.

Things I will cover include:

  1. Why the definition of decidable problems is machine- and constraint-dependent: why picking different machines will get you different results in what you can compute, and why picking different time and space requirements will affect the sets of things you can compute.

  2. Why Hilbert was right, in a manner of speaking, to hope for an algorithm that could decide at least major parts of mathematics, and why Church and Turing's thesis that the Turing Machine could capture every computer and algorithm was false.

  3. Why Cobham's thesis that the practical problems we can solve correspond to P in complexity theory in our specific universe is unlikely to be true, because of the Time Hierarchy Theorem as well as issues with the constant potentially being arbitrarily large.

  4. What philosophy can gain from understanding complexity and computability theory, including an examination of Aaronson's paper to point out where it goes right and wrong.

  5. The underappreciated number of things a Universal/Complete Turing Machine can simulate, and that it can actually emulate a halting oracle, even if it couldn't solve the halting problem on its own, and the implications of that observation. Equivalently, I'm trying to say that computability and simulatability are different concepts, and one can simulate something without being able to solve the computational problem it's doing.

  6. What problems we could solve over the very long future, and what we would need to be able to do or know in order to solve difficult problems like the halting problem or the TQBF problem.

  7. And much more!

What the Church-Turing Thesis will look like in the sequence

This has been a common question, and some people have disputed my picture of the Church-Turing Thesis because they disagreed with me about what the problem was, especially about the constraints and about whether it was a universal claim or an existential claim.

For the purposes of this sequence, I will define the claim by Church and Turing as follows, to translate what I think they wanted to conjecture. The Church-Turing Thesis/Conjecture is captured by point 1 below:

1. The Universal Turing Machine, or equivalently a human with unlimited time and memory, can compute all the functions that are calculable by effective methods, as defined below.

Effectively calculable functions have the following constraints/rules:

2.1 They must always give an answer in finite time, but the time bound can be arbitrarily large. Likewise, the space required to solve a problem may be unbounded across instances, but never infinite on any single instance, just arbitrarily large.

2.2 The human is assumed to have unlimited paper to act as a memory, and unlimited time to compute (They are immortal.)

2.3 The original condition has been replaced by the following condition from Rafael Harth (a toy sketch of such a procedure appears after these conditions):

In this setting, we call a problem decidable iff there exists a single, fixed procedure that can take arbitrary instances of this problem and always behaves like this:

If the correct answer is 'yes', it outputs 'yes'. If the correct answer is 'no', it outputs 'no'.

The original statement is included below for completeness:

Its instructions need only to be followed rigorously to succeed. In other words, it requires no ingenuity to succeed.[4]

2.4 This condition is quoted below:

It consists of a finite number of exact, finite instructions.

2.5 Arbitrary extensions to the UTM are valid as long as they obey the constraints above.

2.6 Alternatively, any algorithm or effective method as defined in this post is allowed to be used so long as it is simulatable by a UTM; even multiple UTMs are allowed to do the simulating.

Essentially, we can choose to use 2.5 or 2.6 as a condition, and it will be clear that adding either one causes a contradiction: it enlarges the set of functions we can compute beyond the computable functions, thus showing the thesis is false.

Critically, it needs to hold for every algorithm/machine following the constraints, and any failure breaks the thesis.
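To make condition 2.3 concrete, here is a toy sketch (my own illustration, not part of Harth's definition), using "is n a power of two" as a stand-in decidable problem:

```python
def decide_power_of_two(n: int) -> str:
    """A single, fixed procedure for one decidable problem.

    On every instance n it halts in finite time and outputs 'yes'
    exactly when the correct answer is 'yes' (n is a power of two),
    and 'no' otherwise -- the behavior condition 2.3 demands.
    """
    if n >= 1 and n & (n - 1) == 0:  # powers of two have exactly one set bit
        return "yes"
    return "no"

print(decide_power_of_two(64))  # 'yes'
print(decide_power_of_two(65))  # 'no'
```

The halting problem is precisely a problem for which no such fixed, always-halting procedure exists.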

Comments

I've finally defined how the Church-Turing Thesis will work in the sequence. I've mostly tried to stay true to Church and Turing, but I did add some changes to better define what certain things about effective methods meant. In particular, I clarified how the rigorousness of a method, as opposed to ingenuity, is defined, by replacing that condition with a substantially similar notion from Rafael Harth.

This will be an important update because there was a lot of confusion and disagreement on how it should work or be defined.

Glad you’re doing this series and can’t wait to see the meat on points 4, 6 & 7. 😉

For points 1–3 & 5, I expect most complexity theory purists to complain they're wrong or misleading or both (assuming textbook definitions), and most charitable-reading purists to complain that the former aren't trying hard enough to understand your own definitions. Maybe the correct focus is then to explain why you like your definitions more? Just a thought.

As far as 1 goes, a big part of the reason I want to do this is because I want to generalize computability more, and to look at computers more abstractly. Keep in mind that while the Turing Machine model with unlimited time and space is definitely the most well-studied model, and there are other models equivalent to it in power, there are also models that don't have the same powers, and there are extensions to Turing Machines under which they don't define the same functions. So if computability is defined on Turing Machines with unlimited time and space and nothing else, it will always be equivalent to standard computability; I will always mention what the abstract machine/computer model is and what constraints or extensions are present, if any, to make clear what I'm talking about.

3 is essentially an application of the Time Hierarchy Theorem. One of the most important goals is to show that there are problems where the polynomial constant is arbitrarily large, and this matters because there is only some finite upper bound we can afford, beyond which the polynomial grows too fast to be tractable, at least assuming thermodynamics applies.
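For reference, here is the standard statement of the deterministic Time Hierarchy Theorem (this is the textbook version, not anything new to this sequence): if $f$ and $g$ are time-constructible functions with $f(n) \log f(n) = o(g(n))$, then

$$\mathsf{TIME}(f(n)) \subsetneq \mathsf{TIME}(g(n)).$$

In particular, $\mathsf{TIME}(n^k) \subsetneq \mathsf{TIME}(n^{k+1})$ for every $k \geq 1$, so even within P there are problems whose best possible running times have arbitrarily large polynomial exponents.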

I actually got 5 from this website, which is a little interesting, and the link is below:

http://www.amirrorclear.net/academic/ideas/simulation/index.html

It turns out that a Universal Turing Machine can simulate even arbitrary real numbers, like hypercomputers based on the real numbers, by using a branching tree model, in which there are an uncountable infinity of branches but only a countable infinity of vertices and edges; thus there are enough Turing Machines to simulate all of it. So long as we don't ask for a specific simulation of a machine that encodes the halting problem, which is infinitely complex, we can simulate that machine along with everything else. Reductionism fails here: most individual bitstrings are infinitely complex, but the set of all infinite bitstrings, including all finite bitstrings, has very little complexity.
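To make the branching tree concrete, here is a minimal sketch (my own, assuming we just enumerate the tree of finite binary prefixes level by level): the enumeration is trivially computable, yet every infinite bitstring, including non-computable ones like the halting sequence, appears as a branch through it.

```python
from itertools import count, product

def binary_tree_levels():
    """Enumerate the full binary tree level by level.

    Level n holds all 2**n binary strings of length n, so the tree has
    only countably many vertices and edges.  Its infinite branches,
    however, are exactly the uncountably many infinite bitstrings --
    including ones, like the characteristic sequence of the halting
    problem, that no single Turing Machine can compute.
    """
    for n in count():
        yield ["".join(bits) for bits in product("01", repeat=n)]

levels = binary_tree_levels()
for _ in range(4):  # print the first four levels
    print(next(levels))
# [''] -> ['0', '1'] -> ['00', '01', '10', '11'] -> ...
```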

It's a really neat result, and I want to demonstrate it and talk about the implications of this finding.

Sorry for this comment being so long; I needed to respond to all of the objections properly.

It's a really neat result, and I want to demonstrate it and talk about the implications of this finding.

Are there any non-trivial ones? Having skimmed the webpage, it sounds like the result can basically be summarized as "if you enumerate all possible sequences of 0s and 1s, then one of them encodes a solution to the halting problem". Which is true, but IMO doesn't have much implication for how we should think about the solubility of the halting problem.

I didn't suggest that it exactly solves the halting problem in the way people think. The key here is that computing and simulating things are very different: one can simulate hard-problem solvers without being able to solve the problem, so computability and simulatability are distinct concepts.

In essence, a Universal Turing Machine can simulate computers that solve the halting problem without itself being able to solve the halting problem. And the main result is that Universal Turing Machines can enumerate all possible sequences of 0s and 1s to simulate hypercomputation on the real numbers, so long as you do the simulations in parallel.
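Here's a minimal sketch of the "in parallel" part (my own illustration; the names are hypothetical, not a fixed API), using the standard dovetailing trick: at stage k, one more computation joins the pool and every computation already in the pool advances one step, so each of countably many computations gets unboundedly many steps on a single machine.

```python
def dovetail(make_computation, stages):
    """Interleave countably many computations on one machine.

    make_computation(i) returns an iterator that does one unit of work
    per next() call.  At stage k we start computation k and advance
    every started computation by one step, so no computation is ever
    starved -- the sense in which one UTM runs them all "in parallel".
    """
    running = []
    for k in range(stages):
        running.append(make_computation(k))
        for computation in running:
            next(computation, None)  # one step; finished ones are skipped

def counter(i):
    """Toy computation i: counts upward forever, one step per yield."""
    step = 0
    while True:
        print(f"computation {i}, step {step}")
        step += 1
        yield

dovetail(counter, stages=3)
```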

I might want to edit the post to explain why computability!=simulatability, or why computability and simulatability are distinct concepts.

As far as 1 goes, a big part of the reason I want to do this is because I want to generalize computability more, and to look at computers more abstractly

It’s a noble cause, and I suspect you did perceive interesting things that I would like to know more about. However, notice that you seem to overload the operator « computing » with a notion that is non-standard (from the CTP frame). [pause from internet][reading your comment about computability!=simulatability] Yes, this may be exactly that!

So in other words, and conditional on my having guessed your thoughts well, I would keep 2 as is (« in a manner of speaking » helps!), then rephrase 1, 3 & 5 as:

  • Let’s define simulability as the analog of computability and see why simulability is machine- and constraint-dependent, why picking different machines will get you different results in what you can simulate, and why picking different time and space requirements will affect the sets of things you can simulate.
  • Let’s assume some key results from complexity theory apply to simulability as well; then Cobham's thesis that the practical problems we can solve correspond to P in complexity theory in our specific universe is unlikely to be true, because of the Time Hierarchy Theorem as well as issues with the constant potentially being arbitrarily large.
  • The underappreciated number of things a Universal/Complete Turing Machine can simulate, and that it can actually emulate a halting oracle, even if it couldn't solve the halting problem on its own, and the implications of that observation.

And then you could conclude with « Equivalently, I'm trying to say that computability and simulatability are different concepts, and one can simulate something without being able to solve the computational problem it's doing. »

Sorry for this comment being so long; I needed to respond to all of the objections properly.

I loved it! 😉 However I’d love more meat on what problems are better seen through the lens of simulability. Could you elaborate or provide a specific example?

I'll mention that the fact that computability is not equivalent to simulatability is the reason I wouldn't edit my post like that. I will always make clear when I'm switching to a non-standard frame.

I loved it! 😉 However I’d love more meat on what problems are better seen through the lens of simulability. Could you elaborate or provide a specific example?

For a good example on a Turing Machine, the halting function would be the obvious case of simulability without computability. Basically, any function that is Turing-reducible to the halting problem is a good example of simulability without computability for Universal Turing Machines.
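To make the asymmetry concrete, here is a minimal sketch (my own, modeling "machines" as Python generators that yield once per simulated step): simulation delivers every 'yes' answer to the halting problem, but no amount of simulation ever delivers a 'no', which is exactly the gap between simulating a machine and computing the halting function.

```python
def semi_decide_halt(machine):
    """Simulate `machine`, an iterator that yields once per step.

    Returns True if the simulated machine halts.  If it runs forever,
    so does the simulation: we get the 'yes' answers to the halting
    problem but never the 'no' answers, so the halting function is
    simulable here without being computable.
    """
    for _ in machine:
        pass  # faithfully run the machine, step by step
    return True

def halts_quickly():
    for _ in range(10):
        yield

def runs_forever():
    while True:
        yield

print(semi_decide_halt(halts_quickly()))  # True, after 10 simulated steps
# semi_decide_halt(runs_forever())        # would never return
```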

Yes, I get that « periodically switching between non-computable tasks » is your paradigmatic case for simulability. The question is: what do you think could get accomplished through this definition? In other words, I could take a pen and write « This is a non-standard number. », but that would still be just one piece of paper. In what sense is the notion of simulability better than that?

Essentially, I'm using the definition from the first paragraph of this Wikipedia article, where you imitate a real-world system. In this case, though, the model of the real world and the real world are the same object, so the simulation connotations usually used in lower-scale regimes break down, and thus I will be treating the simulation and the real world as the same object.

Link is below:

https://en.wikipedia.org/wiki/Simulation

Ok then that’s standard language and there’s no need to redefine it, you’re right.

Let’s try another example, if you’ll indulge my curiosity. If I use some cryptographic method, you won’t be able to crack it without hypercomputing, but you could simulate cracking it if you could guess the secret. Is that the kind of thing you have in mind?