- My confusion stems from this weird dichotomy[1]: basically everything we can compute in the real world can be computed by a finite state automaton, because our world does not contain infinite memory tapes (as far as I can tell from the probabilistic evidence I've been able to gather). But (for convenience?), mathematicians and computer scientists instead use Turing machines/lambda calculus/(your favorite programming language) to model things for most purposes, even though these have really peculiar undecidability properties.
1) Imagine if the whole of the world was a giant, spherical game of life board. The memory isn't infinite, but it continues to advance by the rules as long as the sun burns.
2) I've seen an argument for doing this along the lines of 'Allowing for the infinite means that if someone comes up with a faster (that is, finite) way of calculating something, then it may still have already been reasoned about before.'
- Yes, I already knew that, thank you! But why should I care if I can calculate it to an arbitrary/sufficient precision? [5]
Do you care about whether pi^pi is an 'irrational' number?
Why is infinite tape required for undecidability to be an issue?
- Imagine if the whole of the world was a giant, spherical game of life board. The memory isn't infinite, but it continues to advance by the rules as long as the sun burns.
Yeah, that makes sense to me.
- Why is infinite tape required for undecidability to be an issue?
Because otherwise I can list all the states and transitions. Take your game of life board: we don't get an issue like the halting problem. If you have a finite game of life with n cells, then there exist 2^n possible states; since the game is deterministic, it must revisit some state within at most 2^n steps, and from then on it repeats exactly the same sequence of states (halting in this universe might be interpreted as the same state repeating over and over, which is rather rare in the game of life, I believe). Now we compute this sequence from within an even bigger game of life. The "real" problem is that the number of possible states grows exponentially, and there is no even bigger version of our universe to turn to, which is why we can't even solve chess. As TAG highlighted in his comment, in some sense chess is (at least currently) undecidable for us (just not in the very rigorous sense).
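(To make the finiteness argument concrete, here is a minimal sketch in Python; the toroidal grid, the board size, and the blinker demo are my own choices for illustration. Any deterministic rule on a finite state space must eventually revisit a state and then cycle, and detecting the cycle is just bookkeeping:)

```python
from collections import Counter

def step(cells, n):
    """One Game of Life step; cells is a frozenset of (x, y) live cells
    on an n x n toroidal (wrap-around) grid."""
    counts = Counter(
        ((x + dx) % n, (y + dy) % n)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth with exactly 3 live neighbours, survival with 2 or 3.
    return frozenset(c for c, k in counts.items()
                     if k == 3 or (k == 2 and c in cells))

def find_cycle(initial, n):
    """Step until some state repeats; on an n x n board this is
    guaranteed within 2**(n*n) steps. Returns (preperiod, period)."""
    seen = {}
    state, t = frozenset(initial), 0
    while state not in seen:
        seen[state] = t
        state = step(state, n)
        t += 1
    return seen[state], t - seen[state]

# A 'blinker' on a 5x5 torus oscillates with period 2:
print(find_cycle({(1, 2), (2, 2), (3, 2)}, 5))  # -> (0, 2)
```

The `seen` dictionary is also exactly what blows up in practice: it can grow to 2**(n*n) entries, which is the "no even bigger universe to turn to" problem.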
- I've seen an argument for doing this along the lines of 'Allowing for the infinite means that if someone comes up with a faster (that is, finite) way of calculating something, then it may still have already been reasoned about before.'
Yeah, that is probably a good reason to do it. The first time I stumbled upon this was probably in a semitechnical introductory dialogue on Solomonoff induction.
Hm... I definitely associate rationals with "the numbers my computer has problems printing or representing internally". I guess the part where I care is the "representing internally" part of the association. Now I don't know, but I could totally see someone having figured out a way to neatly work with some important irrational numbers, at least to some degree, since people can obviously do it and have somehow been able to show that e^(i*pi) is rational. I guess there is probably a neat way to work with infinite series internally?
If a number has an infinite, non-repeating decimal expansion, it is irrational, and if there is a finite programme that spits out the digits indefinitely, then it's computable. There is a countable infinity of programmes, but an uncountable infinity of irrational numbers, so most irrational numbers are uncomputable.
- I definitely associate rationals with "the numbers my computer has problems printing or representing internally"
You seem to be thinking in terms of fixed-length representations. Although widely used, they are not essential to computation, and they don't feature in theoretical computer science.
(Sorry, my bad. I meant irrational in the previous comment.) Yeah, I get that you can do it differently. I meant that you can't just naively store all the digits. I did minimal googling and found the Wikipedia article on computable numbers, and it lists all kinds of representations that I would never have thought of.
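(For a taste of such a representation, here is a minimal sketch of my own: the "finite programme that spits out the digits indefinitely" idea from the comment above, applied to sqrt(2) using only exact integer arithmetic:)

```python
from itertools import islice
from math import isqrt

def sqrt_digit_stream(n):
    """Yield the decimal digits of sqrt(n) indefinitely (for non-square n),
    using the identity floor(sqrt(n) * 10**k) = isqrt(n * 100**k)."""
    prev = isqrt(n)
    yield prev  # integer part (a single digit for small n)
    k = 1
    while True:
        cur = isqrt(n * 100 ** k)
        yield cur - 10 * prev  # k-th digit after the decimal point
        prev = cur
        k += 1

# First ten digits of sqrt(2) = 1.414213562...
print(list(islice(sqrt_digit_stream(2), 10)))  # [1, 4, 1, 4, 2, 1, 3, 5, 6, 2]
```

The program is finite, but it names an infinite object; you only ever materialize the finite prefix you actually need.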
- But (for convenience?), mathematicians and computer scientists instead use Turing machines/lambda calculus/(your favorite programming language) to model things for most purposes, even though these have really peculiar undecidability properties.
FSMs also have undecidability: for each one there is an infinity of programmes they can't even run.
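(A toy illustration of this memory bound, my own example rather than TAG's: a machine with a fixed counter range behaves like a finite automaton with a fixed number of states, and it simply cannot process inputs whose nesting depth exceeds those states:)

```python
def balanced_up_to(s, max_depth):
    """Check balanced parentheses with a counter capped at max_depth,
    i.e. a finite automaton with max_depth + 1 counting states.
    Returns None when the input needs more memory than the machine has."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
            if depth > max_depth:
                return None  # ran out of states
        elif ch == ')':
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

print(balanced_up_to('(())', 4))             # True
print(balanced_up_to('(' * 5 + ')' * 5, 4))  # None: input exceeds the machine
```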
Yes, I think in the real world there are much more constraining concerns, so that undecidability does not (often?) become a real issue.
But it's useful to know theoretically that you can't solve a problem with any finite resources, because it stops you throwing resources at it.
I agree now and have changed my mind here (e.g. the Turing machine and equivalent formalisms are "useful"). What changed it: I've since seen a few proofs by contradiction showing that something I'd judge useful is impossible, by showing that it would let you decide something known to be undecidable, for example, whether ZFC is consistent (a contradiction, assuming ZFC is consistent).
Mostly just reflections on things I've been learning about the theory of computer science, and my trying to figure out my philosophical stance towards them, in order not to be a trollable mathematician.
In the last few months, my mind has constantly been trying to argue against something like the "naive Church-Turing thesis", in an attempt to better understand what it means for "the real world" that some questions we can formulate about Turing machines can't be answered with "yes" or "no" by any Turing machine (formally called an undecidable problem; see the halting problem for an example).
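(For reference, the standard diagonal argument fits in a few lines of Python; `halts` and `paradox` are names of my choosing, and the whole point is that `halts` cannot actually be implemented:)

```python
def halts(func, arg):
    """Hypothetical: returns True iff func(arg) eventually halts.
    The halting problem says no such total implementation exists."""
    raise NotImplementedError

def paradox(func):
    """Do the opposite of whatever `halts` predicts about func(func)."""
    if halts(func, func):
        while True:
            pass  # loop forever
    # otherwise: halt immediately

# Now ask about paradox(paradox):
#   if halts(paradox, paradox) == True,  paradox(paradox) loops forever;
#   if halts(paradox, paradox) == False, paradox(paradox) halts.
# Either answer is wrong, so `halts` cannot exist.
```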
My confusion stems from this weird dichotomy[1]: basically everything we can compute in the real world can be computed by a finite state automaton, because our world does not contain infinite memory tapes (as far as I can tell from the probabilistic evidence I've been able to gather). But (for convenience?), mathematicians and computer scientists instead use Turing machines/lambda calculus/(your favorite programming language) to model things for most purposes, even though these have really peculiar undecidability properties.
What I've learned so far is that applying your formal model of computation to say things about problems in the real world is a philosophically super slippery domain (though I feel the reason for this has more to do with the space of all algorithms just being a really resistant hydra than with peculiarities of the human mind[2]). The replies I got from people on LessWrong, along with Scott Aaronson's writing, have been very helpful here[3].
Maybe I am just a pedant, but after tripping myself up for a while, I now see other people commit philosophical blunders in this area. As an example, here is a motte-and-bailey[4] which I fell for:
Motte-and-Bailey computability
Motte:
"intuitively computable" => a (Turing Machine)/(Lambda Calculus)/(Your favorite equivalent model C) can do it. The converse does not necessarily hold, because no one has infinite memory. When doing theoretical computer science, we want to abstract away the details, so it is convenient to use C to be able to extrapolate Algorithm X to infinity in order to force us to characterize how X is behaving asymptotically (which is itself of course only a convenient way to roughly characterize the runtime of an algorithm).
Bailey:
"Let me show you a problem that even computers can't solve": "shows the halting problem"
=> modus tollens, because C cannot do it => "not something you can solve (in this absolutely general case) with a computer"
Implicitly implied:
=> There are some basic things that you cannot do with a computer.
Even more implicitly implied, or at least something I fell for:
The version of this problem you'd actually care about (not the fully general one) is hard. Say, deciding that two pieces of code are indeed equivalent, or type checking. While this can't work in the fully general case, there obviously are heuristics that do work, otherwise people wouldn't be able to write code in typed languages or do "anything at all in math/logic". (A toy version of such a heuristic is sketched below.)
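(Here is a minimal sketch of such a heuristic, my own toy example rather than anything people actually ship: random testing can refute the equivalence of two functions with a counterexample, but a pass is only evidence, never proof, which is exactly the undecidability showing through:)

```python
import random

def probably_equivalent(f, g, trials=1000, lo=-10**6, hi=10**6):
    """Heuristic equivalence check on random integer inputs.
    Can refute equivalence with a counterexample; a pass is evidence,
    not proof (and in general f or g might not even halt)."""
    for _ in range(trials):
        x = random.randint(lo, hi)
        if f(x) != g(x):
            return False  # definite counterexample found
    return True  # no counterexample found

print(probably_equivalent(lambda x: 2 * x, lambda x: x + x))        # True
print(probably_equivalent(lambda x: x * x, lambda x: abs(x) * x))   # almost surely False (differs for x < 0)
```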
How to actually deal with undecidability in practice
I felt there was something deeply unsatisfying about the whole issue. As long as your Turing machines have finite memory, they are actually finite state automata whose behavior is perfectly decidable. For most practical purposes, it's like saying "your computer isn't able to print all the digits of pi!" Yes, I already knew that, thank you! But why should I care if I can calculate it to an arbitrary/sufficient precision? [5] Doron Zeilberger expresses this more succinctly:
This seemed appealing, as I could not figure out what is wrong with the above, but there are people who have already reflected on this more than I have. Scott Aaronson:
My problem with the above is that I still don't think I understand the implications. "How exactly do axioms interact with the real world?" is a question I don't have a good cached answer for.
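(As an aside, the "arbitrary/sufficient precision" from above really is cheap. Here is a minimal sketch using Machin's formula with fixed-point integer arithmetic; the ten guard digits are an arbitrary safety margin of mine:)

```python
def arctan_inv(x, one):
    """Fixed-point one * arctan(1/x) via the alternating Taylor series."""
    term = one // x
    total = term
    x2 = x * x
    n, sign = 3, -1
    while term:
        term //= x2
        total += sign * (term // n)
        sign, n = -sign, n + 2
    return total

def pi_digits(digits):
    """pi to `digits` decimal places, as an integer scaled by 10**digits.
    Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    one = 10 ** (digits + 10)  # 10 guard digits against rounding error
    pi = 16 * arctan_inv(5, one) - 4 * arctan_inv(239, one)
    return pi // 10 ** 10

print(pi_digits(50))
# 314159265358979323846264338327950288419716939937510
```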
At the moment, the thing I reach for when I need to sleep at night is something like: "OK, even if some things are undecidable, and I don't really understand all the implications of this, at least I can go meta, spot the patterns, and put probabilities on things." [6]
If you read this far, you've probably also thought about these things sometimes. I'd be really interested to hear other people's stances on these topics. Please do share any thoughts or resources you've found "deconfusing" on your way in the comments!
Further interesting resources/footnotes
While writing this, I also stumbled on Scott Aaronson's thesis, which goes in a similar direction (though he seems to be more concerned with characterizing what quantum computers should be capable of).
Scott Aaronson's paper on the Busy Beaver frontier was pretty shocking in how few states you need to pose incomprehensible or even undecidable questions with a Turing machine.
I'd be super interested in anyone pointing to resources that resolve this issue. ↩︎
I know, confusion is in the mind; by the way, Yudkowsky makes a similar point here. ↩︎
I'll mention work from Scott Aaronson a lot, because I keep stumbling over it and noticing how it answers exactly the questions I had, and some I didn't even know I had. (If you feel this is excessive and I've missed more insightful resources by other authors, please share them in the comments, I'd love to know about them!) ↩︎
A motte-and-bailey refers to the situation where you use a defensible version of your views (the motte) when actively being critiqued, in order to advance the actually interesting but "indefensible version when no one's watching" (the bailey). ↩︎
And yes, then there is the whole "what is efficiently computable?" question, but I felt complexity classes were not as prone to philosophical blundering, and this is already rather long. ↩︎
A good example that I stumbled upon while writing this, and which made me less confused about "how do axiomatized systems interact with the real world" and how to actually cope with undecidability, is Scott Aaronson arguing why a proof of P!=NP should not be independent of ZFC (pages 26-27, section "Independent of set theory?"). ↩︎