I would say no.
Thanks for the examples and links! Hmm... the programming language examples definitely give me some more "grounding" for the concepts. I also just discovered your recent post, but to be honest a lot of it, like the Type Two Theory of Effectivity, is above my head at the moment.
I looked deeper into this and learned a lot about meta-math. I also rediscovered Logical Induction, which strikes me as ingenious and super interesting, though I don't understand it deeply yet:
Similarly, given any polynomial-time method for writing down computer programs that halt, logical inductors learn to believe that they will halt roughly as fast as the source codes can be generated. Furthermore, given any polynomial-time method for writing down computer programs that provably fail to halt, logical inductors learn to believe that they will fail to halt roughly as fast as the source codes can be generated. When it comes to computer programs that fail to halt but for which there is no proof of this fact, logical inductors will learn not to anticipate that the program is going to halt anytime soon, even though they can’t tell whether the program is going to halt in the long run. In this way, logical inductors give some formal backing to the intuition of many computer scientists that while the halting problem is undecidable in full generality, this rarely interferes with reasoning about computer programs in practice.
I am trying to get a better understanding of the philosophical implications of non-computable functions, and here is an idea I keep coming back to, to which I don't know the answer: for a problem Π, does Π undecidable ⇒ Π infeasible to calculate, even when the domains of all the variables are made finite? (Clarification below.)
One well-known result in CS theory is that there is a large class of problems that are "undecidable", meaning there exists no Turing machine that can determine the answer for all inputs. One example is the "busy beaver game", where you try to find the Turing machine with n states that, starting from a blank tape, halts with the largest number of 1s on the tape. What is impressive about the busy beaver function is that it explodes really quickly; I believe its values are only known up to n = 5. It is also relatively easy to show that the busy beaver function is uncomputable, i.e. no single Turing machine computes it for arbitrary n.
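To make the game concrete, here is a minimal Turing machine simulator (my own sketch) with the well-known 2-state, 2-symbol busy beaver champion hardcoded; it halts after 6 steps, leaving four 1s on the tape, i.e. BB(2) = 4:

```python
# Minimal Turing machine simulator on a sparse tape (default symbol 0).
# The transition table maps (state, read symbol) -> (write, move, next state).
def run_tm(table, max_steps=10_000):
    tape = {}
    pos, state = 0, "A"
    for _ in range(max_steps):
        if state == "H":                  # halt state reached
            return sum(tape.values())     # count the 1s left on the tape
        write, move, state = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return None  # did not halt within the step budget

# The 2-state, 2-symbol busy beaver champion: halts with four 1s.
bb2_champion = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run_tm(bb2_champion))  # → 4
```

The `max_steps` cap is exactly the catch: for an arbitrary machine you cannot know in advance how large the cap needs to be, which is where uncomputability bites.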
I recently realized, though, that the whole non-computability thing (and the Church–Turing thesis, for that matter) is kinda cheap, since we humans are not actually always interested in an algorithm that works for all natural numbers; something that works for all n up to TREE(3) or so would be good enough. By the Myhill–Nerode theorem, every finite language is regular and thus decidable ⇒ any busy beaver question you could run on a finite Turing machine with TREE(3) available memory slots (which would be equivalent to a finite state automaton) is decidable. (I am not 100% sure this dodges something like Rice's theorem for regular languages, which might imply that these automata would be hard to find.) In general, this suggests to me that for any problem where you actually have the hardware to solve it, there also exists the requisite "algorithm".
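As a toy illustration of the "bound the resources and undecidability evaporates" point (my sketch, not anything from the literature): once you cap the number of steps, halting becomes trivially decidable by direct simulation, since the finitely many reachable configurations can in principle all be checked:

```python
# Sketch: the *bounded* halting problem is decidable by brute force.
# halts_within(table, k) simulates a machine for at most k steps and
# returns a definite yes/no answer -- no undecidability in sight,
# because only finitely many configurations are examined.
def halts_within(table, k):
    tape, pos, state = {}, 0, "A"
    for _ in range(k):
        if state == "H":
            return True
        write, move, state = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return state == "H"

looper = {("A", 0): (0, "R", "A"), ("A", 1): (0, "R", "A")}  # never halts
halter = {("A", 0): (1, "R", "H"), ("A", 1): (1, "R", "H")}  # halts at once
print(halts_within(looper, 100), halts_within(halter, 100))  # → False True
```

Of course "within k steps" and "ever" only coincide once k exceeds the machine's true runtime, which is the part you cannot compute in general.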
In principle, I see no problem with adding the assumption "and n < TREE(4)" to the inductive hypothesis in an inductive proof that your algorithm is correct.
In practice, of course, it seems hard to leverage this assumption for anything other than brute force, and what you end up with might be nothing more than a table of the correct solutions.
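That brute-force endpoint can be sketched in a couple of lines (a hypothetical toy, with "is a perfect square" standing in for a hard predicate): settle the answer for every n below the bound once by exhaustion, and the resulting "algorithm" is just a lookup:

```python
# Hypothetical sketch of "a table of the correct solutions": decide a
# predicate on a finite domain by precomputing every answer up front.
BOUND = 1_000
table = {n: any(k * k == n for k in range(n + 1)) for n in range(BOUND)}

def is_square(n):
    return table[n]   # O(1) lookup; correctness was settled at build time

print(is_square(144), is_square(145))  # → True False
```

The table is a correct decider for the bounded problem, but it carries no insight, and for a bound like TREE(4) it obviously could never be built.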