The overall message is not really new technically, but its philosophical implications are somewhat surprising even for mathematicians. In general, looking at the same thing from different angles helps to acquire a more thorough understanding, even if it does not necessarily provide a clear short-term benefit.
A few years ago, I chatted with a few very good mathematicians who were not aware of the (relatively straightforward) equivalence between the theorems of Turing and Goedel, but they could see it within a few minutes and had no problem grasping the inherent, natural connection between logical systems and program-evaluating algorithms.
I think that there is quite a bit of mysticism and misconception regarding mathematical logic, set theory, etc. among the general public. I think it is valuable to put them in a context that clarifies their real, inherently algorithmic nature. It may also be helpful for cleaning up one's own world view.
I am sorry if my account was unclear at points, and I am glad to provide clarification wherever needed.
I expected the reaction with the countably infinite models, but I did not expect it to be the first. ;)
I wanted to get into that in the write-up, but I had to stop at some point. The argument is that in order to have scientific theories, we need falsifiability, which means that we always necessarily deal with a discrete projection of the physical world. On the other hand, so far every discrete manifestation of physical systems has seemed to be modellable by Turing machines. (This assumption is called the Church thesis.) If you combine these two, you arrive at the above conclusion.
Another reason it is not a problem is that every (first order) axiom system has a countable model, by the Löwenheim-Skolem theorem. (Yes, even the theory of the real numbers has a countable model, which is known as the Skolem paradox.)
Actually, I don't think the technical content of the write-up is novel; it is probably something that was already clear to Turing, Church, Goedel and von Neumann in the 1940s and '50s. IMO, the takeaway is a certain, more pragmatic way of thinking about logic in the age of information processing, instead of sticking to an outdated intuition, along with the explicit recognition that the domains of mathematical logic and AI are much more directly connected than it would naively seem.
The object of life is not to be on the side of the majority, but to escape finding oneself in the ranks of the insane.
Marcus Aurelius
I agree with you. I also think that there are several reasons for that:
First, competitive games (intellectual or physical sports) are easier to select and train for, since the objective function is much clearer.
The other reason is more cultural: if you train your child for something more useful like science or mathematics, then people will say: "Poor kid, are you trying to make a freak out of him? Why can't he have a childhood like anyone else?" Traditionally, there is much less opposition to music, art or sport training. Perhaps they are viewed as "fun activities."
Thirdly, it also seems that academic success is a function of more variables: communication skills, motivation, perspective, taste, wisdom, luck, etc. So early training results in much less of a head start than in a more constrained area like sports or music, where it is almost mandatory for success (starting at age 10, or even 6, is almost too late in some of those areas).
I found the most condensed essence (and also a parody) of religious arguments for fatalism in Greg Egan's Permutation City:
Even though I know God makes no difference. And if God is the reason for everything, then God includes the urge to use the word God. So whenever I gain some strength, or comfort, or meaning, from that urge, then God is the source of that strength, that comfort, that meaning. And if God - while making no difference - helps me to accept what's going to happen to me, why should that make you sad?
Logically irrefutable, but utterly vacuous...
You should not take the statement too literally: look at it in its historical context. Probably the biggest problems in Russell's time were wars caused by nationalism and unfair resource allocation due to bad (idealistic/traditionalist) policies. Average life expectancy was around 40-50 years. I don't think anyone considered e.g. mortality a problem that can or should be solved. (Neither do over 95% of people today.) The population was much smaller. Earth was also in a much more pristine state than today.
Times have changed. We have more technical issues today, since we can address more issues with technology, and we are on a general trajectory that is ecologically unsustainable unless we manage to invent and deploy the right technologies quickly. I think this is the fundamental mistake traditional ecological movements are making: there is no turning back. Either we progress rapidly enough to counteract what we broke (and will inevitably break), or our civilization collapses. There is no stopping or turning back; we have already bet our future. Being reasonable would have worked 100 years ago; today we must be very smart as well.
I just find it a bit circular that you want evidence for the assertion that assertions need evidence.
I'd prefer Sunday or Saturday (9/5 would work for me.)
Why? Do you agree with him? :)
First: sorry for the bad grammar! Let me start by rephrasing the first sentence a bit more clearly:
"In order to define semantics, we need to define a map between the logic to the model ...."
It is correct that this description constrains semantics to maps between symbolically checkable systems. Physicists may not agree with this view and could say: "For me, semantics is a mapping from a formal system to a physical system, which could be continuous or for which the Church thesis does not hold."
Model theoreticians, however, define their notion of semantics between logical systems only. Therefore I think I am in pretty good agreement with their nomenclature.
The message is that even if we consider a continuous system mathematically (for example, the real numbers), only those of its aspects matter which can be captured by discrete information processing methods. What I argue here is that this implicit projection to the discrete is best made explicit by viewing it from an algorithmic point of view.
It is more like model checking: given a statement like "For each x: F(x)", since your input domain is countable, you can just loop over all substitutions. Although this program will never stop if the statement is true, you can at least refute it in a finite number of steps if the statement is false. This is why I consider falsifiability important.
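To illustrate the loop-over-all-substitutions idea, here is a minimal Python sketch (the names `refute_universal` and `naturals` are my own illustrative choices, not anything from the write-up):

```python
# A sketch of refutation by enumeration over a countable domain.
# If the universal statement is false, the loop halts with a
# counterexample; if it is true, the loop never terminates.

def naturals():
    n = 0
    while True:
        yield n
        n += 1

def refute_universal(predicate, domain):
    """Search for a counterexample to 'for each x: predicate(x)'."""
    for x in domain:
        if not predicate(x):
            return x  # the statement is refuted in finitely many steps

# Example: "every natural number is a perfect square" is false;
# the loop finds the counterexample x = 2 almost immediately.
print(refute_universal(lambda x: int(x ** 0.5) ** 2 == x, naturals()))
```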
I agree that there is also a caveat: this only works directly for simple expressions with universal ("for each") quantifiers, but not for more complex logical expressions like the twin prime conjecture, which can be cast as:
foreach x: exists y > x: prime(y) and prime(y+2)
Still, this expression can be cast into the form "T(x) halts for every input x.", where T is the program that searches for the next pair of twin primes both bigger than x.
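A minimal sketch of such a program T, assuming a naive trial-division primality test (the helper name `is_prime` is mine):

```python
# T(x) performs an unbounded search for the next twin prime pair (y, y + 2)
# with y > x.  It halts as soon as such a pair is found, so "T(x) halts for
# every input x" is exactly the twin prime conjecture.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def T(x):
    y = x + 1
    while not (is_prime(y) and is_prime(y + 2)):
        y += 1
    return y, y + 2

print(T(100))  # -> (101, 103)
```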
But what about longer sequences of quantifiers, like
f = foreach a: exists b: foreach c: exists d: .... F(a,b,c,d,...)
Can this still be cast into the form "T(x) halts for every input x"? If it could not, then we would need to add another layer of logic around the halting of Turing machines, which would defeat our original purpose.
In fact, there is a neat trick to avoid that: you can define a program T'(n) which takes a single natural number and checks the validity of (not f) for every substitution of total length < n. Then f is equivalent to the statement: T'(n) halts for every natural number n.
Which means we managed to hide the cascade of quantifiers within the algorithm T'. Cool, huh?
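To make the quantifier-hiding idea a bit more tangible, here is a rough Python sketch of a bounded check over the whole quantifier prefix; the name `bounded_check` and the particular bounding convention (every variable restricted to values below n) are my own assumptions about what "every substitution of total length < n" could look like, not the original construction:

```python
# Evaluates a whole quantifier prefix -- foreach a: exists b: foreach c: ...
# F(a, b, c, ...) -- with every variable restricted to the finite range
# 0 .. n-1, hiding the cascade of quantifiers inside one algorithm.

def bounded_check(prefix, F, n, args=()):
    """prefix: a list of 'forall' / 'exists' markers, one per variable.
    F: the quantifier-free kernel taking len(prefix) arguments.
    n: the bound; every variable ranges over 0 .. n-1."""
    if not prefix:
        return F(*args)
    quantifier, rest = prefix[0], prefix[1:]
    results = (bounded_check(rest, F, n, args + (v,)) for v in range(n))
    return all(results) if quantifier == 'forall' else any(results)

# Example: foreach a: exists b: foreach c: exists d: a + d == b + c
# holds on 0..4 (take b = a and d = c).
print(bounded_check(['forall', 'exists', 'forall', 'exists'],
                    lambda a, b, c, d: a + d == b + c, 5))
```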