I'm responding here to your invitation in the parent, since this post provides some good examples of what you're not getting.
I didn't say that. Read it again. I said that there is some finite axiom list that can describe squirrels, but it's not just the axioms that suffice to let you use arithmetic.
Simulating squirrels and using arithmetic require information, but that information is not supplied in the form of axioms. The best way to imagine this in the case of arithmetic is in terms of a structure.
Starting from the definition on that Wikipedia page, you can imagine giving the graphs of the universe and of the functions and relations as Datalog terms. (Using terms instead of tuples keeps the graphs disjoint, which will be important later.) Like so:
    is_number(0), is_number(1), ...
    zero(0)
    next(0,1), next(1,2), ...
    add_up_to(0,0,0), add_up_to(0,1,1), add_up_to(1,0,1), ...

Then you use some simple recursive coding of Datalog terms as binary. What you're left with is just a big (infinite) set of binary strings. The Kolmogorov complexity of the structure N, then (the thing you need to use arithmetic), is the size of the shortest program that enumerates the set, which is actually very small.
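To make "the shortest program that enumerates the set" concrete, here is a minimal sketch in Python (the predicate names, the enumeration order, and the omission of the final binary-coding step are all illustrative simplifications of mine): the whole program is a few hundred bytes, which is the sense in which the complexity of N is small.

    # Sketch: enumerate the ground facts of the structure N.
    # Any program that fairly enumerates the same set would do just as well.
    from itertools import count

    def enumerate_N():
        """Yield every fact of the structure after finitely many steps."""
        yield "zero(0)"
        for n in count(0):
            yield "is_number(%d)" % n
            yield "next(%d,%d)" % (n, n + 1)
            # All addition facts a + b = n, so each one shows up eventually.
            for a in range(n + 1):
                yield "add_up_to(%d,%d,%d)" % (a, n - a, n)

    if __name__ == "__main__":
        gen = enumerate_N()
        for _ in range(15):
            print(next(gen))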
Note that this is actually the same arithmetic that Steve is talking about! It is just a different level of description, one that is much simpler but entirely sufficient to conduct simulations with. It is only in understanding the long-term behavior of simulations without running them that one requires any of the extra complexity embodied in T(N) (the theory). To actually run them you just need N (the structure).
The fact that you don't seem to understand this point yet leads me to believe you were being a little unfair when you said:
By the way, I really hope your remark about Splat's comment being "enlightening" was just politeness, and that you didn't actually mean it. Because if you did, that would mean you're just now learning the distinction between N and T(N), the equivocation between which undermines your claims about arithmetic's relation to the universe.
Now, if you want to complete the comparison, imagine you're creating a structure that includes a universe with squirrel-states and times, and a function from time to squirrel state. This would look something like:
    is_time(1:00:00), is_time(1:00:01), ...
    is_squirrel_state(<eating nut>), is_squirrel_state(<rippling tail>), is_squirrel_state(<road pizza>)
    squirrel_does(1:00:00, <rippling tail>), ...

The squirrel states, though, will not be described by a couple of words like that, but by incredibly detailed descriptions of the squirrel's internal state: what shape all its cells are, where all the mRNAs are on their way to the ribosomes, etc. The structure you come up with will take a much bigger program to enumerate than N will. (And I know you already agree with the conclusion here, but making the correct parallel matters.)
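For contrast, here is the corresponding enumerator sketch (again in Python; SQUIRREL_DATA is a tiny made-up stand-in for what would really be an enormous table of detailed physical states). The program's shape mirrors the one for N, but it has to carry that table inside itself, and that embedded payload is where the extra Kolmogorov complexity lives.

    # Sketch: enumerate the facts of the (hypothetical) squirrel structure.
    # SQUIRREL_DATA is placeholder data; a real version would be gigantic.
    SQUIRREL_DATA = [
        ("1:00:00", "<rippling tail>"),
        ("1:00:01", "<eating nut>"),
        # ...billions more entries, each a detailed molecular-level state...
    ]

    def enumerate_squirrel_structure():
        for t, _ in SQUIRREL_DATA:
            yield "is_time(%s)" % t
        for s in sorted(set(s for _, s in SQUIRREL_DATA)):
            yield "is_squirrel_state(%s)" % s
        for t, s in SQUIRREL_DATA:
            yield "squirrel_does(%s, %s)" % (t, s)

    if __name__ == "__main__":
        for fact in enumerate_squirrel_structure():
            print(fact)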
You really need to offer an argument for at least one of these two things to make your point:
The neurology, while detailed, seems a little confused. In particular, adding mu-opioid receptors to every neuron in the brain sounds more like a recipe for epilepsy than for superhappiness.
I found the detail helpful. Even more detail might have been good, but you'd have had to write a sequence.
Well, if you need to simulate a squirrel for just a little while, and not for unbounded lengths of time, a substructure of N (without closure under the operations) or a structure with a considerable amount of sharing with N (like 64-bit integers on a computer) could suffice for your simulation.
The problem you encounter here is that these substructures and near-substructures, once they reach a certain size, actually require more information to specify than N itself. (How large this size is depends on which abstract computer you used to define your instance of K-complexity, but the asymptotic trend is unavoidable.)
If this seems paradoxical, consider that after a while the shortest computer program for generating an open initial segment of N is a computer program for generating all of N plus instructions indicating when to stop.
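In symbols, as a rough sketch using standard counting facts about K-complexity (not a careful theorem), and writing N restricted to n for the initial segment up to n:

    K(N \restriction n) \;\le\; K(N) + K(n) + O(1) \;\approx\; K(N) + \log_2 n,

so past a fairly modest size, "all of N, plus a stopping point" is essentially the cheapest description available, and trimming N down to a fragment saves you nothing.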
Either way, it so happens that the biological information you'd need to simulate the squirrel dwarfs N in complexity, so even if you can find a sufficient substitute for N that's "lightweight" you can't possibly save enough to make your squirrel simulation less complex than N.
It is in fact provably impossible to construct a computable nonstandard model (where, say, S and +, or S and × are both computable relations) in a standard model of computation. What I was referring to was a nonstandard model that was computable according to an equally nonstandard definition of computation, one that makes explicit the definitional dependence of Turing Machines on the standard natural numbers and replaces them with nonstandard ones.
The question I'm wondering about is whether such a definition leads to a sensible theory of computation (at least on its own terms) or whether it turns out to just be nonsense. This may have been addressed in the literature but if so it's beyond the level to which I've read so far.
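(For reference, the impossibility result above is, if I recall correctly, Tennenbaum's theorem, which says roughly:)

    \text{If } \mathcal{M} \models \mathrm{PA} \text{ is countable and nonstandard, then neither } +^{\mathcal{M}} \text{ nor } \times^{\mathcal{M}} \text{ is computable.}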
And so is skepticism of canonical Turing machines, as far as I can tell. Specifically, skepticism that there is always a fact of the matter as to whether a given TM halts.
I think you might be able to make the skeptical position precise by constructing nonstandard variants of TMs, in which the time steps and tape squares are numbered with nonstandard naturals and the numbers of symbols and states are also nonstandard. You could then relate these back to the nonstandard models that produced them, by using a recursive description of N to regenerate the nonstandard model of the natural numbers you started with. This would show that there are nonstandard variants of computability which all believe in different 'standard', 'minimal' models of arithmetic, and which are unaware of the existence of smaller models, and thus presumably of the 'weaker' (because they halt less often) notions of Turing machine.
Now, I'm not yet sure if this construction goes through as I described it; for me, if it does it weighs against the existence of a 'true' Standard Model and if it doesn't it weighs in favor.
Your biggest problem here, and in your blog posts, is that you equivocate between the structure of the standard natural numbers (N) and the theory of that structure (T(N), also known as True Arithmetic). The former is recursive and (a reasonable encoding of) it has pretty low Kolmogorov complexity. The latter is wildly nonrecursive and has infinite K-complexity. (See almost any of Chaitin's work on algorithmic information theory, especially the Omega papers, for definitions of the K-complexity of a formal system.)
The difference in complexity between the two comes from what it takes to translate between them. Once explained properly, it's almost intuitive to a recursion theorist, or to a computer scientist versed in logic, that there is a computable reduction from any language in the Arithmetic Hierarchy to the set of true sentences of arithmetic, i.e. to T(N). This implies that going from a description of N to a truth-enumerator or decision procedure for T(N) requires a hypercomputer with an infinite tower of halting, meta-halting, ..., meta^n-halting, ... oracles.
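The easy direction of that reduction can be sketched in code (Python; decide_TN and arithmetize are placeholder names I'm inventing for illustration, and of course no actual implementation of decide_TN can run on a Turing machine, which is exactly the point):

    # Sketch: with an oracle for T(N), the halting problem becomes one query.
    def arithmetize(machine_code: str, input_str: str, time_var: str) -> str:
        # Computable in principle (Gödel coding of machine configurations),
        # but not spelled out here.
        raise NotImplementedError("illustrative placeholder")

    def decide_TN(sentence: str) -> bool:
        # No Turing machine can implement this; that is the whole claim.
        raise NotImplementedError("would require a hypercomputer")

    def halts(machine_code: str, input_str: str) -> bool:
        # "Machine halts on input" is a Sigma_1 question (one existential
        # quantifier over a decidable matrix), so it becomes a single
        # membership query to T(N).
        sentence = "exists t. " + arithmetize(machine_code, input_str, "t")
        return decide_TN(sentence)

The halting problem is only the first level; iterating the same trick up the Arithmetic Hierarchy is what produces the tower of oracles.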
However, it so happens that simulating the physical world (or rather, our best physical 'theories', which in a mathematical sense are structures, not theories) on a Turing machine does not actually require T(N), only N. We only use theories of arithmetic, as opposed to models, when we go to actually reason from our description of physics to its consequences. And any such reasoning we actually do, just like any pure mathematical reasoning we do, depends only on a finite-complexity fragment of T(N).
Now, how does this make biology more complex than arithmetic? Well, to simulate any biological creature, you need N plus a bunch of biological information, which together have more K-complexity than N alone. To REASON about the biological creature, at any particular level of enlightenment, requires some finite fragment of T(N), plus that extra biological information. To enumerate all true statements about the creature (including deeply-alternating quantified statements about its counterfactual behaviour in every possible circumstance), you require the infinite information in T(N), plus, again, that extra biological information. (In the last case it's of course rather problematic to say there's more complexity there, but there's certainly at least as much.)
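As a rough summary in symbols, with B standing for the extra biological information (the notation is loose, and the last comparison only makes sense in an "at least as much information" sense):

    \underbrace{K(N)}_{\text{arithmetic alone}}
    \;<\;
    \underbrace{K(N \oplus B)}_{\text{simulate the creature}}
    \;<\;
    \infty
    \;=\;
    \underbrace{K(T(N))}_{\text{all arithmetic truths}}
    \;\le\;
    \underbrace{K(T(N) \oplus B)}_{\text{all truths about the creature}}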
Note that I didn't know all of this until this morning, when I read your blog argument with Silas and Snorri; I thank all three of you for a discussion that greatly clarified my grasp of the levels of abstraction in play here.
(This morning I would have argued strongly against your Platonism as well; tonight I'm not so sure...)
For that matter, so does falling asleep in the normal way.
Replying out of order:
2) A quick search of Google Scholar didn't net me a Chaitin definition of K-complexity for a structure. This doesn't surprise me much, as his uses of AIT in logic are much more oriented toward proof theory than model theory. Over here you can see some of the basic definitions. If you read pages 7-10 and then my explanation to Silas here, you can figure out what the K-complexity of a structure means. There's also a definition of the algorithmic complexity of a theory in section 3 of the Chaitin paper.
According to these definitions, the complexity of N is a few hundred bits for reasonable choices of machine, and the complexity of T(N) is infinite.
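Spelled out, as I read those two sources (this is my paraphrase, so the fine print may differ): for a structure, take the shortest enumerator of its encoded atomic diagram, and for a theory, the shortest enumerator of its theorems.

    K(\mathfrak{A}) \;=\; \min\{\, |p| : p \text{ enumerates a fixed binary encoding of the atomic diagram of } \mathfrak{A} \,\}
    \qquad
    K(T) \;=\; \min\{\, |p| : p \text{ enumerates the theorems of } T \,\}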
1) It actually is pretty hard to characterize N extrinsically/intensionally; to characterize it with first-order statements takes infinite information (as above). The second-order characterization, by contrast, is a little hard to interpret. It takes a finite amount of information to pin down the model, but the second-order theory PA2 still has infinite K-complexity because of its lack of complete rules of inference.
Intrinsic/extensional characterizations, on the other hand, are simple to do, as referenced above. Really, Gödel incompleteness wouldn't be all that shocking in the first place if we couldn't specify N in any way other than by its first-order theory! Interesting, yes; shocking, no. The real scandal of incompleteness is that you can so simply come up with a procedure for listing all the ground (quantifier-free) truths of arithmetic, and yet passing either to or from the kinds of generalizations that mathematicians would like to make is fraught with literally infinite peril.
3&4) Actually I don't think that Dawkins is talking about K-complexity, exactly. If that's all you're talking about, after all, an equal-weight puddle of boiling water has more K-complexity than a squirrel does. I think there's a more involved, composite notion at work that builds on K-complexity and which has so far resisted full formalization. Something like this, I'd venture.
The complexity of the natural numbers as a subject of mathematical study, while certainly well-attested, seems to be of a different sense than either K-complexity or the above. Further, it's unclear whether we should really be placing the onus of this complexity on N, on the semantics of quantification in infinite models (which N just happens to bring out), or on the properties of computation in general. In the latter case, some would say the root of the complexity lies in physics.
Also, I very much doubt that he had in mind mathematical structures as things that "exist". Whether it turns out that the difference in the way we experience abstractions like the natural numbers and concrete physical objects like squirrels is fundamental, as many would have it, or merely a matter of our perspective from within our singular mathematical context, as you among others suspect, it's clear that there is some perceptible difference involved. It doesn't seem entirely fair to press the point this much without acknowledging the unresolved difference in ontology as the main point of conflict.
Trying to quantify which thing is more complex is really kind of a sideshow, although an interesting one. If one forces both senses of complexity into the K-complexity box then Dawkins "wins", at the expense of both of you being turned into straw men. If one goes by what you both really mean, though, I think the two complexities are probably incommensurable (no common definition or scale) and the comparison is off-point.
5) Thank you. I hope the discussion here continues to grow more constructive and helpful for all involved.