Comment author: SteveLandsburg 03 February 2010 10:20:33PM 2 points [-]

Splat:

1)

The problem you encounter here is that these substructures and near-substructures, once they reach a certain size, actually require more information to specify than N itself.

This depends on what you mean by "specify". To distinguish N from other mathematical structures requires either an infinite (indeed non-recursive) amount of information or a second order specification including some phrase like "all predicates". Are you referring to the latter? Or to something else I don't know about?

2) I do not know Chaitin's definition of the K-complexity of a structure. I'll try tracking it down, though if it's easy for you to post a quick definition, I'll be grateful. (I do think I know how to define the K-complexity of a theory.) I presume that if I knew this, I'd know your answer to question 1).

3) Whatever the definition, the question remains whether K-complexity is the right concept here. Dawkins's argument does not define complexity; he treats it as "we know it when we see it". My assertion has been that Dawkins's argument applies in a context where it leads to an incorrect conclusion, and therefore can't be right. To make this argument, I need to use Dawkins's intended notion of complexity, which might not be the same as Chaitin's or Kolmogorov's. And for this, the best I can do is to infer from context what Dawkins does and does not see as complex. (It is clear from context that he sees complexity as a general phenomenon, not just a biological one.)

4) The natural numbers are certainly an extremely complex structure in the everyday sense of the word; after thousands of years of study, people are learning new and surprising things about them every day, and there is no expectation that we've even scratched the surface. This is, of course, a manifestation of the "wildly nonrecursive" nature of T(N), all of which is reflected in N itself. And this, again, seems pretty close to the way Dawkins uses the word.

5) I continue to be most grateful for your input. I see that Silas is back to insisting that you can't simulate a squirrel with a simple list of axioms, after having been told forty-eight bajillion times (here and elsewhere) that nobody's asserting any such thing; my claim is that you can simulate a squirrel in the structure N, not in any particular axiomatic system. Whether or not you agree, it's a pleasure to engage with someone who's not obsessed with pummelling straw men.

Comment author: Splat 05 February 2010 09:41:40PM 3 points [-]

Replying out of order:

2) A quick search of Google Scholar didn't net me a Chaitin definition of K-complexity for a structure. This doesn't surprise me much, as his uses of AIT in logic are much more oriented toward proof theory than model theory. Over here you can see some of the basic definitions. If you read pages 7-10 and then my explanation to Silas here you can figure out what the K-complexity of a structure means. There's also a definition of the algorithmic complexity of a theory in section 3 of the Chaitin paper.

According to these definitions, the complexity of N is about a few hundred bits for reasonable choices of machine, and the complexity of T(N) is infinite.

1) It actually is pretty hard to characterize N extrinsically/intensionally; to characterize it with first-order statements takes infinite information (as above). The second-order characterization, by contrast, is a little hard to interpret. It takes a finite amount of information to pin down the model, but the second-order theory PA<sub>2</sub> still has infinite K-complexity because of its lack of complete rules of inference.

Intrinsic/extensional characterizations, on the other hand, are simple to do, as referenced above. Really, Gödel Incompleteness wouldn't be all that shocking in the first place if we couldn't specify N any other way than its first-order theory! Interesting, yes, shocking, no. The real scandal of incompleteness is that you can so simply come up with a procedure for listing all the ground (quantifier-free) truths of arithmetic and yet passing either to or from the kind of generalizations that mathematicians would like to make is fraught with literally infinite peril.

3&4) Actually I don't think that Dawkins is talking about K-complexity, exactly. If that's all you're talking about, after all, an equal-weight puddle of boiling water has more K-complexity than a squirrel does. I think there's a more involved, composite notion at work that builds on K-complexity and which has so far resisted full formalization. Something like this, I'd venture.

The complexity of the natural numbers as a subject of mathematical study, while certainly well-attested, seems to be of a different sense than either K-complexity or the above. Further, it's unclear whether we should really be placing the onus of this complexity on N, on the semantics of quantification in infinite models (which N just happens to bring out), or on the properties of computation in general. In the latter case, some would say the root of the complexity lies in physics.

Also, I very much doubt that he had in mind mathematical structures as things that "exist". Whether it turns out that the difference in the way we experience abstractions like the natural numbers and concrete physical objects like squirrels is fundamental, as many would have it, or merely a matter of our perspective from within our singular mathematical context, as you among others suspect, it's clear that there is some perceptible difference involved. It doesn't seem entirely fair to press the point this much without acknowledging the unresolved difference in ontology as the main point of conflict.

Trying to quantify which thing is more complex is really kind of a sideshow, although an interesting one. If one forces both senses of complexity into the K-complexity box then Dawkins "wins", at the expense of both of your being turned into straw men. If one goes by what you both really mean, though, I think the complexity is probably incommensurable (no common definition or scale) and the comparison is off-point.

5) Thank you. I hope the discussion here continues to grow more constructive and helpful for all involved.

Comment author: SilasBarta 03 February 2010 11:14:30PM 1 point [-]

I see that Silas is back to insisting that you can't simulate a squirrel with a simple list of axioms, after having been told forty-eight bajillion times (here and elsewhere) that nobody's asserting any such thing;

I didn't say that. Read it again. I said that there is some finite axiom list that can describe squirrels, but it's not just the axioms that suffice to let you use arithmetic. It's those, plus biological information about squirrels. But this arithmetic is not the infinitely complex arithmetic you talk about in other contexts!

my claim is that you can simulate a squirrel in the structure N, not in any particular axiomatic system.

You can't -- you need axioms beyond those that specify N. The fact that the biological model involving those axioms uses math, doesn't mean you've described it once you've described the structure N. So whether or not you call that "simulating it in the structure N", it's certainly more complex than just N.

Comment author: Splat 05 February 2010 07:30:06PM *  3 points [-]

I'm responding here to your invitation in the parent, since this post provides some good examples of what you're not getting.

I didn't say that. Read it again. I said that there is some finite axiom list that can describe squirrels, but it's not just the axioms that suffice to let you use arithmetic.

Simulating squirrels and using arithmetic require information, but that information is not supplied in the form of axioms. The best way to imagine this in the case of arithmetic is in terms of a structure.

Starting from the definition in that Wikipedia page, you can imagine giving the graphs of the universe and functions and relations as Datalog terms. (Using terms instead of tuples keeps the graphs disjoint, which will be important later.) Like so:

  • Universe: is_number(0), is_number(1), ...
  • 0: zero(0)
  • S: next(0,1), next(1,2), ...
  • +: add_up_to(0,0,0), add_up_to(0,1,1), add_up_to(1,0,1)...
  • and so on.

Then you use some simple recursive coding of Datalog terms as binary. What you're left with is just a big (infinite) set of binary strings. The Kolmogorov complexity of the structure N (the thing you need in order to use arithmetic), then, is the size of the shortest program that enumerates the set, which is actually very small.
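Splat doesn't spell out the enumerator itself, but a minimal sketch might look like the following (the staged enumeration order and exact term strings are my own illustration, not a coding from the source). The point is that this short program eventually emits every term of the infinite structure:

```python
from itertools import islice

def enumerate_N():
    """Yield Datalog-style terms of the structure N in stages,
    so every term appears after finitely many steps."""
    n = 0
    while True:
        yield f"is_number({n})"
        if n == 0:
            yield "zero(0)"
        yield f"next({n},{n + 1})"
        # at stage n, emit every addition fact whose result is n
        for a in range(n + 1):
            yield f"add_up_to({a},{n - a},{n})"
        n += 1

# the whole infinite structure comes from this one short program
print(list(islice(enumerate_N(), 5)))
```

The length of this source text (a few hundred bytes) is the sense in which the K-complexity of N is small: the program is finite even though the set it enumerates is not.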

Note that this is actually the same arithmetic that Steve is talking about! It is just a different level of description, one that is much simpler but entirely sufficient to conduct simulations with. It is only in understanding the long-term behavior of simulations without running them that one requires any of the extra complexity embodied in T(N) (the theory). To actually run them you just need N (the structure).

The fact that you don't seem to understand this point yet leads me to believe you were being a little unfair when you said:

By the way, I really hope your remark about Splat's comment being "enlightening" was just politeness, and that you didn't actually mean it. Because if you did, that would mean you're just now learning the distinction between N and T(N), the equivocation between which undermines your claims about arithmetic's relation to the universe.

Now, if you want to complete the comparison, imagine you're creating a structure that includes a universe with squirrel-states and times, and a function from time to squirrel state. This would look something like:

  • is_time(1:00:00), is_time(1:00:01), ...
  • is_squirrel_state(<eating nut>), is_squirrel_state(<rippling tail>), is_squirrel_state(<road pizza>)
  • squirrel_does(1:00:00, <rippling tail>), ...

The squirrel states, though, will not be described by a couple of words like that, but by incredibly detailed descriptions of the squirrel's internal state--what shape all its cells are, where all the mRNAs are on their way to the ribosomes, etc. The structure you come up with will take a much bigger program to enumerate than N will. (And I know you already agree with the conclusion here, but making the correct parallel matters.)
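To make the contrast concrete, here is a hypothetical parallel to the N-enumerator (all names and states invented for illustration). The enumeration logic stays just as short, but the program must carry the biological state table verbatim, and that table is what dominates its length:

```python
# Stand-in for an astronomically detailed table of squirrel states over time;
# in a real simulation this data would dwarf the enumeration code below.
SQUIRREL_HISTORY = {
    "1:00:00": "<rippling tail>",
    "1:00:01": "<eating nut>",
}

def enumerate_squirrel_structure():
    """Yield Datalog-style terms of the (finite) squirrel structure."""
    for t in SQUIRREL_HISTORY:
        yield f"is_time({t})"
    for s in sorted(set(SQUIRREL_HISTORY.values())):
        yield f"is_squirrel_state({s})"
    for t, s in SQUIRREL_HISTORY.items():
        yield f"squirrel_does({t},{s})"

print(sorted(enumerate_squirrel_structure()))
```

The shortest program enumerating this structure can't be much smaller than its data table, which is why the squirrel structure takes a much bigger program than N does.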

(Edit: fixed markup.)

Comment author: Splat 03 February 2010 08:14:20PM 2 points [-]

You really need to offer an argument for at least one of these two things to make your point:

  • A utilitarian aiming to maximize subjectively hypervaluable states will not tile orgasmium.
  • It is good to tile orgasmium.
Comment author: Splat 03 February 2010 08:10:23PM 4 points [-]

The neurology, while detailed, seems a little confused. In particular, adding mu-opioid receptors to every neuron in the brain sounds more like a recipe for epilepsy than for superhappiness.

Comment author: Kaj_Sotala 03 February 2010 07:25:27PM 1 point [-]

Question: do people think this post was too long? In the beginning, I thought that it would be a good idea to give a rough overview of DFT to give an idea of some of the ways by which pure utility functions could be made more reflective of actual human behavior. Near the end, though, I was starting to wonder if it would've been better to just sum it up in, say, three paragraphs.

Comment author: Splat 03 February 2010 07:56:00PM 1 point [-]

I found the detail helpful. Even more detail might have been good, but you'd have had to write a sequence.

Comment author: SteveLandsburg 03 February 2010 02:30:38PM 2 points [-]

Splat: Thanks for this; it's enlightening and useful.

The part I'm not convinced of is this:

to simulate any biological creature, you need N plus a bunch of biological information

A squirrel is a finite structure; it can be specified by a sequence of A's, C's, G's and T's, plus some rules for protein synthesis and a finite number of other facts about chemistry. (Or if you think that leaves something out, it can be described by the interactions among a large but finite collection of atoms.) So I don't see where we need all of N to simulate a squirrel.

Comment author: Splat 03 February 2010 06:12:49PM *  2 points [-]

Well, if you need to simulate a squirrel for just a little while, and not for unbounded lengths of time, a substructure of N (without closure under the operations) or a structure with a considerable amount of sharing with N (like 64-bit integers on a computer) could suffice for your simulation.
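The 64-bit case can be sketched in a few lines (emulating unsigned 64-bit wraparound in Python, since Python's own integers are unbounded): the finite structure agrees with N on a large initial region, then diverges at the boundary.

```python
MASK = (1 << 64) - 1  # emulate unsigned 64-bit machine arithmetic

def add64(a, b):
    """Addition in a finite structure sharing its initial region with N."""
    return (a + b) & MASK

# Agrees with N for bounded simulations...
print(add64(2, 3) == 2 + 3)        # True
# ...but diverges once the simulation runs past the boundary.
print(add64(MASK, 1) == MASK + 1)  # False: wraps around to 0
```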

The problem you encounter here is that these substructures and near-substructures, once they reach a certain size, actually require more information to specify than N itself. (How large this size is depends on which abstract computer you used to define your instance of K-complexity, but the asymptotic trend is unavoidable.)

If this seems paradoxical, consider that after a while the shortest computer program for generating an open initial segment of N is a computer program for generating all of N plus instructions indicating when to stop.
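A small sketch of that observation (my own illustration): past a certain size, the most economical description of an initial segment literally contains the program for all of N.

```python
def generate_N():
    """The short program that enumerates all of N."""
    n = 0
    while True:
        yield n
        n += 1

def initial_segment(stop):
    """All of N's generator, plus an instruction for when to stop."""
    for n in generate_N():
        if n >= stop:
            return
        yield n

print(list(initial_segment(5)))  # [0, 1, 2, 3, 4]
```

Roughly, K({0,...,n}) is at most K(N) plus the bits needed to say "stop at n", which for incompressible n is where the extra information over N itself comes from.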

Either way, it so happens that the biological information you'd need to simulate the squirrel dwarfs N in complexity, so even if you can find a sufficient substitute for N that's "lightweight" you can't possibly save enough to make your squirrel simulation less complex than N.

Comment author: Eliezer_Yudkowsky 03 February 2010 04:05:06AM 2 points [-]

I'm not sure, but I think it's impossible to construct a computable nonstandard model of the integers (one where you can implement operations like +).

Comment author: Splat 03 February 2010 05:54:37PM 3 points [-]

It is in fact provably impossible to construct a computable nonstandard model (where, say, S and +, or S and × are both computable relations) in a standard model of computation. What I was referring to was a nonstandard model that was computable according to an equally nonstandard definition of computation, one that makes explicit the definitional dependence of Turing Machines on the standard natural numbers and replaces them with nonstandard ones.

The question I'm wondering about is whether such a definition leads to a sensible theory of computation (at least on its own terms) or whether it turns out to just be nonsense. This may have been addressed in the literature but if so it's beyond the level to which I've read so far.

Comment author: Eliezer_Yudkowsky 03 February 2010 01:08:23AM 2 points [-]

Why expect that we have a canonical model when we talk about sets or predicates if we're entertaining skepticism that we have a canonical model for integer-talk?

We don't. Skepticism of sets, predicates, and canonical integers is all the same position in the debate.

Comment author: Splat 03 February 2010 03:54:01AM 3 points [-]

And so is skepticism of canonical Turing machines, as far as I can tell. Specifically, skepticism that there is always a fact of the matter as to whether a given TM halts.

I think you might be able to make the skeptical position precise by constructing nonstandard variants of TMs where the time steps and tape squares are numbered with nonstandard naturals, and the numbers of symbols and states are also nonstandard. You could then relate these back to the nonstandard models that produced them by using a recursive description of N to regenerate the nonstandard model of the natural numbers you started with. This would show that there are nonstandard variants of computability that all believe in different 'standard', 'minimal' models of arithmetic, and are unaware of the existence of smaller models, and thus presumably of the 'weaker' (because they halt less often) notions of Turing Machines.

Now, I'm not yet sure if this construction goes through as I described it; for me, if it does it weighs against the existence of a 'true' Standard Model and if it doesn't it weighs in favor.

Comment author: SteveLandsburg 02 February 2010 06:53:01PM 0 points [-]

mattnewport: This would seem to put you in the opposite corner from Silas, who thinks (if I read him correctly) that all of physical reality is computably describable, and hence far simpler than arithmetic (in the sense of being describable using only a small and relatively simple fragment of arithmetic).

Be that as it may, I've blogged quite a bit about the nature of the complexity of arithmetic (see an old post called "Non-Simple Arithmetic" on my blog). In brief: a) no set of axioms suffices to specify the standard model of arithmetic (i.e. to distinguish it from other models). And b) we have the subjective reports of mathematicians about the complexity of their subject matter, which I think should be given at least as much weight as the subjective reports of ecologists. (There are a c), d) and e) as well, but in this short comment, I'll rest my case here.)

Comment author: Splat 03 February 2010 03:33:41AM 10 points [-]

Your biggest problem here, and in your blog posts, is that you equivocate between the structure of the standard natural numbers (N) and the theory of that structure (T(N), also known as True Arithmetic). The former is recursive and (a reasonable encoding of) it has pretty low Kolmogorov complexity. The latter is wildly nonrecursive and has infinite K-complexity. (See almost any of Chaitin's work on algorithmic information theory, especially the Omega papers, for definitions of the K-complexity of a formal system.)

The difference between these two structures comes from the process of translating between them. Once explained properly, it's almost intuitive to a recursion theorist, or a computer scientist versed in logic, that there's a computable reduction from any language in the Arithmetic Hierarchy to the set of true sentences of True Arithmetic. This implies that going from a description of N to a truth-enumerator or decision procedure for T(N) requires a hypercomputer with an infinite tower of halting, meta-halting, ... meta^n-halting ... oracles.
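A toy illustration of why truth in T(N) outruns computation (the sentence encoding here is my own invention): an evaluator like the one below terminates only because every quantifier is artificially truncated at a bound. With the genuine unbounded quantifiers of T(N), each quantifier alternation would demand another level of halting oracle.

```python
def holds(sentence, bound):
    """Evaluate a nested-quantifier sentence over N, truncating every
    quantifier at `bound`.  Ground (quantifier-free) facts are directly
    computable; only the truncation keeps the quantifiers decidable."""
    kind, body = sentence
    if kind == "forall":
        return all(holds(body(n), bound) for n in range(bound))
    if kind == "exists":
        return any(holds(body(n), bound) for n in range(bound))
    if kind == "atom":
        return body  # a ground truth of arithmetic: just compute it

# "for every x there is a y with x + y even" -- bounded search verifies it
sentence = ("forall", lambda x: ("exists", lambda y: ("atom", (x + y) % 2 == 0)))
print(holds(sentence, 10))  # True
```

Dropping the bound on even a single existential quantifier turns the `any(...)` into an unbounded search, i.e. a halting problem, which is exactly the first rung of the oracle tower.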

However, it so happens that simulating the physical world (or rather, our best physical 'theories', which in a mathematical sense are structures, not theories) on a Turing machine does not actually require T(N), only N. We only use theories of arithmetic, as opposed to models, when we go to actually reason from our description of physics to consequences. And any such reasoning we actually do, just like any pure mathematical reasoning we do, depends only on a finite-complexity fragment of T(N).

Now, how does this make biology more complex than arithmetic? Well, to simulate any biological creature, you need N plus a bunch of biological information, which together has more K-complexity than just N. To REASON about the biological creature, at any particular level of enlightenment, requires some finite fragment of T(N), plus that extra biological information. To enumerate all true statements about the creature (including deeply-alternating quantified statements about its counterfactual behaviour in every possible circumstance), you require the infinite information in T(N), plus, again, that extra biological information. (In the last case it's of course rather problematic to say there's more complexity there, but there's certainly at least as much.)

Note that I didn't know all this this morning until I read your blog argument with Silas and Snorri; I thank all three of you for a discussion that greatly clarified my grasp on the levels of abstraction in play here.

(This morning I would have argued strongly against your Platonism as well; tonight I'm not so sure...)

In response to comment by Psy-Kosh on Welcome to Heaven
Comment author: Blueberry 27 January 2010 07:06:27PM 3 points [-]

Such pills do exist in the real world: a lot of sleeping pills have similar effects, as does consuming significant amounts of alcohol.

Comment author: Splat 01 February 2010 04:35:49AM 2 points [-]

For that matter, so does falling asleep in the normal way.
