While we think of the past as behind us and the future as in front of us, they think of the past as in front of them (because they can "see" it) and the future as behind them (because they can't see it).
FYI, I think like them - does that mean I am not part of "us"? :)
I regularly get into disputes over those classic sequences of ape-like ancestors transforming into men, because I place the more recent ones behind, following the less recent, while the dominant view has modern man leading, with his ancestors ranked behind him most recent first.
Probably from being born a twin, I've long entertained a strong intuition that may be written down as "suppose that your choice, together with what determines it, is typical, and take responsibility for the result". There is a temptation to relate it to Kant's imperative, but there are problems, (typically) illustrated by the fact that my version's relationship to the topic of this page is obvious, while Kant's is not.
What I don't understand is so much insistence that Occam's Razor applies only to explanations you address to God. Otherwise, how do you avoid the observation that the simplicity of an explanation is a function of whom you are explaining it to? In the post, you actually touch on the issue, only to observe that there are difficulties interpreting Occam's Razor in the frame of explaining things to humans (in their own natural language), so let's transpose to a situation where humans are completely removed from the picture. Curiously enough, when the same issue occurs in the context of machine languages it is quickly "solved". Makes one wonder what Occam - who had no access to Turing machines - himself had in mind.
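To make that concrete, here is a minimal sketch (the message and the "shared background" are invented for illustration), using zlib's preset-dictionary feature as a stand-in for what sender and receiver already have in common:

| import zlib
|
| # Hypothetical message and shared background, both made up for this example.
| message = b"the quick brown fox jumps over the lazy dog, then the lazy dog naps"
| shared  = b"the quick brown fox jumps over the lazy dog"
|
| # Encoder addressed to a receiver with no shared background:
| plain = zlib.compressobj()
| plain_len = len(plain.compress(message) + plain.flush())
|
| # Encoder addressed to a receiver who already holds `shared`
| # (that receiver would decode with zlib.decompressobj(zdict=shared)):
| primed = zlib.compressobj(zdict=shared)
| primed_len = len(primed.compress(message) + primed.flush())
|
| print(plain_len, primed_len)  # the encoding aimed at the primed receiver is shorter

The same message has two different "lengths" depending on whom it is addressed to; that is all the point amounts to.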
Also, if in practice you work at shortening the code length of actual programs, at some point you have exploited all the low-hanging fruit; further progress comes only after a moment of contemplation makes you observe that distinct paths of control through the code have "something in common" that you may try to enhance to the point where you can factor it out. This "enhancing" follows from the quest for minimal "complexity", but it drives you to do locally, on the code, just the contrary of what you did during the "low-hanging fruit" phase: you "complexify" rather than "simplify" two distinct areas of the code to make them resemble each other (and the target result emerges during the process, which is fun). What I mean to say, I guess, is that even the frame proposed by Chaitin-Kolmogorov complexity gives only fake reasons to neglect bias (from shared background or the equivalent).
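A toy miniature of that move (the function and formats are invented for the purpose): the two branches are first made slightly more verbose so that they take the same shape, and only then does the common part factor out:

| # render_v1 has two distinct control paths; render_v2 "complexifies" each branch
| # so both take the same shape; in render_v3 the now-identical part is factored out.
|
| def render_v1(kind, rows):
|     if kind == "csv":
|         return "\n".join(",".join(map(str, r)) for r in rows)
|     else:
|         return "\n".join("\t".join(map(str, r)) for r in rows)
|
| def render_v2(kind, rows):
|     if kind == "csv":
|         sep = ","                 # step added only to mirror the other branch
|         return "\n".join(sep.join(map(str, r)) for r in rows)
|     else:
|         sep = "\t"
|         return "\n".join(sep.join(map(str, r)) for r in rows)
|
| def render_v3(kind, rows):
|     sep = "," if kind == "csv" else "\t"
|     return "\n".join(sep.join(map(str, r)) for r in rows)
|
| print(render_v3("csv", [[1, 2], [3, 4]]))   # prints "1,2" then "3,4"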
The introduction, choice of example case, and drift of this post make me recall my own "political cartooning" of Bush 10 years ago, which fits perfectly in this context. It takes the form of the claim that the Python shell passed (a fascinating approximation to) the Turing test with the following patriotic responses to inquiries:
| >>> 'USA' in 'CRUSADE'
| True
| >>> filter(lambda W : W not in 'ILLITERATE','BULLSHIT')
| 'BUSH'
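(That transcript is from Python 2, where filter applied to a string returns a string; a Python 3 shell would need a join to deliver the same punchline:)

| >>> ''.join(filter(lambda W: W not in 'ILLITERATE', 'BULLSHIT'))
| 'BUSH'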
I think it solves a lot of problems to view intelligence as a property of communications rather than of agents. Of course, this is just a matter of focus; in order to clarify the idea you will have to refer to agents. Receiving agents first of all, as producing agents are less of a necessity :) This is in line with the main virtue of the move, which is to reframe all the debates and research on intelligence that were naturally promoted by the primitive concern of comparing agent intelligences - to reframe them as background to the real problem, which is to evolve the crowds - the mixtures of heterogeneous agent intelligences that we form - towards better intellectual coordination. To be honest and exhibit a problem the move creates rather than solves: how should the arguable characteristic property of math, that it allows intellectual coordination to progress without exchanging messages, be pictured in this frame?