A brief (re-)introduction

In case you don't want to read the introductory post of this sequence, here is a quick recap of the broad context for this work. This post will try to clarify my objectives, and the next one will start to provide some concrete formalisms.

When we want to understand an object or phenomenon, there are at least three approaches:

- zoom in: take the object in isolation (removing all external factors), cut it down until we find elementary components with simple rules (e.g. atoms following Newtonian mechanics), and build back up to understand how the focal object’s behaviors emerge from its composition and structure.

- zoom out: ignore anything that is internal to the object, look at a broader and broader context until we find it is embedded in a system with well-defined boundaries subjected to simple selection rules (e.g. an organism that must survive and reproduce), and go back down to understand what role the focal object plays in that selection.

- look back: observe its sequence of past states and identify simple historical accidents, largely unpredictable from either its instantaneous composition or its instantaneous context, that caused it to jump between states (e.g. phylogeny or etymology).

I will call these three approaches reductionism, telism and historicism. Unfortunately, only reductionism has developed a native mathematical language. The other two approaches are still deeply verbal, and mainly produce just-so stories, where mathematics may occur incidentally to justify some steps, but is never the source of the concepts themselves.

My aim is to take baby steps toward a mathematical formalization of telism (for a start!).

1. Objectives: what would success look like?

Since I believe in reiterating things to death, here is a simple parallel to clarify the basic concepts:

- Reductionism starts by positing (microscopic) ingredients and assembles them, with dynamical laws being what relates emergent higher-level behaviors to the imposed microscopic ones.

- Telism starts by positing (macroscopic) constraints and dissects them, with functional architecture being what relates emergent lower-level constraints to the imposed macroscopic ones.

Thus, in telism, the basic objects we want to define and manipulate are constraints (= selection rules, e.g. is an organism alive?), and the basic phenomena we want to see emerge from these assumptions are functional roles, i.e. relationships between constraints on a whole system/organism and constraints on a subsystem/organ.

To be able to formally compare functional roles, we therefore need a model in which we can mathematically define distances in a space of relationships between constraints applied at various levels of organization (think multi-level selection).

Avoiding the usual failure modes: looking in the middle

The ultimate goal is to be able to ask and answer questions about functional structure without referring to non-telic questions such as:

- how is it implemented? (what are its atomic constituents and their rules)

- how could it evolve? (what kind of dynamical trajectories lead to it)

Most scientific inquiries into functional architecture, like systems biology, are so deeply steeped in reductionism that they usually fall back on that approach: they posit various atomic entities with simple rules (e.g. genes, neurons, agents, species), assemble them into a system, and hope for the emergence of exciting behaviors where verbal functional explanations will make sense. The whole field of artificial life demonstrates that it is really hard to get interesting results this way: for instance, in the Game of Life, we need to discover and input by hand very peculiar initial conditions to get arbitrarily complex emergent behaviors. Attempts starting from reductionism tend to get lost in the technicalities of their chosen model and rarely reach very novel insights.

On the other hand, we are poorly equipped mathematically to build a purely telic theory with interesting content: no science has yet succeeded in developing a formalism that helps ask and understand purely functional questions, with the partial exception of grammar (though, again, mathematicians working on formal grammars have fallen back on purely reductionist questions). Attempts at avoiding reductionism tend to remain forever vague and inapplicable.

Thus, I believe that this project requires a back-and-forth between imposing some aspects of telism and operating from a comfort zone of existing mathematical tools.

I propose to start by building a “Rosetta stone”: a model where we posit both microscopic units with simple rules and functional constraints at the macroscopic scale, and where we focus on the mesoscopic scale, trying to explain it both through reductionist tools (e.g. statistical physics) starting from the unit components, and through telic ideas starting from the global constraints. The hope is that this will help us discern what a proper formal expression of the latter would be. The first challenge is of course to identify which ingredients to put at the micro and macro ends to obtain interesting phenomena in the middle; a toy skeleton of this setup is sketched below.
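To make the shape of such a model concrete, here is a minimal toy skeleton with illustrative placeholders on both ends: a grid of binary units with a local majority rule at the micro end, and a single viability band on total activity as the macro constraint. None of these specific choices is a commitment; the point is only the architecture (micro rules imposed, macro constraint imposed, mesoscale observed).

```python
import numpy as np

rng = np.random.default_rng(0)

def step(state: np.ndarray) -> np.ndarray:
    """Micro end: each binary unit follows the majority of its 4 neighbours
    (ties keep the current value). A placeholder local rule."""
    nbrs = sum(np.roll(state, s, axis=a) for s in (-1, 1) for a in (0, 1))
    return ((nbrs > 2) | ((nbrs == 2) & (state == 1))).astype(int)

def viable(state: np.ndarray) -> bool:
    """Macro end: a selection rule, not a dynamical law. The system
    'survives' iff its total activity stays inside a band."""
    return 0.25 < state.mean() < 0.75

# Select configurations by the macro constraint alone, then inspect what
# mesoscopic structure (clusters, interfaces...) the survivors share.
survivors = []
for _ in range(1000):
    s = rng.integers(0, 2, size=(16, 16))
    for _ in range(20):
        s = step(s)
    if viable(s):
        survivors.append(s)
print(f"{len(survivors)} of 1000 initial conditions satisfy the constraint")
```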

What kind of questions should we be able to answer?

Finding the right questions is likely the hardest challenge in this project. The first task in the roadmap below is compiling lists of purely functional questions that have been asked verbally for decades or centuries in various sciences, without ever being formalized or really answered, even when those sciences became more mathematical.

As a starting point, all studies of function, regardless of field, appear to rely on at least two main conceptual and empirical tools:

1. deletion of an element (gene, word, species…) to test how it contributes by observing how the system fails without it,

2. substitution to test whether two elements can fill the same role.

The fact that substitution is often possible indicates that functioning is not sensitive to all the features of an element, but rather defines equivalence classes, where many different elements can serve the same role within a given context.
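To fix ideas, here is a schematic rendering of the two probes in Python, assuming only that a system is a sequence of elements, that we can run it, and that running it yields some failure profile; every name here is a placeholder for illustration, not a committed formalism.

```python
from typing import Callable, Hashable, Sequence

FailureProfile = tuple  # however we choose to describe how a system fails

def deletion_probe(system: Sequence[Hashable], i: int,
                   run: Callable[[Sequence[Hashable]], FailureProfile]) -> FailureProfile:
    """Probe 1: what breaks when element i is removed?"""
    return run([e for j, e in enumerate(system) if j != i])

def substitutable(system: Sequence[Hashable], i: int, replacement: Hashable,
                  run: Callable[[Sequence[Hashable]], FailureProfile],
                  working: FailureProfile) -> bool:
    """Probe 2: does the replacement preserve a working system in this context?"""
    modified = list(system)
    modified[i] = replacement
    return run(modified) == working

# Toy usage: a "sentence" whose run() only checks gross well-formedness.
run = lambda s: ("verb" in s, len(s) >= 3)
sentence = ["subject", "verb", "object"]
print(deletion_probe(sentence, 1, run))                      # (False, False)
print(substitutable(sentence, 0, "other_subject", run, run(sentence)))  # True
```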

This leads to at least three basic directions of investigation:

- how the deletion of an element will result in partial or total failure of the system – and as we delete different elements, can we cluster failures into distinct failure modes that would define different functional dimensions or roles?

- whether the substitution of an element with another will preserve a working system – can we group elements into classes that can serve the same role? Can we use this to define some relationships between roles, e.g. how much overlap is there between the classes of elements that can serve in different roles?

- how similar the functional role of element X in system S is to the functional role of element X’ in system S’, even if these systems have nothing in common?

The latter question would involve defining a space of roles equipped with a metric, so that we could cluster roles together (classification), interpolate or extrapolate to neighboring systems, and perhaps even compare functional architectures across fields (the roles of a word in a sentence, a gene in a gene network, an individual in society…), much as the reductionist method can uncover similar ODEs in ecology, economics and physics.
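As one naive sketch of what such a space could look like: describe each element's role by the vector of performance drops its deletion causes across a fixed battery of system-level tests, then compare and cluster those vectors. The data below are random stand-ins, and the battery, metric and clustering method are all placeholder choices.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical deletion assay: role_vectors[k, t] = performance drop on
# system-level test t when element k is deleted (random stand-in data).
role_vectors = np.random.default_rng(1).random((10, 5))

# A metric on the space of roles: distance between deletion profiles.
distances = squareform(pdist(role_vectors))

# Elements whose deletions break the system in similar ways end up in
# the same cluster: candidate functional roles.
clusters = fcluster(linkage(pdist(role_vectors), method="average"),
                    t=3, criterion="maxclust")
print(distances.round(2))  # pairwise role distances
print(clusters)            # cluster label per element
```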

What kind of questions would we dream of answering?

Beyond these, we can imagine more remote and subtle questions:

- how does functional architecture change when, instead of a single system-wide goal, there are two goals in an adversarial setting? (think of strategy games)

- can we define and investigate interactions between roles, without ever referring to how they are implemented in terms of basic elements? (e.g. one role always intrinsically impacts another, or some roles only act as second-order modifiers upon others)

=> a general theory along these lines would offer universality as in physics: finding unity in structural laws (all matter follows the same basic laws)

- beyond the concrete relationships between roles realized in a specific system, are there abstract relationships between roles in general? For instance, is there a periodic table of functional roles, allowing us to decompose particular complex ones into universal simple ones?

=> a general theory along these lines would offer universality as in chemistry: finding unity in essence (all matter is made of a few different kinds of things)

Questions tied to the Rosetta Stone

Some other questions must also be answered along the way, but they are not our primary goal, and we should not get forever lost in them. 

Chief among them: what modelling ingredients need to be imposed at the micro and macro scales to obtain an interesting and legible functional architecture, with several distinct roles, each assumed by a distinct subsystem (localized in “space” within the entire system), interacting along clearly defined pathways? Even though many real-life systems look like this, it is clearly not a generic outcome of optimizing over arbitrary sets of local ingredients to respect arbitrary global selection rules.

Again, the evolution or implementation of functional modularity is a reductionist problem and not what we care about here, just an issue we must somehow deal with or sidestep cleverly in order to have interesting model objects to play with in other ways.

2. Basic guidelines for being a science

You may want to skip this section if you are eager to get to the practical heart of the matter. Here, as a very broad philosophical point, I try to voice the guidelines I cling to in order to build a telic science.

I would like, for instance, to avoid storytelling or purely descriptive activities like naturalism (going around and saying "A flamingo was seen to eat a sardine in Lake Titicaca on May 1st, 1956").

Arguability

The first criterion is arguability, or "relaxed falsifiability": there should be rules that I am willing to agree upon in advance, and that someone else can use quite reliably to convince me that I am wrong.

This mild criterion allows us to go beyond strictly descriptive science, and try to get at the bare minimum of what it means to be objective: the possibility of being wrong.

In archaeology, if anyone anywhere in the world finds a certain type of pottery shards in a higher stratum than another type of pottery, they can cast strong doubt on my belief that the former type is more ancient. Perhaps not 100% doubt (maybe an ancient archaeologist found these shards and brought them home), but enough that I should likely question my opinion.

In contrast, if I write a literary essay about how Sartre's writing evokes a suffocating mix of disgust and ennui in me, no one is qualified to tell me I am wrong. Someone might convince me that I did not actually feel disgust (e.g. through intensive psychoanalysis), but that's outside the rules of literary criticism, and it will depend quite strongly on who is doing it and who I am. It will work very unevenly on different holders of the same belief. 

Autonomy

The second criterion is that, playing by those rules, I should discover relationships between phenomena that have as much or more explanatory power than the relationships suggested by a different science or mode of knowledge.

For instance, there is a science of language that is not just a psychology or sociology of language, because sociological arguments have less explanatory power for how a sentence will be constructed than, say, grammatical theory. Sociology might say that a polite sentence is a signal of deference; grammar will say "in English, the grammatical subject comes before the verb", and the latter will tend to be more predictive than the former for a broad class of questions we may have about sentences. 

An important corollary of this criterion is that a science should lead to the discovery of "non-obvious" relationships, i.e. relationships that have more explanatory power than intuitive resemblance or immediate proximity in space or time. Archaeology exists independently because it can find relationships between very distant things that are equally or more convincing than mere spatial closeness. Judgments of intuitive resemblance or immediate proximity are, after all, very basic "sciences" inspired by daily life and non-specific to archaeology.

To make that criterion a tiny bit more rigorous: given that another person and I are both willing to accept the rules of both games (say, linguistics and sociology), we should both be able to be convinced that the first explains a certain phenomenon better.

Autonomous formalism

If I specifically want a formal or mathematical science, I want the formalism to explain something that I could not explain with mere words. This is a bit of a "Bechdel test" for the biological and social sciences: it seems very lax, and yet extremely few results actually pass it. Most often, the introduction of math serves to say "see, my verbal intuition can work in this toy model constructed specifically for it to work", and thus every theoretical paper is full of weaselly sentences like "Biodiversity may influence the maintenance of ecosystems over time".

Analogical rigor

One criterion that is less self-evident than the previous ones: I believe a science proceeds by a particular type of technical analogy. Notably,

- finding unity in various phenomena (e.g. saying that light, sound and ripples on water are all instances of a physical object called a "wave"), or

- using one phenomenon to model another (e.g. using an equation to describe a physical wave, or a toy plane to make predictions about a real plane)

are both instances of analogy.

There are prominent differences and resemblances between the objects that are put in analogy, and the analogy is useful because some resemblances are not obvious at first: careful consideration of them allows us to understand new aspects of known objects. But scientific models are stricter analogies than saying "love is war" or "heaven is a flower".

A simple model for what is happening is that each science comes up with rules of thumb for which aspects of an object are supposed to be part of the analogy (e.g. the relative intensity of air flow on a toy plane and on a real plane), and which aspects should be ignored (e.g. the actual size or color of the toy plane). Then, a science requires the analogy to be as complete as possible within that limited domain of comparison, going into minute details. If someone were to say "love is war" scientifically, then valid questions would include: what kind of war, with how many soldiers, what type of technology and armament?

Many such features in one object would not have immediate counterparts in the other. This is where analogy allows us to postulate invisible things (e.g. virtual entities) as intermediate steps between visible things. For a very long time, nothing could prove that light was like a wave on a pond in the sense of having little ups and downs in space, or like sand in the sense of being made of little balls. Neither of these features was testable indirectly, let alone directly perceptible.

One of the chief things that makes a science autonomous is its specific baggage of archetypes, i.e. which objects it uses as exemplars (waves on a pond, grains of sand), and the tools (physical, conceptual or mathematical) that allow a scientist to relate those archetypes even to objects that bear no obvious resemblance to the exemplars (sound or light).

3. Essential exemplars

Here I try to distill some of my much longer (and ongoing) post on Telic intuitions across the sciences, by listing the exemplars I keep going back to in order to guide my thinking, and a few reasons why. They should already be obvious from the introductory post and from the above text, but again, I believe in paraphrase.

1) Anatomy: What do we mean when we say “the function of the heart is to pump blood”?

- that if we remove the heart, our body fails in a way that has to do with blood not moving around

- that the heart can be (at least for some time) replaced by a pump.

The former means that we can associate different functions with different failure modes for the overall organism.

The latter clearly indicates that the heart’s physicality is heavily underdetermined by its function: nothing in its purpose requires it to be specifically red and fleshy and heart-shaped and made of tissues that are made of cells that contain DNA. The overwhelming majority of details that you could learn about the heart by taking it in isolation and cutting it into bits are utterly irrelevant to its basic function of moving blood around.

On the other hand, we also learn from this example that substitutability is not absolute but context-dependent: for instance, a fleshy heart will perform as well as a metal pump up until you reach a certain temperature, but not beyond. Thus something else in the body (membranes, the ability to move away...) must ensure that such temperatures are not reached, limiting the kinds of contexts we encounter to those where the organs can serve their functions.

2) Grammar is the only science I know that has been consistently good at not confusing "nature" and "function":

Nature/category is what a word is, e.g. noun or adjective or verb... I can ask this question of the word in itself: is "cat" a noun or a verb?

Function is what role a word serves in a larger context, e.g. "cat" serves as the subject in "Cat bites man." You cannot take the word out of context and ask of it, apart from any sentence: is "cat" a subject or a complement?

The same function can be served by many different natures (e.g. a verb-like thing serves as the subject in "To live forever sounds exciting/tiring."), and the same nature can perform many different functions.

Furthermore, grammar explicitly specifies relationships between functions: syntactic trees represent the fact that the object function relates more closely to the verb function than does the subject function, regardless of what fills them. Unfortunately, syntactic trees cannot fully capture the variety of functions: the fact that certain positions in the tree correspond to subject functions and others to tense, object, etc. must be filled in by hand and does not emerge from the formalism.
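To make the structural claim concrete, here is the constituency tree of "Cat bites man", written as nested Python tuples for uniformity with the other sketches (the bracketing itself is standard; the encoding is just an illustration):

```python
# The object NP is a sister of the verb inside the VP, while the subject
# NP attaches higher, directly under S: "object" is structurally closer
# to "verb" than "subject" is, regardless of which words fill the slots.
tree = ("S",
        ("NP", "Cat"),            # subject: sister of the whole VP
        ("VP",
         ("V", "bites"),
         ("NP", "man")))          # object: sister of V, inside the VP
```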

3) Chess has a lot of potential for Rosetta stone building. It is a situation where we have a good intuition of functional roles: there are three big phases in a game (opening, middlegame, endgame), sequences of moves serve identifiable purposes within these phases (control of the center, attack on the left or right side, boxing in the king...), and each move can be interpreted as serving one or multiple roles within one sequence (and across sequences, e.g. pivoting an attack into a defense).

In addition, these intuitions benefit from a perfectly defined set of micro-level rules and macro-level goals, effectively unlimited data (with the ability to generate more through software), and even analytical tools in the form of AI scoring systems that allow us to test counterfactuals and gauge how much each move influences each player's chances of success.
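As a sketch of what such counterfactual testing can look like in practice, assuming the python-chess library and a locally installed Stockfish binary (the path, search depth and influence measure below are illustrative choices, not a committed methodology):

```python
import chess
import chess.engine

def move_influence(board: chess.Board, move: chess.Move,
                   engine: chess.engine.SimpleEngine, depth: int = 12) -> int:
    """Centipawn swing (from White's point of view) attributable to
    playing `move`, relative to the evaluation of the position before it."""
    limit = chess.engine.Limit(depth=depth)
    before = engine.analyse(board, limit)["score"].white().score(mate_score=10_000)
    board.push(move)
    after = engine.analyse(board, limit)["score"].white().score(mate_score=10_000)
    board.pop()
    return after - before

# Compare the counterfactual impact of every legal move in a position.
with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    board = chess.Board()
    scores = {m.uci(): move_influence(board, m, engine)
              for m in list(board.legal_moves)}
    print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5])
```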

Strategy games are an interesting case of having two opposing global constraints, such that every mesoscopic structure’s role is defined with respect to the two, providing clear cases of multifunctionality and exaptation (e.g. making a defensive move to protect oneself, and later turning it into an attack on the opponent).

 

I will probably rewrite this third section as I refine my way of communicating what is actually important about these exemplars.

Next step: Building a Rosetta stone! 
