Below are some notes that I took while trying to understand what exactly systems theory is all about.

System

There is no universally agreed-upon definition of ‘system’, but in general a system is seen as at least two elements that are interconnected. It is also common for systems to be talked about as if all of the components in the system work together to achieve some overall purpose or goal; the primary goal is often survival. A commonly accepted definition is below (note that the word ‘element’ is often replaced with ‘component’ for generality):

a system is a set of two or more interrelated elements with the following properties (Ackoff, 1981, pp. 15-16):

  1. Each element has an effect on the functioning of the whole.
  2. Each element is affected by at least one other element in the system.
  3. All possible subgroups of elements also have the first two properties.
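
To make this concrete, here is a minimal sketch (my own illustration; the element names and the influence relation are invented) that models a candidate system as a directed influence graph and checks Ackoff's first two properties:

```python
# Model a candidate system as a directed influence graph and check
# Ackoff's first two properties. Purely illustrative.

influences = {                       # element -> elements it affects
    "heart": {"lungs", "brain"},
    "lungs": {"heart", "brain"},
    "brain": {"heart", "lungs"},
}
elements = set(influences)

# Property 1: each element has an effect on the functioning of the whole,
# approximated here as "affects at least one other element".
property_1 = all(influences[e] - {e} for e in elements)

# Property 2: each element is affected by at least one other element.
affected = {t for e in elements for t in influences[e] if t != e}
property_2 = elements <= affected

print(property_1 and property_2)     # True: consistent with the definition
```

The third property (that every subgroup also satisfies the first two) would require checking every subset of elements, which grows exponentially; for a sketch like this, the first two checks convey the idea.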

 

Non-systems are generally considered to be single instances, or sets of elements that lack interconnections, although these may themselves be parts of a system.

Sand scattered on a road by happenstance is not, itself, a system. You can add sand or take away sand and you still have just sand on the road. Arbitrarily add or take away football players, or pieces of your digestive system, and you quickly no longer have the same system. (Meadows, 2009, p. 12)

 

Environment

A system's environment consists of all variables which can affect its state. External elements which affect irrelevant properties of a system are not part of its environment. [...] A closed system is one that has no environment. An open system is one that does. (Ackoff, 1971, p. 663)

The environment is often referred to as the context in which the system is found, or as its surroundings. Systems are considered closed if they have no interaction with their environment. Systems are often treated as closed for practical reasons even when they are not strictly closed, but merely have limited interaction with their environment.

Boundary

The boundary is the separation between the system and its environment. The actual point at which the system meets its environment is called an 'interface'. Often the boundary is not sharply defined; boundaries are frequently conceptual rather than existing in nature.

As any poet knows, a system is a way of looking at the world. (Weinberg, 1975, p. 52)

It’s a great art to remember that boundaries are of our own making, and that they can and should be reconsidered for each new discussion, problem, or purpose. (Meadows, 2009, p. 99)

The system therefore consists of all the interactive sets of variables that could be controlled by the participating actors. Meanwhile, the environment consists of all those variables that, although affecting the system's behaviour, could not be controlled by it. The system boundary thus becomes an arbitrary subjective construct defined by the interest and the level of the ability and/or authority of the participating actors. (Gharajedaghi, 1999, pp. 30-31)

 

Interactions (Inputs/Outputs)

Conventional physics and physical chemistry deal with closed systems (Bertalanffy, 1968, p. 32)

Closed systems are those which are considered to be isolated from their environment. This property of 'closedness' is often required in scientific analysis because it makes it possible to calculate future states accurately. The problem is that many systems are open; living organisms, for example, are open systems that exchange matter with their environment. A living organism requires oxygen, water and food in order to survive, and it gains all of these by interacting with its environment. This interaction has two components: input, that which enters the system from the outside, and output, that which leaves the system for the environment.
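
As a toy illustration of input and output (my own sketch, with invented numbers), consider a stock that continuously receives matter from its environment and loses a fraction of itself back to it:

```python
# Toy open system: a tank exchanging matter with its environment.
# All numbers are invented for illustration.

stock = 100.0         # amount currently inside the system
inflow = 5.0          # input: enters from the environment each step
outflow_rate = 0.06   # output: fraction leaving to the environment each step

for _ in range(200):
    stock += inflow - outflow_rate * stock

# The stock settles where input balances output (inflow / outflow_rate):
print(round(stock, 1))                  # ~83.3
print(round(inflow / outflow_rate, 1))  # 83.3
```

Cutting the inflow to zero leaves a stock that simply drains away, which mirrors the point that an organism cut off from its inputs cannot sustain itself.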

 

Subsystem and supersystem

The environment can itself consist of other systems interacting with their own environments. A greater system is referred to as a supersystem or suprasystem. A system that contains subsystems is said to have a hierarchy; that is, different levels in the system may be different sets of systems. An intuitive demonstration of hierarchy, specifically nested hierarchy, is a set of Russian nesting dolls. Other types of hierarchy include:

  • Subsumptive containment (“is a” hierarchy) - an example: a square is a polygon, which is a shape.
  • Compositional containment (“part of” hierarchy) - an example is considering an aircraft by decomposing it into its constituent systems, e.g. the propulsion system, the flight-control system, and so on.

(Booch, et al., 2007)
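
The two kinds of hierarchy map naturally onto inheritance and composition in a programming language (a minimal sketch of my own; the class names are invented):

```python
class Shape: ...
class Polygon(Shape): ...       # subsumptive: a polygon *is a* shape
class Square(Polygon): ...      # ...and a square *is a* polygon

class PropulsionSystem: ...
class FlightControlSystem: ...

class Aircraft:                 # compositional: these systems are *part of* it
    def __init__(self):
        self.propulsion = PropulsionSystem()
        self.flight_control = FlightControlSystem()
```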

 

Hard and soft systems

Systems are commonly differentiated based on whether they are hard or soft. Hard systems are precise, well defined and quantifiable, whereas soft systems are not. With soft systems, the system doesn't really exist as such; it is instead a label or theory about some part of the world and how it operates. The hard/soft distinction is really about different approaches to viewing the world systemically: the hard systems approach sees the world as systemic, while the soft systems approach sees the process of inquiry as systemic:

The use of the word ‘system’ is no longer applied to the world, it is instead applied to the process of our dealing with the world. It is this shift of systemicity (or ‘systemness’) from the world to the process of inquiry into the world which is the crucial intellectual distinction between the two fundamental forms of systems thinking, ‘hard’ and ‘soft’. (Checkland, 2000, p. 17)

 

Complexity

Complexity is not easy to define. Worse still, it can mean different things to different people. Even among scientists, there is no unique definition of Complexity. Instead, the scientific notion of Complexity – and hence of a Complex System – has traditionally been conveyed using particular examples of real-world systems which scientists believe to be complex. (Johnson, 2009, p. 3)

Some concepts which are related to and sometimes mistaken for complexity are (Edmonds, 1996, pp. 3-6):

  • Size - can be an indication of the general difficulty of dealing with a particular system and of the potential for that system to be complex, but it is not a sufficient definition of complexity, as the components of the system also need to be interrelated.
  • Ignorance - complexity can be a cause of ignorance, but other causes are also possible, so it is not useful to conflate the two terms. For example, it is not very helpful to describe the internal state of an electron as complex just because we are ignorant of it.
  • Minimum description size, also known as Kolmogorov complexity - by this definition, highly ordered expressions come out as simple and random expressions as maximally complex. The problem with this definition is that it is possible to have expressions in which most of the information is unrelated, so that the whole is incompressible and large, but ultimately simple. Relatedly, the more interrelations there are, the more compressible the expression is likely to be, but also the more complex, which is the opposite of what would be the case if minimum description size defined complexity (a rough computational illustration appears after this list).
  • Variety - some variety is necessary for complexity, but it is not sufficient for it. For example, a piece of atonal music contains more variety than a tonal piece, but it is not necessarily more complex.
  • Order and disorder - it is true that complex things exist between order and disorder, but it is better to consider this a characteristic of complexity rather than a defining attribute. It is often hard to measure things uniformly, and what appears as disorder may actually be complex. Edmonds illustrates this with three images (not reproduced here), each contained within the next: image 1 appears within image 2, which appears within image 3, making image 3 the most complex. If you were told that the last image had been created by a pseudo-random number generator, you would likely not view it as complex. This shows that the language of representation is important in determining complexity; since such diagrams have no inherent language, we have to impose one on them. Under the assumption of random generation the image would not be viewed as complex, whereas in reality, since the assumption is wrong, it is complex.

  • Chaos - ”Chaos is the generation of complicated, aperiodic, seemingly random behaviour from the iteration of a simple rule. This complicatedness is not complex in the sense of complex systems science, but rather it is chaotic in a very precise mathematical sense. Complexity is the generation of rich, collective dynamical behaviour from simple interactions between large numbers of subunits. Chaotic systems are not necessarily complex, and complex systems are not necessarily chaotic” (Rickles, Hawe, & Shiell, 2007). Complex systems in general differ from chaotic systems because they contain a number of constituent parts (“agents”) that interact with and adapt to each other over time, which can lead to emergent properties. In chaotic systems, uncertainty arises from the practical inability to know the initial conditions of the system. In complex systems, uncertainty is inherent because the system has emergent properties.
  • Stochastic - in stochastic or random dynamics there is indeterminacy in the future evolution of the system, which can be described with probability distributions. This means that even if the initial conditions were known, there are still many possible states that the system could reach, though some states may be more probable than others. A stochastic system is seen as the opposite of a deterministic system, which has no randomness in the development of its future states: the future dynamics of a deterministic system are fully defined by its initial conditions. A purely stochastic system can be fully described with little information, so complexity is a characteristic that is independent of the stochastic/deterministic spectrum.
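
The minimum-description-size item above can be illustrated with a compressor standing in for description length. Kolmogorov complexity itself is uncomputable, so this is only a rough proxy, and the sketch is my own:

```python
# Compressed size as a crude stand-in for minimum description size.
# Ordered data compresses well; random data barely compresses at all.
import random
import zlib

ordered = b"ab" * 500                                       # highly ordered, 1000 bytes
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(1000))   # random, 1000 bytes

print(len(zlib.compress(ordered)))   # small (tens of bytes)
print(len(zlib.compress(noise)))     # close to 1000 bytes
```

By this measure the random string comes out as 'maximally complex', which is exactly the counter-intuitive consequence Edmonds objects to.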

There are many definitions of complexity. Most of them revolve around the idea that the complexity of a phenomenon is a measure of how difficult it is to describe. One example of a decent definition that avoids the problems described above is:

that property of a language expression which makes it difficult to formulate its overall behaviour even when given almost complete information about its atomic components and their interrelations. (Edmonds, 1996, p. 6)

Another common definition that is used is:

Complexity is the property of a real world system that is manifest in the inability of any one formalism being adequate to capture all its properties. It requires that we find distinctly different ways of interacting with systems. Distinctly different in the sense that when we make successful models, the formal systems needed to describe each distinct aspect are NOT derivable from each other. (Mikulecky, 2005, p. 1)

The second definition highlights the point that complexity often leads to an inability of any single language or single perspective to describe all the properties of a system. This means that multiple languages and different perspectives are required just to understand a complex system. This has a very important consequence: no single perspective is absolutely correct. There are multiple truths and values, although some are more correct than others.

 

Organisation

Complexity is normally viewed as being either organised or disorganised. Disorganised complexity problems are ones in which the Law of Large Numbers works: even though there may be a multitude of agents all interacting together, their stochastic elements average out and so become predictable (on average) with statistics. Said another way, individual variation tends to reduce predictability, but the aggregate behaviour, if the individual variations cancel each other out, can be predicted. An example would be rolling a die: the exact outcome of a single roll cannot be known, assuming the die is not loaded, but with a large enough sample you can know that the average result approaches 3.5.
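A quick simulation of the die example (my own sketch) shows the Law of Large Numbers at work:

```python
# Individual die rolls are unpredictable, but the average over a large
# sample is not. Illustrative sketch.
import random

random.seed(42)
rolls = [random.randint(1, 6) for _ in range(100_000)]
print(sum(rolls) / len(rolls))   # close to the expected value of 3.5
```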

Problems of organised complexity, on the other hand, are not problems:

to which statistical methods hold the key. They are all problems which involve dealing simultaneously with a sizable number of factors which are interrelated into an organic whole. (Weaver, 1948, p. 5)

 

Complex Systems

Although there is no formally accepted definition of complexity or of complex systems, there are a number of intuitive features that appear in many definitions (Heylighen, 2008, pp. 4-7; Ladyman, Wiesner, & Lambert, 2013):

  • Complex systems can be neither too rigid, like the “frozen” arrangement of molecules in a crystal, nor too random, like molecules in a gas; they must have aspects of both. They are predictable in some respects and surprising in others. The intermediate position is called the edge of chaos: there is enough structure or pattern that the system is not random, and at the same time enough fluidity and emergent behaviour that it is not deterministic.
  • Complex systems have many components that are connected, distinct, autonomous and to some degree mutually dependent. The components are not completely mutually dependent, however, as they would be in a crystal, where the state of one molecule determines the state of all the others.
  • Complex systems have hierarchy, i.e. they are made up of different levels of systems. The way hierarchy works in complex systems is different from the way it works in simple systems: complex systems do not have a central control system, and they are often not neatly nested; instead they have a complex structure with possible interpenetration between the levels. This means that important roles can be played by apparently marginal components. The hierarchy is also not permanent but can be transformed; transformation does not imply that hierarchies are destroyed, they may just be shifted. (Cilliers, 2001)
  • Complex systems are commonly modelled as collections of agents, i.e. single systems that act upon their environment in response to the events they experience. Two examples of agents are people and cells. In regard to agents, such systems have the following features:
    • The number of agents in a system is generally seen to be in a state of flux as agents can multiply or “die”.
    • Agents are often implicitly assumed to be goal-directed with the primary goal being survival.
    • Agents can impact each other either locally or globally through interaction. An example of global impact would be the ripple produced by a pebble that locally disturbs the surface of the water, but then widens to encompass the whole pond.
  • Processes in complex systems are often non-linear. In a mathematical sense, this refers to disproportional relationships among the variables in the equations representing the system; feedback and mutual interaction between the variables are often the cause of non-linearity. A linear relationship between two quantities means that the two quantities are proportional to each other: for example, if you double the volume of water you also double its weight. In linear systems the superposition principle applies: the net response caused by two or more stimuli is the sum of the responses that each stimulus would have caused individually. Non-linear relationships are ones in which the superposition principle does not apply (a small numerical check appears after this list).
  • Complex systems, with their context-dependent components, cannot be fragmented into material parts; simple systems can be.
  • Complex systems are normally open which means that they exchange matter, energy and/or information with their wider environment. 
  • Complex systems often have memory which means that their prior states can influence their current behaviour.
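
The superposition point above can be checked numerically. This is my own minimal sketch with made-up functions: a linear response passes the test, a non-linear one fails it.

```python
# A system is linear if f(x + y) == f(x) + f(y) (and f(a*x) == a*f(x)).

def linear(x):        # e.g. weight of water as a function of volume
    return 3.0 * x

def nonlinear(x):     # a toy non-linear response
    return x ** 2

x, y = 2.0, 5.0
print(linear(x + y) == linear(x) + linear(y))             # True
print(nonlinear(x + y) == nonlinear(x) + nonlinear(y))    # False
```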

 

The features below are also found in complex systems; each will be described in its own section:

  • Complex systems have feedback
  • Complex systems exhibit spontaneous order. That is, they self-organize, which also gives them robustness.
  • Complex systems have emergent properties. This is summed up in the saying “the whole is greater than the sum of its parts”.

Feedback

“feedback” exists between two parts when each affects the other. […] The exact definition of “feedback” is nowhere important. The fact is that the concept of “feedback”, so simple and natural in certain elementary cases, becomes artificial and of little use when the interconnections between the parts become more complex. […] Such complex systems cannot be treated as an interlaced set of more or less independent feedback circuits, but only as a whole. (Ashby, 1999, p. 54)

Feedback is a circular causal process in which some portion of a system’s output is returned (fed back) into the system’s input. Feedback is an important mechanism in achieving homeostasis, also known as steady state or dynamic equilibrium. An example of a feedback mechanism in humans is the release of the hormone insulin in response to increased blood sugar levels: insulin increases the body’s ability to take in and convert glucose, which has the overall effect of restoring blood sugar to its original level.

Positive feedback is when small perturbations (deviations) reinforce themselves and have an amplifying effect. An example is emotional contagion: if one person starts laughing, this is likely to make others start laughing as well. Another example is the spread of a disease, where a single infection can eventually turn into a global pandemic. In positive feedback the effects are said to be larger than the causes. When it is the other way around (the effects are smaller than the causes), you have negative feedback: perturbations are gradually suppressed until the system eventually returns to its equilibrium state. Negative feedback has a dampening effect.
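
A one-line model (my own sketch, with invented gains) makes the difference in sign concrete: the same update rule amplifies a perturbation under positive feedback and suppresses it under negative feedback.

```python
# deviation' = deviation + gain * deviation
# gain > 0: positive feedback amplifies; gain < 0: negative feedback damps.

def run(gain, deviation=0.01, steps=30):
    for _ in range(steps):
        deviation += gain * deviation
    return deviation

print(run(+0.3))   # ~26: a tiny perturbation has been amplified
print(run(-0.3))   # ~0:  the perturbation has been suppressed
```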

 

Positive feedback can amplify small and random fluctuations into unpredictable and wild swings in the overall system behaviour, which would then be considered chaotic. Negative feedback makes a system more predictable by suppressing the effect of such swings and fluctuations. A consequence of this predictability is a loss of controllability: if negative feedback is present, a system pushed out of its equilibrium state will undertake some action to return to it. An example in social systems is social protest when leaders or governments try to implement unwanted changes.


Interactions that involve positive feedback are very sensitive to their initial conditions. An extremely small and often undetectable change in the initial conditions can lead to drastically different outcomes. This is known as the 'butterfly effect'. The phrase refers to the idea that a change as tiny as the flapping or non-flapping of a butterfly's wings can have a drastic effect on the weather in another part of the world, even going so far as to lead to a tornado. Note that the flapping of the wings does not cause the tornado; it is instead just one part of the initial conditions that produced the tornado. The flapping wing represents a tiny, seemingly insignificant change in the initial conditions that turns out to be extremely significant due to a cascading (domino) effect.

 

The butterfly effect is actually a concept from chaotic systems. It is important to note that if the initial conditions of a chaotic system were identical between two simulations to an infinite degree of precision, the outcomes of the two would be the same over any period of time; the systems are still deterministic. A similar but distinct notion in complex systems is the 'global cascade' (Watts, 2002). This is essentially a network-wide domino effect that occurs in a dynamic network. It has been noted that systems may appear stable for long periods of time, withstanding many external shocks, and then suddenly, for no apparent reason, exhibit a global cascade. For this reason, systems are both robust and fragile: they can withstand many shocks, making them robust, but global cascades can be triggered by shocks that are indistinguishable from others which were previously withstood. Because the original perturbations can be undetectable, the outcomes are in principle unpredictable.
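
Sensitive dependence in a deterministic system is easy to demonstrate with the logistic map in its chaotic regime (a standard textbook example; the particular numbers here are my own):

```python
# Logistic map x' = r * x * (1 - x) with r = 4 (chaotic regime).

def final_state(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

print(final_state(0.2) == final_state(0.2))   # True: still deterministic
print(final_state(0.2))
print(final_state(0.2 + 1e-12))               # an 'undetectable' change gives
                                              # a completely different outcome
```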

 

Complex systems tend to exhibit a combination of both positive and negative feedback, meaning that the effects of some changes are amplified and others dampened. This leads to overall system behaviour that is both unpredictable and uncontrollable.

 

Self-organization

Self-organization can be defined as the spontaneous emergence of global structure out of local interactions. “Spontaneous” means that no internal or external agent is in control of the process: for a large enough system, any individual agent can be eliminated or replaced without damaging the resulting structure. The process is truly collective, i.e. parallel and distributed over all the agents. This makes the resulting organization intrinsically robust and resistant to damage and perturbations. (Heylighen, 2008, p. 6)

 

The second law of thermodynamics says that “energy spontaneously tends to flow only from being concentrated in one place to becoming diffused and spread out.” (Lambert, 2015). An illustrative example is the fact that a hot frying pan cools down when it is taken off the kitchen stove: its thermal energy (“heat”) flows out into the cooler room air. The opposite never happens.

 

The second law of thermodynamics might at first glance appear to imply that all systems must degrade and cannot be sustained, but this is not the case. The second law was formulated for a different class of phenomena (originally steam engines) than living systems. The original class concerns steady-state phenomena at or near thermodynamic equilibrium (having uniform thermodynamic properties, e.g. temperature). Living and more complex systems are steady-state phenomena far from thermodynamic equilibrium: they are not isolated, but depend on a steady flux of energy that is dissipated to maintain a local state of organisation.

Metaphorically, the micro level serves as an entropy sink, permitting overall system entropy to increase while sequestering this increase from the interactions where self-organization is desired. (Parunak & Brueckner, 2001, p. 124)

In other words, at the macro level there is an apparent reduction in entropy (a measure of the spontaneous dispersal of energy), but at the micro level random processes greatly increase entropy. The system exports this entropy to its environment; for example, when we breathe we excrete carbon dioxide.

The term 'waste' is not really suitable for the products of excretion, because they may be used as input for other systems; plants excrete oxygen, which we humans require to survive. A better term is negentropy, the entropy that a living system exports in order to keep its own entropy low. In summary, living systems delay decay into thermodynamic equilibrium, i.e. death, by feeding on negentropy to compensate for the entropy they produce while living; to put it even more simply, they suck orderliness from their environment.

An organism stays alive in its highly organized state by taking energy from outside itself from a larger encompassing system and processing it to produce within itself a lower entropy more organized state (Schneider & Kay, 1992, p. 26)

 

Autopoiesis

Regenerative cycling (autopoiesis) is another common feature of self-organizing systems.

To destroy exergy, self-organising systems use the same general strategy: They load high exergy energy into compounds which later will give it away in degraded form. For efficient exergy uptake, a constant supply of compounds with low exergy must be available. These are often provided by an internal organisation supplying the site of exergy loading in the system with degraded material to be "reconstructed". If degraded material exist in ubiquitous amounts, there is no need for the organisation to provide it, but if the material is limited, a cyclic organisation delivering the material to the site of exergy uptake has survival value for the system. The more a substance is limiting, the higher the survival value for an organisation that keeps it within the system and transports it efficiently to the "re-loading" area of the system. This phenomenon is called the regenerative cycle. (Günther, 1994, p. 7)

The reason why more complex systems tend to be nested is that nested complex systems may have a larger capacity to degrade exergy than non-nested systems, because of the multiple layers of network reinforcement by feedback.

 

Dissipative structures

The view of self-organization that has been covered so far leads nicely into ‘dissipative structures’.

In Prigoginian terms, all systems contain subsystems, which are continually "fluctuating." At times, a single fluctuation or a combination of them may become so powerful, as a result of positive feedback, that it shatters the pre-existing organization. At this revolutionary moment - the authors call it a "singular moment" or a "bifurcation point" - it is inherently impossible to determine in advance which direction change will take: whether the system will disintegrate into "chaos" or leap to a new, more differentiated, higher level of "order" or organization, which they call a "dissipative structure." (Such physical or chemical structures are termed dissipative because, compared with the simpler structures they replace, they require more energy to sustain them.) (Prigogine & Stengers, 1984, p. 17)

A whirlpool is an example of a dissipative structure; it could have been called 'doubly dissipative' because it requires a continuous flow of matter and energy to maintain its form. When the influx of external energy stops or falls below a certain threshold, the whirlpool degrades. Other examples of dissipative structures include refrigerators, flames and hurricanes.

 

Attractors

In relation to self-organization, the term 'attractor' comes up frequently. It is a mathematical term which refers to a value or set of values toward which the variables of a dynamical system tend to evolve. A dynamical system is a system whose state evolves with time over a state space according to a fixed rule; a state space is the set of values that a process or system can take.
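
A minimal example of an attractor (my own sketch, using a standard fixed-point iteration): whatever state the system starts in, iterating the rule x' = cos(x) carries it to the same value.

```python
import math

for x0 in (-2.0, 0.1, 1.5, 3.0):      # very different initial states
    x = x0
    for _ in range(200):
        x = math.cos(x)               # the fixed rule of the dynamical system
    print(round(x, 6))                # 0.739085 every time: the attractor
```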

 

Attractors emerge, or at least grow stronger, when systems are moved out of equilibrium. Exergy is the energy that is available to be used; after the system and its surroundings reach equilibrium, the exergy is zero.

Exergy is a measure of how far a system deviates from thermodynamic equilibrium […]The existence of an exergy gradient over a system drives it away from equilibrium. […] If a system is moved away from thermodynamic equilibrium by the application of a gradient of exergy, an attractor for the system can emerge for the system to organise in a way that reduces the effect of the applied gradient.[…] An increase of the applied gradient will also increase the strength of the attractor. (Günther, 1994, pp. 5-7)

One of the most common ways in which systems reach these attractors is through small random fluctuations that are then amplified by positive feedback. This process is referred to as 'order from noise', a special case of the principle of selective variety. In summary, 'order from noise' means that random perturbations ('noise') cause the system to explore a variety of states in its state space. This exploration increases the chance that the system will arrive in the basin of a 'strong' or 'deep' attractor, from which it will then quickly enter the attractor itself.
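
A toy version of 'order from noise' (my own sketch, with invented parameters): noisy descent on a double-well potential. Started exactly on the unstable equilibrium between the two basins, pure determinism would leave the state there forever; noise makes it explore until it falls into one of the attractors.

```python
# Double-well potential V(x) = x**4 - 2*x**2 with attractors near x = -1, +1.
import random

random.seed(1)

def grad(x):                  # dV/dx
    return 4 * x**3 - 4 * x

x = 0.0                       # the unstable equilibrium between the basins
for _ in range(2000):
    x += -0.01 * grad(x) + random.gauss(0, 0.05)   # drift + noise

print(round(x, 2))            # ends near -1 or +1: a basin has been found
```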

Multiple equilibria occur when several different local regions of the same state space are attractors. Minor perturbations can cause the system to shift between different equilibria or attractors, causing abrupt and dramatic changes in the system.

 

Thresholds

Thresholds mark the borders between different equilibria, which means that crossing a threshold can produce dramatic changes in the system. The term 'threshold' is used broadly to mean the minimum amount of change required before impacts cause bifurcations or are recognized as important or dangerous. Thresholds can also be conditionally dependent: there may be many interdependent thresholds, or thresholds that become apparent only after other specific conditions have been met. This, along with their dependence on initial conditions, couplings with other system components, and rapid change between multiple equilibria, often makes thresholds hard to predict accurately.


Path dependence/hysteresis

Path dependence and hysteresis are both related to the phenomenon of system memory: the system cannot be explained from its current conditions alone. They tell us that a system's state depends not only on the system dynamics and input, but also on the previous states of the system, such as its initial conditions.

Path dependence is the idea that the current state of a system depends on the path it has taken. Hysteresis occurs when the removal of a stimulus does not return the system to its initial conditions; that is, the system behaves irreversibly. An example of path dependence in the climate system is vegetation cover. There are parts of the world where both dry grassland and wet rainforest are possible under the same climate boundary conditions; the state in which the system stabilizes depends on the system's past. It is possible that a fire, or deforestation by humans, could cause the rainforest to irreversibly become grassland even though the climate boundary conditions remain the same, because each vegetation type modifies its local climate and creates stable local conditions for its own existence. Another example is Arctic sea ice: once it is lost, sea ice is very hard to regrow sufficiently to subsist through the summer melt, even though thick sea ice could stably persist in the same climate conditions.
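
The grassland/rainforest example can be caricatured as a two-state system whose switching thresholds differ by direction, which is the essence of hysteresis (my own sketch; the states and thresholds are invented):

```python
def step(state, pressure):                    # pressure: e.g. fire, deforestation
    if state == "rainforest" and pressure > 0.8:
        return "grassland"                    # pushed past the upper threshold
    if state == "grassland" and pressure < 0.2:
        return "rainforest"                   # recovery needs far lower pressure
    return state

state = "rainforest"
for p in (0.5, 0.9, 0.5):                     # raise the pressure, then remove it
    state = step(state, p)
    print(p, state)
# 0.5 rainforest / 0.9 grassland / 0.5 grassland: the same input now gives a
# different state, because the state depends on the path taken, not just on
# the current input.
```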

Emergence

An intuitive understanding of emergence can be gained by looking at a painting made with the technique of pointillism. Up close, all you can see is dots, but as you move further back the overall image begins to resolve. Unfortunately, although emergence can be understood intuitively, it is not a well-clarified concept (Corning, 2002, pp. 6-8).

The concept of emergence is generally seen in contexts where two metaphysical claims are discussed (Christen & Franklin, pp. 1-2):

  1. Ontological monism is the claim that there is only one type of stuff in the world. The opposing view is the ‘vitalist’ position, which was promoted by Henri Bergson, for example. The vitalist position posits the existence of a life-substance, which Bergson called ‘élan vital’, that is inherently different from the inanimate stuff found in rocks and clouds and is the postulated reason for life’s unique properties. Reductionists and emergentists alike rule out vitalism in favour of ontological monism because they see vitalism as unparsimonious and unscientific.
  2. Hierarchical realism is the claim that any system under investigation can be broken into hierarchical levels. In every system there are at least two levels: the ‘lower level’, which consists of the parts, and the ‘upper level’, which consists of the whole system. The two levels can be connected through:
    • Microdeterminism, where the parts in the lower level and their interactions fully determine the behaviour of the whole system.
    • Macrodeterminism, also called downward causation, where the upper level acts causally on the lower level: “downward causation is basically the result of the structural or functional organisation of the parts on the lower level (e.g. a feedback mechanism)” (Christen & Franklin, p. 12).

One well-known argument for why the entities of the world that evolved under disruptive conditions are likely to be organised hierarchically is the watchmaker parable (Simon, 1962, p. 470):

There once were two watchmakers, named Hora and Tempus, who manufactured very fine watches. Both of them were highly regarded, and the phones in their workshops rang frequently—new customers were constantly calling them. However, Hora prospered, while Tempus became poorer and poorer and finally lost his shop. What was the reason?

The watches the men made consisted of about 1,000 parts each. Tempus had so constructed his that if he had one partly assembled and had to put it down—to answer the phone, say—it immediately fell to pieces and had to be reassembled from the elements. The better the customers liked his watches, the more they phoned him and the more difficult it became for him to find enough uninterrupted time to finish a watch.

The watches that Hora made were no less complex than those of Tempus. But he had designed them so that he could put together subassemblies of about ten elements each. Ten of these subassemblies, again, could be put together into a larger subassembly; and a system of ten of the latter subassemblies constituted the whole watch. Hence, when Hora had to put down a partly assembled watch in order to answer the phone, he lost only a small part of his work, and he assembled his watches in only a fraction of the manhours it took Tempus.
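
Simon's parable is quantitative at heart. Using the figure he himself works with, an interruption probability of one in a hundred per part added, a back-of-envelope calculation shows why Hora wins (the code itself is my own illustration):

```python
p = 0.01                    # chance of an interruption while adding one part

tempus = (1 - p) ** 1000    # must assemble all 1000 parts without interruption
hora = (1 - p) ** 10        # only ever risks the current 10-part subassembly

print(f"{tempus:.6f}")      # ~0.000043: Tempus almost never finishes a watch
print(f"{hora:.2f}")        # ~0.90: Hora completes most subassemblies he starts
```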

 

Emergence can be categorized into a few different types (Christen & Franklin, pp. 6-7):

  • Pure phenomenological emergence - sees emergent properties as products of our ignorance or limitations. They are properties or behaviours that are at first sight surprising to the observer but, after a closer look at the lower level, are explainable and no longer surprising. Examples of this can be found in chaotic attractors. Likewise, planetary motion prior to Kepler would have been considered ‘emergent’, but it turned out to be something rather simple (an ellipse). This sense of emergence sees it not as a claim about the universe, but about our understanding of it.
  • Epistemic emergence - consists mainly of properties or behaviours that appear on the higher level but are reducible in the sense of Nagel-reduction. Ernest Nagel proposed that theoretical reduction requires "bridge laws" that allow translation between the vocabularies of the different levels. For example, he would claim that there exist bridge laws stating a law-like relation between any claim from chemistry, say, and a claim in the "reduction base" (physics). The reason for having a theory of the upper level is basically an instrumental one, as the description of the phenomena is more compressed using the upper-level theory. One example of this is the use of agent-based modelling (ABM).
  • Emergence of macroproperties - the emergence of macroproperties, of structural or functional organisation, in a self-organised process. An example is the Belousov-Zhabotinsky (BZ) reaction, which gains new properties when far from equilibrium.
  • Theoretical emergence - concerns primary laws. In evolutionary theory, for example, laws appear on an upper level because their application requires a certain minimal degree of structure/organisation, namely physically represented information which can mutate. The question in this case is whether such laws have the same status as the laws on the basic level.
  • Weak causal emergence - tells us that an upper-level phenomenon is weakly emergent with respect to the lower-level domain when it arises from that domain but its truths are unexpected given the principles governing it. Weakly emergent properties are seen as capable of being determined by observing or simulating the system, but not through any prior analysis.
  • Strong causal emergence - tells us that the whole is something more than the sum of its parts. That is, strongly emergent properties are higher-level phenomena that directly cause qualities in the lower-level components and are irreducible to those constituent components.
  • Mystic emergence - posits the existence of laws or macroproperties that appear at a certain level and are impossible to truly understand; they must be accepted as primitive components of nature. Examples of this are vitalism and creationism.

 

Adaptation

Adaptation is a relationship between a system and its environment. Systems are often classified as adaptable (able to be modified by an external agent) and/or adaptive (able to change itself).

An example problem (Ashby, 1960, p. 11) demonstrating the concept of adaptive behaviour is that of the cat and the fire. The cat's behaviour in response to the fire is at first likely to be unpredictable and inappropriate: it may paw at the fire, stalk it like a mouse, or walk unconcernedly onto it. It is unlikely to use the fire as a means of achieving homeostasis in body temperature; that is, it may sit far from the fire even when cold. Later, when the cat has had enough relevant experience with the fire, it will approach it and seat itself where the heat is moderate. If the fire burns low, it will move nearer; if a hot coal falls out, it will jump away. Its behaviour towards the fire is now considered 'adaptive'.

A form of behaviour is adaptive if it maintains the essential variables within physiological limits. (Ashby, 1960, p. 57)
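
Ashby's definition suggests a tiny simulation (entirely my own toy model, with invented numbers): an 'adapted cat' that acts only to keep an essential variable, its temperature, within limits.

```python
COMFORT = (18.0, 25.0)          # physiological limits for the essential variable

def temperature(distance):      # hotter nearer the fire (toy model)
    return 40.0 / (1.0 + distance)

distance = 5.0                  # start far away, too cold
for _ in range(20):
    t = temperature(distance)
    if t > COMFORT[1]:
        distance += 0.5         # too hot: move away from the fire
    elif t < COMFORT[0]:
        distance -= 0.5         # too cold: move nearer
print(round(distance, 1), round(temperature(distance), 1))  # settles in range
```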

 

Resilience

Resilience is the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks. (Walker, Holling, Carpenter, & Kinzig, 2004)

A nice way of thinking of resilience is as follows:

I think of resilience as a plateau upon which the system can play, performing its normal functions in safety. A resilient system has a big plateau, a lot of space over which it can wander, with gentle, elastic walls that will bounce it back, if it comes near a dangerous edge. As a system loses its resilience, its plateau shrinks. (Meadows, 2009, p. 77)

Resilience arises from a rich structure of many feedback loops that can work in different ways to restore a system even after a large perturbation. A single balancing loop brings a system stock back to its desired state. Resilience is provided by several such loops, operating through different mechanisms, at different time scales, and with redundancy—one kicking in if another one fails. A set of feedback loops that can restore or rebuild feedback loops is resilience at a still higher level—meta-resilience, if you will. Even higher meta-meta-resilience comes from feedback loops that can learn, create, design, and evolve ever more complex restorative structures. Systems that can do this are self-organizing. (Meadows, 2009, p. 76)

It is important to note that resilience doesn't mean that the system is static or constant. Resilient systems can be, and often are, very dynamic: short-term oscillations, fluctuations and long cycles of climax and collapse may be the norm. Conversely, systems that are constant over time can be un-resilient. This presents a problem, because people often want systems to be measurable and want variations over time to be minimised, and most people are unaware of what actually makes a system resilient, as it is often hard to see.

Static stability is something you can see; it’s measured by variation in the condition of a system week by week or year by year. Resilience is something that may be very hard to see, unless you exceed its limits, overwhelm and damage the balancing loops, and the system structure breaks down. Because resilience may not be obvious without a whole-system view, people often sacrifice resilience for stability, or for productivity, or for some other more immediately recognizable system property. (Meadows, 2009, p. 77)

 

Complex Adaptive Systems

Many natural systems (e.g. brains, immune systems, societies) are complex adaptive systems. Complex adaptive systems display the complexity of complex systems, but they are also able to adapt and evolve with a changing environment. This is often referred to as co-evolution rather than adaptation to a single distinct environment, because the environment itself contains other adapting systems.

 

References

  • Ackoff, R. (1981). Creating the Corporate Future. New York: John Wiley & Sons.
  • Ackoff, R. (1971). Towards a System of Systems Concepts. Management Science, 661-671.
  • Ashby, W. (1999). An Introduction to Cybernetics. London: Chapman & Hall.
  • Ashby, W. (1960). Design for a Brain: The Origin of Adaptive Behavior. New York: Wiley.
  • Bertalanffy, L. von. (1968). General System Theory. New York: George Braziller.
  • Booch, G., Maksimchuk, R., Engle, M., Young, B., Conallen, J., & Houston, K. (2007). Object-Oriented Analysis and Design with Applications. Boston: Addison-Wesley.
  • Checkland, P. (2000). Soft Systems Methodology: A Thirty Year Retrospective. Systems Research and Behavioral Science, 11-58.
  • Christen, M., & Franklin, R. The Concept of Emergence in Complexity Science: Finding Coherence between Theory and Practice.
  • Cilliers, P. (2001). Boundaries, Hierarchies and Networks in Complex Systems. International Journal of Innovation Management, 6-7.
  • Corning, P. (2002). The Re-Emergence of "Emergence": A Venerable Concept in Search of a Theory. Complexity.
  • Edmonds, B. (1996). What is Complexity? The Philosophy of Complexity per se with Application to Some Examples in Evolution. Manchester: Centre for Policy Modelling.
  • Gharajedaghi, J. (1999). Systems Thinking: Managing Chaos and Complexity. London: Elsevier.
  • Günther, F. (1994). Self-organisation in Systems Far from Thermodynamic Equilibrium. Sweden.
  • Heylighen, F. (2008). Complexity and Self-organization. In M. J. Bates & M. N. Maack (Eds.), Encyclopedia of Library and Information Sciences.
  • Johnson, N. (2009). Two's Company, Three is Complexity. In Simply Complexity: A Clear Guide to Complexity Theory (p. 3). Oneworld Publications.
  • Ladyman, J., Wiesner, K., & Lambert, J. (2013). What is a Complex System? European Journal for Philosophy of Science, 4-10.
  • Lambert, F. (2015, July 2). The Second Law of Thermodynamics! Retrieved from http://secondlaw.oxy.edu/two.html
  • Meadows, D. (2009). Thinking in Systems. London: Earthscan.
  • Mikulecky, D. (2005). Complexity Science as an Aspect of the Complexity of Science. In Worldviews, Science and Us (p. 1). Liverpool: University of Liverpool.
  • Parunak, H., & Brueckner, S. (2001). Entropy and Self-Organization in Multi-Agent Systems. International Conference on Autonomous Agents, 124-130.
  • Prigogine, I., & Stengers, I. (1984). Order out of Chaos: Man's New Dialogue with Nature. New York: Bantam Books.
  • Rickles, D., Hawe, P., & Shiell, A. (2007). A Simple Guide to Chaos and Complexity. Journal of Epidemiology and Community Health.
  • Schneider, E., & Kay, J. (1992). Life as a Manifestation of the Second Law of Thermodynamics. Mathematical and Computer Modelling, 25-48.
  • Simon, H. A. (1962). The Architecture of Complexity. Proceedings of the American Philosophical Society, 106(6), 467-482.
  • Walker, B., Holling, C., Carpenter, S., & Kinzig, A. (2004). Resilience, Adaptability and Transformability in Social-ecological Systems. Ecology and Society. http://www.ecologyandsociety.org/vol9/iss2/art5/
  • Watts, D. (2002). A Simple Model of Global Cascades on Random Networks. Proceedings of the National Academy of Sciences of the United States of America, 5766-5771.
  • Weaver, W. (1948). Science and Complexity. American Scientist.
  • Weinberg, G. (1975). An Introduction to General Systems Thinking. New York: Wiley.
Comments

Everything I've encountered on "systems theory" suggests to me that there is no such thing. The writings generally consist of a large quantity of words about the definitions of other words, but no mathematics and no predictions that were not already there before pulling it into the ambit of "systems".

Are there any counterexamples to this?

I consider control theory to be a part of systems theory, and given that you gave a talk on the virtues of control theory I think you value it. Apart from that, my thoughts about the space:

If you look at Seth Roberts's Shangri-La diet, it's based on systems thinking. It gives different answers than the standard nutritional paradigm, which holds that losing weight is about the linear effects of eating less and exercising more.

You don't need any math to understand the Shangri-La diet, but you do need a certain intellectual framework that considers systems to be important.

Mathematical predictions are only one aspect a theory can provide. Systems theory provides phenomenological primitives that can prevent you from dismissing the Shangri-La diet as strange and obviously crazy. It provides you with a better ontology that allows you to consider new solutions.

Hakob Barseghyan describes very well in his HPS100 course how the notion of a life force being important for biology came after Newton changed the accepted ontology, so that people basically thought that there's matter and that matter interacts with other matter via forces. Our current mainstream ontology of physicalism doesn't consider that a place for a vital force exists.

If we go back to Seth Roberts, Seth considers it a good idea to measure the fitness of a human by measuring the reaction time to short math queries. I don't think Seth wrote anywhere that he's measuring a "vital force" by doing so, but if you go back in history you find that people have measured the vital force via reaction tests.

Gunnar Stollberg argues that modern systems biology has a concept like the vital force with self-organisation/autopoiesis. A reemergence of vitalism could help explain why a person who writes down their ideal life, or writes about an emotional trauma, for four days ends up with significantly less sickness afterwards, as Laura King showed in "The Health Benefits of Writing About Life Goals".

Systems theory puts us in the position where we don't have to postulate any paranormal chi for a notion like life force to exist. It doesn't need math for that task.

I consider control theory to be a part of systems theory, and given that you gave a talk on the virtues of control theory I think you value it.

I certainly value the theory of control systems, and I think everyone should know its basic concepts. But the real thing looks like this (and all of the stuff that that links to). This is quite unlike what I've seen under the banner of "systems", including some of the references in the OP.

To be more positive, I get from some of your examples the idea that "systems thinking" means "not being stupid." Specifically, not being the sort of stupid that consists of thinking up a theory that is "obviously" true and failing to see whether it is. I don't have a problem with that sort of "systems theory".

Gunnar Stollberg argues that modern systems biology has a concept like the vital force with self-organisation/autopoiesis.

But he concludes by admitting that biologists have not taken this up (and briefly, absurdly, considers the hypothesis of a conspiracy to suppress it).

This is where it seems to me to wander off into the fog. Vitalism is an idea with no moving parts. As soon as you have moving parts to explain the phenomena that people pointed to and said "vital force!" about, the notion of vital force goes away. Likewise autopoiesis. The observation that organisms maintain certain variables fixed despite disturbing influences is not explained by giving the phenomenon a name. The thing for a scientist to do is to discover the mechanism. For example, where does a mammal's body sense its core temperature? How is the reference signal generated, and raised in case of fever? And so on. When the whole story is known, we know not merely that it self-regulates, but how.

To be more positive, I get from some of your examples the idea that "systems thinking" means "not being stupid."

Not being stupid in a way where the majority of our society is stupid.

But he concludes by admitting that biologists have not taken this up

Just as the majority of nutrition scientists haven't taken up Seth Roberts's Shangri-La diet or done research in that direction.

When hearing discussion of a topic like homeopathy, I never see references to the fact that quizzing a patient for two hours about the traumas of his life, instead of talking with him for 5 minutes, has a good chance of having beneficial health effects. It's well replicated that writing about your traumas creates health benefits. That's because of the paradigm in which our medicine is practiced.

Calling that mysterious variable that gets raised by writing about trauma "vital force" might not be a good explanation, but once you take that step you can ask new research questions. If you allow people to simply call it vital force, you open up new questions: Is there a way to measure it? Can we measure the "vital force" a day after the trauma writing and see whether the writing worked at raising it? Can we then use that score to predict days of illness?

The thing for a scientist to do is to discover the mechanism

If you believe that's the only thing scientists are allowed to do, then they won't be able to do work where predictions can be made but the underlying mechanism is elusive.

Would you forbid psychologists from talking about IQ and g because they can't tell you the mechanism by which IQ/g works?

Currently dualism isn't dead. Psychologists who work on the mind are allowed to use the concept of IQ without providing a mechanism, but biologists are not allowed to do something similar with a life-force metric.

Just to be complete: no, I don't think that there's a conspiracy that forbids biologists from doing so. It's just the paradigm, and people being stupid. I think systems theory provides a way out.

Would you forbid psychologists from talking about IQ and g because they can't tell you the mechanism by which IQ/g works?

It depends what they say about it. There are observable and fairly robust statistical correlations from which g can be constructed, and g can be used to make (rather weak in the individual case) predictions of various sorts. That does not make g a thing. I predict that if we ever find out how the brain works, g will not be a part of that mechanism, just a rough statistical regularity, as it is at present.

Currently dualism isn't dead. Psychologists who work on the mind are allowed to use the concept of IQ without providing a mechanism, but biologists are not allowed to do something similar with a life-force metric.

If life force is going to be the same sort of thing as g, it might be useful in medicine, which to a substantial and increasing extent is based on statistical trials with little knowledge of mechanisms. But I don't see it as useful for research into how things work.

If life force is going to be the same sort of thing as g, it might be useful in medicine, which to a substantial and increasing extent is based on statistical trials with little knowledge of mechanisms. But I don't see it as useful for research into how things work.

I think that "finding out how things work" should not be the goal of science. The goal should be to develop models that provide reliable and useful predictions.

Newton postulated gravitation as a force without telling his audience how gravity works. The fact that Newton couldn't explain it slowed down the adoption of his model, yet accepting his model brought science a huge step forward, even on many issues that are about research into how things work. Theories that provide additional predictive power help science advance even if their proponents can't explain everything from the ground up.

To get back to system theory: it allows us to say "emergence" when we don't know how something comes about, and still work with what comes about. When someone tells you that homeopathy doesn't work because there is no such thing as an infinitely small number of atoms, he has a valid argument: our ontological framework doesn't allow infinitely small numbers of atoms. People who have never heard of systems theory, or of subfields of it like control theory, will have a similar reaction to the Shangri-La diet as to homeopathy. The ontology doesn't allow for it.

System theory then allows for an ontology in which it can happen. That's valuable. When you go through a specific example, you can also think about what the various terms of system theory might correspond to when you apply it to the system you study. That provides you with a structure for modelling the problem even if you don't have enough data for mathematical modelling.

We have no idea how the set point for blood pressure is set in the human body, but it's worthwhile to think of blood pressure regulation as a system that has a set point even if we don't know how that point is set. From a medical standpoint we can think differently about the system by looking at it through the lens of system theory.

To get back to the life force: it's good when we become freer to focus on increasing the predictive power of our models without worrying too much about whether we currently know the mechanism behind a certain value. Sometimes it can even be useful to free our concepts from the goal of explaining mechanisms. A term like shaken baby syndrome can be quite problematic if you find out that 1% of the cases of babies with "shaken baby syndrome" weren't shaken.

The thing for a scientist to do is to discover the mechanism

If you believe that's the only thing scientists are allowed to do, then they won't be able to do work where predictions can be made but the underlying mechanism is elusive.

"Discover", not "have discovered". Newton's work was a step; Einstein finding more of a mechanism was a further step.

I think that "finding out how things work" should not be the goal of science. The goal should be to develop models that provide reliable and useful predictions.

It's difficult to get the latter without the former, if you want to make successful way-out-of-sample predictions. Otherwise, you're stuck in the morass of trying to find tiny signals and dismissing most of your data as noise.

It's difficult to get the latter without the former, if you want to make successful way-out-of-sample predictions.

I think you can make a lot of successful predictions with IQ without knowing the mechanism of IQ. I don't think you build better IQ tests by going into neuroscience, but by giving the tests to people and seeing how different variables correlate with each other.

Otherwise, you're stuck in the morass of trying to find tiny signals and dismissing most of your data as noise.

I don't think that's true. The present approach of putting compounds through massive screening arrays, based on theoretical reasoning that it's good to hit certain biochemical pathways, is very noise-laden and produces a lot of false positives. More than 90% of drug candidates that get put into trials don't work out.

I think "system theory" used to be called cybernetics and (in its contemporary form) was basically invented by Norbert Wiener.

This might be splitting hairs, but I would probably call it a "framework" in the sense that it provides a context (e.g. language and concepts) within which more specific "theories" exist. Which theory, for example, would consider the similarities between feedback mechanisms in financial markets and in ecological systems?

Which theory, for example, would consider the similarities between feedback mechanisms in financial markets and in ecological systems?

Dynamic systems. I am not convinced it gains from being associated with a wider "systems theory".

Or perhaps, like Molière's bourgeois gentilhomme discovering that he had been speaking prose all his life, the message is that "systems thinking" is what I have always been doing?

Or perhaps, like Molière's bourgeois gentilhomme discovering that he had been speaking prose all his life, the message is that "systems thinking" is what I have always been doing?

Might be :-) I think cybernetics / system theory basically dissolved into a set of disciplines or theories, much like natural philosophy did a long time ago or, say, geography did fairly recently.

I think of it more as a particular lens through which to view problems, i.e. it is an alternative to reductionism. But perhaps its most useful aspect is that it allows the development of techniques which can be used to simulate complex systems. Ludwig von Bertalanffy described the set of theories that together comprise the framework of systems thought in the following passage:

Now we are looking for another basic outlook on the world -- the world as organization. Such a conception -- if it can be substantiated -- would indeed change the basic categories upon which scientific thought rests, and profoundly influence practical attitudes. This trend is marked by the emergence of a bundle of new disciplines such as cybernetics, information theory, general system theory, theories of games, of decisions, of queuing and others; in practical applications, systems analysis, systems engineering, operations research, etc. They are different in basic assumptions, mathematical techniques and aims, and they are often unsatisfactory and sometimes contradictory. They agree, however, in being concerned, in one way or another, with "systems," "wholes" or "organizations"; and in their totality, they herald a new approach. Quoted from: Systems Theories: Their Origins, Foundations, and Development

If you want to take a deep dive into complex systems, I found this useful.

My impression is that there are a few core ideas which get turned into frameworks by different people every few years, because rediscovery plus the generative effect are more fun and epiphany-inducing than reviewing the entire literature.

WRT predictions: systems theory is about modelling, and modelling always makes implicit predictions about the causal structure of the system. The better "systems theory frameworks" encourage turning these into explicit predictions/tests.
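As a toy illustration of that point (a hedged sketch with synthetic data; the stock-and-flow model and all numbers are invented): the model below encodes the implicit causal claim that outflow is proportional to the stock, and the test makes that claim explicit by scoring the simulated trajectory against observations.

```python
import random

random.seed(0)

def model(stock, decay_rate, steps):
    """Implicit causal claim: outflow each step is proportional to the stock."""
    trajectory = [stock]
    for _ in range(steps):
        stock -= decay_rate * stock
        trajectory.append(stock)
    return trajectory

# Hypothetical "observations": true decay of 10% per step, plus noise.
observed = [100.0 * (0.9 ** t) + random.gauss(0, 1) for t in range(21)]
predicted = model(stock=100.0, decay_rate=0.1, steps=20)

# The explicit test: how far do the model's predictions miss the data?
rmse = (sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)) ** 0.5
print(f"explicit test of the implicit claim: RMSE = {rmse:.2f}")
```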

You seem to be tacitly assuming that only a quantitative, predictive theory is a theory at all, but as far as general usage goes, the horse has bolted, because we have critical theory, cultural theory and other such handwavey things.

Many things are called theories, but they are not all the same sort of thing. I know little of critical theory or cultural theory, but I have a very slight acquaintance with music theory, so let me say what sort of thing that appears to be to me, and ask if these other "theories", including systems theory, are a similar sort of thing.

Musical theory is not the same sort of thing as the theory of Newtonian mechanics. It is more like (pre-neo-Darwinian) biological taxonomy (although different in important ways I'll come to). That is, it is an activity of classifying things into a framework, a structure of concepts. It makes no predictions, other than that these regularities will continue to be observed. Just as in taxonomy: when you come across a creature that you identify as a heron, you can be sure of a lot of things that you will subsequently observe if you follow it around. But there is no biology here: the classification is based purely on the appearance (perhaps including the results of microscopy) and behaviour of the organism, with no deeper knowledge to tell you how the variety of creatures came to be, or the biochemical processes by which they function. And just as in the history of taxonomy various classification schemes have been proposed, so in music theory there are alternatives to the standard stuff found in elementary textbooks (e.g. Schenkerian theory, Riemannian theory). There are even flamewars over them in internet forums.

Music theory and taxonomy are more like maps of contingent landscapes than a theory predicting things beyond the observed phenomena.

Biological taxonomy differs from music theory in two important ways. Firstly, the organisms exist independently of the taxonomical activity. In contrast, practitioners of music -- composers and performers -- are influenced by the theories. They create music within the frameworks that were derived from the music before them, or deliberately react against them and invent new theories to compose new sorts of music in, such as serialism.

Secondly, the development of biology has put empirical foundations underneath the taxonomical activity. (Here is a history of that process.)

[ETA: Sometimes to the effect of exposing some of the concepts as purely conventional. We know what physically underlies the concept of a species, and also know how fuzzy it can get. For other parts it has demonstrated that, e.g. there is no such thing as a genus, or a family, or a kingdom, any more than one can empirically distinguish twigs, branches and boughs: all the levels above species are just conventions convenient to have.]

No such empirical foundation exists for music. Composers are free to flout anyone's theory of what they are doing, and are ultimately bound only by the limits of the human ear.

So, I can read "cultural theory" and "critical theory" as being the same sort of activity as music theory. But that is at the expense of reading them as making true statements about something outside of themselves. They are descriptive maps, or rather, a multitude of competing and conflicting maps of the same territory. In fact, the activity of cultural theory might even be considered to be more like musical performance than musical theory. One does not go to a lecture in the area of cultural theory, critical studies, semiotics, and the like to learn true things, but to experience an intellectually entertaining assemblage of ideas floating as independently of the real world as an interpretation of a Rorschach blot.

What do you think? And where does this leave systems theory? If systems theory were like musical performance I would have little use for it, but I think its practitioners intend a more solid connection to the real world than that. Perhaps it is like taxonomy? Or something else?

[-]gjm

The initial definition is equivalent to one in which clauses 2 and 3 are replaced by "Every element is affected by every other". It seems unlikely that this is intended, both because surely there are plenty of things that count as "systems" in which that isn't true and because it would be easier to say it directly if it were intended. But it's not like it's difficult to see that clauses 2 and 3 have this implication (consider a "subgroup" consisting of just elements A and B; then clause 2 says A is affected by B and B is affected by A).

This leaves me a bit unimpressed with the quality of Ackoff's thinking. (And doesn't do much to dispel my prejudice along the same lines as Richard Kennaway's.)

EDITED to add: It looks as if I misunderstood what Ackoff meant by clause 3. My criticism may therefore be invalid. See discussion downthread.

I don’t think that “All possible subgroups of elements also have the first two properties” is the same as “All possible subgroups of elements can themselves be considered systems and so must have the first two properties”, which is how you seem to be reading it. Under the first reading, rule 2, “Each element is affected by at least one other element in the system”, allows a selected subgroup to be affected by an element that is in the system but not in the subgroup you have selected.

For example, imagine that the corners in this square represented four elements and the lines the relations between them.

As per my understanding of the rules, this is a system. The first two rules are obviously true. If you check the third rule against the two elements on the left side of the square, those two selected elements have no relation to each other, but they do each have relations to other elements in the system. So I believe this passes the rule.

Ackoff talks a little more about it here.

A system is a set of interrelated elements. Thus a system is an entity which is composed of at least two elements and a relation that holds between each of its elements and at least one other element in the set. Each of a system’s elements is connected to every other element, directly or indirectly. Furthermore, no subset of elements is unrelated to any other subset. (Ackoff, 1971, p. 662)

[-]gjm

Oh! So the subgroups are being considered as elements rather than as systems, and condition 3 is actually saying that every set of elements (other than the whole system, I assume) is affected by something outside itself? (Equivalently, however you partition the elements into two parts, there are influences flowing both ways across the boundary.)

You're right: that's a much more sensible definition, and I retract my claim that Ackoff's definition shows bad thinking. I maintain, however, that it shows bad writing -- though perhaps in context it's less ambiguous.

That last quotation, though. At first glance it nicely demonstrates that he has "your" reading in mind rather than "mine"; good for him. But look more closely at the last sentence. "No subset of elements is unrelated to any other subset". In particular, take two singleton subsets; his condition implies once again that every element is "related to" every other. So maybe I have to accuse him of fuzzy thinking again after all :-).
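To make the two readings concrete, here is a minimal Python sketch of the square example (the labels A–D and the choice of the square's sides as the only relations are my own reconstruction, since the original figure isn't reproduced here):

```python
from itertools import combinations

# Corners are elements, sides are mutual influence relations; the diagonals
# are deliberately *not* related, matching the square example above.
elements = {"A", "B", "C", "D"}
sides = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")}
affects = sides | {(b, a) for (a, b) in sides}  # influence runs both ways

def mutual_pairs_reading(elems, rel):
    """The first reading: every pair of elements affects each other directly."""
    return all((a, b) in rel and (b, a) in rel
               for a, b in combinations(elems, 2))

def partition_reading(elems, rel):
    """The second reading: for every split into two non-empty parts,
    influence flows both ways across the split."""
    for size in range(1, len(elems)):
        for part in combinations(elems, size):
            inside, outside = set(part), elems - set(part)
            out_to_in = any((o, i) in rel for o in outside for i in inside)
            in_to_out = any((i, o) in rel for i in inside for o in outside)
            if not (out_to_in and in_to_out):
                return False
    return True

print(mutual_pairs_reading(elements, affects))  # False: e.g. the diagonal pair
print(partition_reading(elements, affects))     # True: the square qualifies
```

Under the first reading the square fails (the diagonal pairs have no direct relation); under the second it passes, which is why the two readings give such different verdicts on Ackoff's definition.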

Beralanaffy

It's "Bertalanffy"

Regenerative cycling (autopoiesis) is another common feature of self-organizing systems. To destroy exergy...

Exergy happens to be a word that most readers likely don't know, and you don't define it.

Beralanaffy

Erratum: "Beralanaffy". Correction: "Bertalanffy".

One good definition of emergence is that it is:

the arising of novel and coherent structures, patterns and properties during the process of self-organization in complex systems

Not that good, since novelty is subjective. If the underlying system is deterministic, then whatever emerges is predictable given enough computational power.
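Rule 110, an elementary cellular automaton, is a standard illustration of that point: the update rule is fully deterministic, so every "emergent" pattern it produces is predictable in principle by just running the rule forward. A minimal sketch:

```python
# Elementary cellular automaton Rule 110: each cell's next state is looked up
# from the 3-bit neighbourhood (left, centre, right) in the rule number's bits.
RULE = 110

def step(cells):
    n = len(cells)
    return [(RULE >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 40
cells[-1] = 1  # a single seed cell
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
# Intricate structures "emerge", yet nothing here is unpredictable:
# the whole history follows mechanically from the seed and the rule.
```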

I rewrote that section.

It has supervenience (downward causation) - The system shapes the behaviour of the parts (roads determine where we drive)

That isn't how supervenience is usually defined.

I rewrote that section, but what I meant is that the upper-level properties of a system (where people drive, for example) are determined by its lower-level properties (the roads).
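As a toy illustration of the roads example (the grid layout and names are invented): the system-level road layout fixes where a car can go, so the car's trajectory follows from the structure rather than from anything about the car.

```python
# An L-shaped road on a grid; only road cells are drivable.
road = {(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)}

def moves(pos):
    """A car may only step onto adjacent road cells."""
    x, y = pos
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [c for c in candidates if c in road]

pos, path = (0, 0), [(0, 0)]
while True:
    nxt = [m for m in moves(pos) if m not in path]
    if not nxt:
        break
    pos = nxt[0]
    path.append(pos)
print(path)  # the route is fully determined by the road layout
```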

Thanks to ScottL for writing this concise yet (apparently) thorough overview of systems theory. I've long been curious about systems theory, mostly because the term systems biology sounds interesting, and this helps scratch that itch.

I may "Ankify" it, at least for org-drill.

RichardKennaway's posts here also added a lot of value. Based on this introduction, I basically agree that systems theory is a map without much predictive value. But I'll add that a map, or a vocabulary if you will, is useful in that it lets us indicate what we're talking about.

ScottL's Ludwig von Bertalanffy quote indicates that systems theory was invented about the time we started thinking about systems in general for real - biological systems, software systems, etc. At some point, you start needing some more precise language to use to build predictive theories.

BTW, I think of Marxism's dialectic as more or less a systems theory. Like systems theory, the dialectic has passionate adherents, and a lot of people who think it's incoherent. I find it very moderately useful.

I would appreciate it if you could Ankify this knowledge and find one-sentence descriptions of the individual terms, to be able to learn them with Anki.

FYI I am not going to be doing this because:

  1. Wikipedia has a list.
  2. I don't think the time spent trying to create one-sentence descriptions would help me to understand the terms better. They would help me memorize them, but that isn't my goal.

I don't think the time spent trying to create one-sentence descriptions would help me to understand the terms better. They would help me memorize them, but that isn't my goal.

In my experience trying to focus on the essence of a concept does help with understanding it better. But if Wikipedia already has a good list I will use that.