Do you know this textbook? I'd say it's a good overview of the "complex systems modelling toolbox".
I will note that I mostly bounced off the mentioned textbook when I was trying to understand what complex systems theory is. Habryka and I may just have different cruxes: he seems very concerned here about the methodology of the field, and the book definitely is a collection of concepts and math that complex systems theorists apparently found useful, but it wasn't big-picture enough for me when I was just very confused about what the actual goal of the field was.
I decided to listen to the podcast, and found it a far better pitch for the field than the textbook. Previously, when I tried to figure out what complex systems theorists were doing, I was never able to get an explanation that took a stand on what wasn't subject to complex systems theory, other than, of course, any simplification whatsoever of the object you're trying to study.
For example, the textbook has the following definition:
which, just, as far as I can tell, you can express anything you want in terms of.
In contrast, David gives this definition on the podcast (bolding my own):
0:06:45.9 DK: Yeah, so the important point is to recognize that we need a fundamentally new set of ideas where the world we're studying is a world with endogenous ideas. We have to theorize about theorizers and that makes all the difference. And so notions of agency or reflexivity, these kinds of words we use to denote self-awareness or what does a mathematical theory look like when that's an unavoidable component of the theory. Feynman and Murray both made that point. Imagine how hard physics would be if particles could think. That is essentially the essence of complexity. And whether it's individual minds or collectives or societies, it doesn't really matter. And we'll get into why it doesn't matter, but for me at least, that's what complexity is. The study of teleonomic matter. That's the ontological domain. And of course that has implications for the methods we use. And we can use arithmetic but we can also use agent-based models, right? In other words, I'm not particularly restrictive in my ideas about epistemology, but there's no doubt that we need new epistemology for theorizers. I think that's quite clear.
And he right away gives the example of a hurricane as a complex chaotic process that he would claim is not a complex system, in the sense he's using the term.
No, I don't think [a hurricane] counts. I think it's not useful. There was in the early days at SFI this desire to distinguishing complex systems and complex adaptive systems. And I think that's just become sort of irrelevant. And in order for the field to stand on its own, I think we have to recognize that there is a shared very particular characteristic of all complex systems. And that is they internally encode the world in which they live. And whether that's a computer or a genome in a microbe, or neurons in a brain, that's the coherent common denominator, not self-organizing patterns that you might find for example, in a hurricane or a vortex or, those are very important elements, but they're not sufficient.
In this framing, it becomes very clear why one would think biology, economics, evolution, neuroscience, and AI would be connected enough to form a field out of studying the intersection, and why many agent foundations people would be gravitating towards it.
This is a shorter 30-min intro to complexity science by David Krakauer that I really liked: https://www.youtube.com/watch?v=FBkFu1g5PlE&t=358s
It's true that the way some people define and talk about complex systems can be frustratingly vague and non-informative, and I agree that Krakauer's way of talking about it gives a big picture idea that, in my view, has some appeal/informativeness.
What I'd really like to see a lot more of is explicit model comparison, where problems are seen from the complex systems lens vs. from other lenses. Yes, there are single examples where complexity economics is contrasted with traditional economics, but I'm thinking of something far more comprehensive and systematic: take a problem that can be approached with various methodologies, implement them all within the same computational environment, and investigate what answers each of them gives, ideally with a clear idea of what constitutes a "better" answer compared to another. This would probably also be quite a research (software) engineering task.
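To illustrate the kind of comparison I have in mind (this is my own toy sketch, not an existing study, and every parameter and behavioral rule in it is made up for illustration): take a single price-formation problem, solve it once with a classical equilibrium calculation and once with a crude agent-based simulation of heterogeneous adaptive sellers, all in the same script, and compare the answers.

```python
import numpy as np

# Toy problem: price formation with linear demand D(p) = a - b*p and
# supply S(p) = c*p. All numbers here are illustrative only.
a, b, c = 10.0, 1.0, 1.0

# Lens 1: classical equilibrium -- solve D(p) = S(p) analytically.
p_star = a / (b + c)

# Lens 2: agent-based -- heterogeneous sellers adapt their price
# expectations at different speeds (a cobweb-style dynamic).
rng = np.random.default_rng(0)
n_agents = 100
expectations = rng.uniform(1.0, 9.0, n_agents)  # initial beliefs about price
speeds = rng.uniform(0.05, 0.5, n_agents)       # heterogeneous adjustment rates

prices = []
for _ in range(200):
    supply = c * expectations.mean()            # production based on beliefs
    price = (a - supply) / b                    # price that clears realized demand
    expectations += speeds * (price - expectations)  # adaptive updating
    prices.append(price)

print(f"equilibrium lens:  p* = {p_star:.3f}")
print(f"agent-based lens:  mean(p) = {np.mean(prices[-50:]):.3f}, "
      f"std(p) = {np.std(prices[-50:]):.4f}")
```

In this toy case the two lenses end up agreeing in the long run; the interesting (and much harder) research-engineering work is in the settings where they don't, and in deciding what counts as the "better" answer there.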
Yeah, I would be pretty keen to see more work trying to do this for AI risk/safety questions specifically: contrasting what different lenses "see" and emphasize, and what productive critiques they have to offer each other.
Over the last couple of years, valuable progress has been made towards stating the (more classical) AI risk/safety arguments more clearly, and I think that's very productive for leading to better discourse (including critiques of those ideas). I think we're a bit behind on developing clear articulations of the complex systems/emergent risk/multi-multi/"messy transitions" angle on AI risk/safety, and also that progress on this would be productive on many fronts.
If I'm not mistaken there is some work on this in progress from CAIF (?), but I think more is needed.
(high-level comment)
To me, it seems this dialogue diverged a lot into a question of what is self-referential, how important that is, etc. I don't think that's The core idea of complex systems, nor does it seem to be a crux for anything in particular.
So, what are core ideas of complex systems? In my view:
1. Understanding that there is this other direction (complexity) physics can expand to; traditionally, physics has expanded in scales of space, time, and energy, starting from everyday scales of meters, seconds, and kilograms, and gradually understanding the world on more and more distant scales.
While this was super successful, on a careful look you notice that claims like 'we now understand deeply how the basic building blocks of matter behave' come with an asterisk, a disclaimer/footnote like 'this does not mean we can predict anything if there are many of these blocks and they interact in nontrivial ways'.
This points to another direction in which the physics way of thinking can be extended, different from 'smaller', 'larger', 'higher energy', etc., and also different from 'applied'.
Accordingly, good complex systems science is often basically the physics way of thinking applied to complex systems. Parts of statistical mechanics fit neatly into this but, having been developed first, carry a somewhat more specific brand.
Why this isn't done just under the brand of 'physics' seems, in my view, to rest on the often problematic habit of classifying fields by subject of study rather than by methods. I know of people who tried to do, e.g., the physics of some phenomena in economic systems and had a hard time surviving in traditional physics academic environments ("does it really belong here if instead of electrons you are now applying it to some ...markets?").
(This is not really strict; for example, decent complex systems research is often published in venues like Physica A, which is nominally about Statistical Mechanics and its Applications.)
2. 'Physics' in this direction often stumbled upon pieces of math that are broadly applicable in many different contexts. (This is actually pretty similar to the rest of physics, where, for example, once you have the math of derivatives, or math of groups, you see them everywhere.) The historically most useful pieces are e.g., math of networks, statistical mechanics, renormalization, parts of entropy/information theory, phase transitions,...
Because of the above-mentioned (1.), it's really not possible to show 'how this is a distinct contribution of complex systems science, in contrast to just doing physics of nontraditional systems'. Actually, if you look at the 'poster children' of 'complex systems science'... my maximum-likelihood estimate of their background is physics. (Just googled the authors of the mentioned book: Stefan Thurner... obtained a PhD in theoretical physics, worked on e.g. topological excitations in quantum field theories, statistics and entropy of complex systems. Peter Klimek... was awarded a PhD in physics. Albert-László Barabási... has a PhD in physics. Doyne Farmer... University of California, Santa Cruz, where he studied physical cosmology, etc. etc.) Empirically, they prefer the brand of complex systems over just physics.
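To make the 'math of networks' point concrete, here is a minimal sketch of my own (using networkx; the parameters are arbitrary) of the kind of regularity that turned out to be portable across very different systems: preferential-attachment networks develop heavy-tailed degree distributions, unlike classical random graphs with the same average degree.

```python
import networkx as nx
import numpy as np

# The kind of observation Barabási made about real-world networks: their
# degree distributions are heavy-tailed, unlike classical random graphs.
# Toy comparison; parameters chosen only for illustration.
n = 5000
ba = nx.barabasi_albert_graph(n, m=3, seed=1)   # preferential attachment
er = nx.erdos_renyi_graph(n, p=6 / n, seed=1)   # same average degree (~6)

for name, g in [("preferential attachment", ba), ("Erdos-Renyi", er)]:
    degrees = np.array([d for _, d in g.degree()])
    print(f"{name:>24}: mean degree = {degrees.mean():.1f}, "
          f"max degree = {degrees.max()}, "
          f"99th percentile = {np.percentile(degrees, 99):.0f}")
```

The same heavy-tail signature shows up empirically in citation networks, the internet, protein interaction networks, and so on, which is what made this particular piece of math so widely applicable.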
3. Part of what distinguishes complex systems [science / physics / whatever ...] is its aesthetics. (Here it also becomes directly relevant to alignment.)
A lot of traditional physics and maths basically has a distaste for working on problems which are complex, too much in the direction of practical relevance, too much driven by what actually matters.
The aforementioned Albert-László Barabási got famous for investigating the properties of real-world networks, like the internet or transport networks. Many physicists would just not work on this because it's obviously 'computer science' or something like that, the subject being computers. Discrete maths people studying graphs could have discovered the same ideas a decade earlier... but my inner sim of them says studying the internet is distasteful. It's just one graph, not some neatly defined class of abstract objects. It's data-driven. There likely aren't any neat theorems. Etc.
Complex systems has the opposite aesthetic: applying math to things that matter in the real world. Important real-world systems are worth studying also because of their real-world importance, not just for mathematical beauty.
In my view, AI safety would be on a better track if this taste/aesthetic were more common. What we have now often either lacks what's good about physics (the aim for somewhat deep theories which generalize) or lacks what's good about the complexity-science branch of physics (reality orientation; the assumption that you often find cool math by looking at reality carefully, rather than by just looking for cool maths).
To me, it seems this dialogue diverged a lot into a question of what is self-referential, how important that is, etc. I don't think that's The core idea of complex systems, nor does it seem to be a crux for anything in particular.
So, this matches my impression before this dialogue, but going back to this dialogue, the podcast that Nora linked does seem to me to indicate that self-referentiality and adaptiveness are the key things that define the field. To give a quote from the podcast that someone else posted:
We have to theorize about theorizers and that makes all the difference. And so notions of agency or reflexivity, these kinds of words we use to denote self-awareness or what does a mathematical theory look like when that's an unavoidable component of the theory. Feynman and Murray both made that point. Imagine how hard physics would be if particles could think. That is essentially the essence of complexity. And whether it's individual minds or collectives or societies, it doesn't really matter. And we'll get into why it doesn't matter, but for me at least, that's what complexity is. The study of teleonomic matter.
Which sure doesn't sound to me like "applying physics to non-physics topics". It seems to put self-referentiality pretty centrally into the field.
I really enjoyed this dialogue, thanks!
A few points on complexity economics:
The main benefit of complexity economics, in my opinion, is that it addresses some of the seriously flawed and over-simplified assumptions that go into classical macroeconomic models, such as rational expectations, homogeneous agents, and the economy being at equilibrium. However, it turns out that replacing these with more relaxed assumptions is very difficult in practice. Approaches such as agent-based models (ABMs) are tricky to get right, since they have so many degrees of freedom. I do still think this is a promising avenue of research, but maybe it needs more time and effort to pay off. Although it's possible that I'm falling into a "real communism has never been tried" trap.
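As a deliberately minimal illustration of what dropping the representative-agent and equilibrium assumptions can look like, here is a toy kinetic wealth-exchange model in the spirit of Drăgulescu and Yakovenko. It's my own sketch with arbitrary parameters, not a model anyone uses for policy:

```python
import numpy as np

# A minimal agent-based sketch: identical agents meet in random pairs and
# split their pooled wealth at random. No representative agent, no
# imposed equilibrium. Parameters are illustrative only.
rng = np.random.default_rng(42)
n_agents, n_rounds = 1000, 20000
wealth = np.ones(n_agents)  # everyone starts equal

for _ in range(n_rounds):
    i, j = rng.choice(n_agents, size=2, replace=False)
    pool = wealth[i] + wealth[j]
    share = rng.uniform()
    wealth[i], wealth[j] = share * pool, (1 - share) * pool

# Despite identical agents and a trivially simple rule, a strongly
# unequal (roughly exponential) wealth distribution emerges.
wealth_sorted = np.sort(wealth)[::-1]
top_10_share = wealth_sorted[: n_agents // 10].sum() / wealth.sum()
print(f"top 10% wealth share: {top_10_share:.1%}")
print(f"mean / median wealth: {wealth.mean() / np.median(wealth):.2f}")
```

Even with identical agents, strong inequality emerges endogenously, which is the kind of qualitative behavior that representative-agent equilibrium models assume away. The flip side is exactly the degrees-of-freedom problem mentioned above: small changes to the exchange rule (e.g., adding a saving propensity) change the emergent distribution.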
I also think that ML approaches are very complementary to simulation-based approaches like ABMs.
In particular, the complexity economics approach is useful for dealing with the interactions between the economy and other complex systems, such as public health. There was some decent research done on economics and the COVID pandemic, such as the work of Doyne Farmer, a well-known complexity scientist: https://www.doynefarmer.com/covid19-research.
It's hard to know how much of this "heterodox" economics would have happened anyway, even in the absence of people who call themselves complexity scientists. But I do think complexity economics played a key role in advocating for these new approaches.
Having said that: I'm not an economist, so I'm not that well placed to criticise the field of economics.
More broadly I found the discussion on self-referential and recursive predictions very interesting, but I don't necessarily think of that as central to complexity science.
I'd also be interested in hearing more about how this fits in with AI Alignment, in particular complexity science approaches to AI Governance.
for example, it's not that rare for a researcher to win a Nobel prize in two fields
This statement is actually quite inaccurate. Only two individuals have won Nobel Prizes in two different scientific disciplines (pre-2024). To put this in perspective:
A total of 739 Nobel Prizes have been awarded in scientific categories (excluding Peace and Literature). With only two cases of cross-disciplinary laureateship, the occurrence rate is approximately 0.27% (2/739). This is indeed extremely rare.
I mean, it is clearly vastly above base rates. Agree that my sentence is kind of misleading here. The correlations across disciplines become more obvious when you also look at other types of achievements.
Ok, I do really like that move, and generally think of fields as being much more united around methodology than they are around subject-matter. So maybe I am just lacking a coherent pointer to the methodology of complex-systems people.
The extent to which fields are united around methodologies is an interesting question in its own right. While there are many ways we could break this question down which would probably return different results, a friend of mine recently analysed it with respect to mathematical formalisms (paper: https://link.springer.com/article/10.1007/s11229-023-04057-x). So, the question here is, are mathematical methods roughly specific to subject areas, or is there significant mathematical pluralism within each subject area? His findings suggest that, mostly, it's the latter. In other words, if you accept the analysis here (which is rather involved and obviously not infallible), you should probably stop thinking of fields as being united by methodology (thus making complex systems research a genuinely novel way of approaching things).
Key quote from the paper: "if the distribution of mathematical methods were very specific to subject areas, the formula map would exhibit very low distance scores. However, this is not what we observe. While the thematic distances among formulas in our sample are clearly smaller than among randomly sampled ones, the difference is not drastic, and high thematic coherence seems to be mostly restricted to several small islands."
Alas, I don't think that study really shows much. The result seems almost certainly caused by the measure of mathematical methods they used (something kind of like by-character-similarity of LaTeX equations), since they mostly failed to find any kind of structure.
In other words, you think that even in a world where the distribution of mathematical methods were very specific to subject areas, this methodology would have failed to show that? If so, I think I disagree (though I agree the evidence of the paper is suggestive, not conclusive). Can you explain in more detail why you think that? Just to be clear, I think the methodology of the paper is coarse, but not so coarse as to be unable to pick out general trends.
Perhaps to give you a chance to say something informative, what exactly did you have in mind by "united around methodology" when you made the original comment I quoted above?