habryka

I have this high prior that complex-systems type thinking is usually a trap. I've had a few conversations about this, but still feel kind of confused, and it seems good to have a better written record of my thoughts and yours here.

At a high level, here are some thoughts that come to mind for me when I think about complex systems stuff, especially in the context of AI Alignment: 

  • A few times I ended up spending a lot of time trying to understand what some complex systems people were trying to say, only to end up thinking they weren't really saying anything. I think I got this feeling from engaging a bunch with the Santa Fe stuff and Simon DeDeo's work (like this paper and this paper).
  • A part of my model of how groups of people make intellectual progress is that one of the core ingredients is having a shared language and methodology that allows something like "the collective conversation" to make incremental steps forward. Like, you have a concept of experiment and statistical analysis that settles an empirical issue, or you have a concept of proof that settles an issue of logical uncertainty, and in some sense a lot of interdisciplinary work is premised on the absence of a shared methodology and language.
  • While I feel more confused about this in recent times, I still have a pretty strong prior towards something like g or the positive manifold, where like, there are methodological foundations that are important for people to talk to each other, but most of the variance in people's ability to contribute to a problem is grounded in how generally smart and competent and knowledgeable they are, and expertise is usually overvalued (for example, it's not that rare for a researcher to win a Nobel prize in two fields). A lot of interdisciplinary work (not necessarily complex systems work, but some of the generator that I feel like I see behind PIBBSS) feels like it puts a greater value on intellectual diversity here than I would.
Nora_Ammann

Ok, so starting with one high-level point: I'm definitely not willing to die on the hill of 'complex systems research' as a scientific field as such. I agree that there is a bunch of bad or kinda hollow work happening under the label. (I think the first DeDeo paper you link is a decent example of this: it feels mostly like taking some cool methodology and applying it to some random phenomenon, without really having a bigger vision of a deeper thing to be understood, etc.) 

That said, there are a bunch of things that one could describe as fitting under the complex systems label that I feel positive about, let's try to name a few: 

  • I do think, contra your second point, complex systems research (at least its better examples) has a lot of/enough shared methodology to benefit from the same epistemic error correction mechanisms that you described. Historically it really comes out of physics, network science, dynamical systems, etc. The main move was, rather than indexing the boundaries of a field on the natural phenomena or domain it studies (e.g. biology, chemistry, economics), to instead index it on a set of methods of inquiry, with the premise that you can usefully apply these methods across different types of systems/domains and gain valuable understanding of underlying principles that govern these phenomena across systems (e.g. scaling laws shaping biological organisms as well as the growth of cities, etc.). A minimal illustration of this cross-domain move follows after this list.
  • I think a (typically) complex systems angle is better at accounting for environment-agent interactions. There is a failure mode of naive reductionism that starts by fixing the environment in order to home in on which system-internal differences produce which differences in the phenomenon, and then concludes that all of what drives the phenomenon is system-internal, forgetting that it artificially fixed the environment earlier in order to reduce the complexity of the problem at hand. It's often fine and practically useful to fix one part of the equation of complex interactions, but you shouldn't forget along the way that that's what you did. Similarly, the complex systems lens tends to be better at paying attention to interactions across levels of abstraction, and the dynamics that emerge from these interactions, which also seem valuable for understanding natural phenomena.
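To gesture at what that cross-domain methodology looks like in practice, here is a minimal sketch (my own, on synthetic data; only the two exponents echo the published scaling-law literature) of the same piece of method, a power-law fit in log-log space, applied to two very different domains:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_power_law(x, y):
    # Fit y ~ c * x**b by linear regression in log-log space.
    b, log_c = np.polyfit(np.log(x), np.log(y), 1)
    return b, np.exp(log_c)

# Hypothetical stand-in for organism mass vs. metabolic rate;
# the ~0.75 exponent echoes Kleiber's law.
mass = np.logspace(0, 6, 200)
metabolic = 0.5 * mass**0.75 * rng.lognormal(0, 0.1, mass.size)

# Hypothetical stand-in for city population vs. economic output;
# the ~1.15 exponent echoes the urban-scaling literature.
population = np.logspace(4, 7, 200)
output = 2.0 * population**1.15 * rng.lognormal(0, 0.1, population.size)

for name, x, y in [("organisms", mass, metabolic), ("cities", population, output)]:
    b, _ = fit_power_law(x, y)
    print(f"{name}: fitted scaling exponent ~ {b:.2f}")
```

The point isn't the fit itself but that an identical piece of methodology travels across biology and urban economics.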
habryka

to instead index it on a set of methods of inquiry, with the premise that you can usefully apply these methods across different types of systems/domains and gain valuable understanding of underlying principles that govern these phenomena across systems (e.g. scaling laws shaping biological organisms as well as the growth of cities, etc.)

Ok, I do really like that move, and generally think of fields as being much more united around methodology than they are around subject-matter. So maybe I am just lacking a coherent pointer to the methodology of complex-systems people.

habryka

Hmm, I guess my thoughts on complex systems stuff then kind of branch into two directions: 

Where are the wins? Do we have any success story for this methodology working well? I used to be a fan of network science, but then kind of bounced off of it. I like physics, though physics itself is so large, and has a lot of dysfunction in it, that it really matters which part of the physics methodology is imported.

Ok, so what is the foundation? It does seem kind of like I don't have a good inside view of the methodology of this field. Maybe I should just go on Wikipedia and read the summary of the methodology there. 

Nora_Ammann

A lot of interdisciplinary work (not necessarily complex systems work, but some of the generator that I feel like I see behind PIBBSS) feels like it puts a greater value on intellectual diversity here than I would.

Keen to say something about the type of epistemic pluralism that I care about (in the context of PIBBSS, among other places).

(Generally speaking, I think the "general smarts" concern feels pretty orthogonal to how I am thinking about epistemic pluralism, and it at least feels to me like I'm not forced to make trade-offs between the two. We could separately double click on that if you like, but let me first try to argue why I think they are orthogonal in the first place.)

I think one relevant premise for PIBBSS style work (which it shares with the complex systems lens, at least as I've framed it above) is the assumption that there are some underlying principles that govern intelligent behaviour across different systems, substrates and scales. If that is so, it suggests one approach to dealing with the problem that we don't have direct epistemic access to the sorts of AI systems we're most worried about: if we think they share some features/principles with other types of systems that implement intelligent behaviour, importantly systems we do have a better degree of epistemic access to, we can start to triangulate between what we understand about these different systems. This triangulation lets you gain more robust insights into those principles that are substrate/scale/system-agnostic.

habryka

I overall feel pretty sympathetic to, and interested in, studying intelligent behavior in the systems we have. However, I do notice that somehow I can't think of any work in this space that's felt very useful to me, at least in recent years. I really like Eliezer's analysis of evolution as an analogue to AI alignment, and it had a big effect on me. And I like Steve Byrnes' work on studying neuroscience to get insight into the AI Alignment problem, though that feels separate somehow (but that might genuinely just be me gerrymandering), and inasmuch as the goal is to produce more work like the original LW sequences' analysis of evolution and AI Alignment, I would feel pretty excited.

Nora_Ammann

On the foundations, roughly, my feeling is that there are different angles to go about this. Generally I feel a bit hesitant about the frame of "let me learn about complex systems science" - like, I think that concept isn't really the most useful way of carving the world (e.g. I think reading the Wikipedia page on this will be not that exciting). I do think there are some complex systems textbooks that are moderately neat if you're looking for ideas of how you could model different types of systems, and pointers at the math you need for that. But beyond that, at least according to my taste, I'd say think about what natural phenomena you're interested in understanding, and then try to find who is doing interesting work on those phenomena, or what ways of modelling them (mathematically) seem productive. My experience I guess is closer to: a) interested in understanding certain phenomena, b) finding some work out there that I found productive to engage with, c) noticing that a bunch of that can be labelled complex systems stuff.

habryka

Well, in this case I am asking the specific question of "in as much as there is a field here, what is its methodology?". I do a lot of studying of natural phenomena and am generally searching for good mathematical models, but I rarely end up finding things that are labeled as "complex systems". I usually just like, end up studying biology or physics or AI or math.

habryka

Here is the specific quote of yours that I was thinking about: 

I do think, contra your second point, complex systems research (at least its better examples) has a lot of/enough shared methodology to benefit from the same epistemic error correction mechanisms that you described. Historically it really comes out of physics, network science, dynamical systems, etc.

Which sounds to me like you are implying there is a field here that has an epistemic ratchet that can click forward and make coherent progress over time. But I currently feel like I don't have a good pointer at the mechanism of that ratchet.

Nora_Ammann

Do you know this textbook? I'd say it's a good overview of the "complex systems modelling toolbox". 

If you want a somewhat spicier, or maybe more ambitious, vision of what complex systems is about, you could listen to this interview with David Krakauer. My guess is you'd largely bounce off of it, though I do think it's pretty exciting (albeit the interview is denser than it appears at first glance). He talks about understanding "telic" phenomena (or some similar terminology), which (my rough paraphrase) he understands as emerging from the specific constraints that you get from adaptive systems that evolve and meta-evolve, etc. IMO this is interesting from an "understanding the foundations of agency/intelligent behavior" angle, and e.g. you end up trying to explain in naturalistic terms how things like the "backwards causation" characteristic of agency/planning can arise from simple dynamics.

In terms of systematic progress, for one, I think that progress is integrated with other scientific fields too - like, complex systems as a field cuts across the more traditional ways of carving scientific fields, so I don't think there is an a priori way of attributing progress to either one of them exactly. But I think the mechanisms of progress come from an interplay between [a toolbox of mathematical models (e.g. network science, dynamical systems, control theory, etc.)] and moving between the more abstract and the more concrete/empirical. 

Maybe I'm being too conflationary today, but I think that is just the same story as in all other scientific fields, and the main difference is in some of the underlying premises. Maybe the cleanest example is the move from classical economic theories to complexity economics. In the former, you start from a set of assumptions like: rational actors, all your agents are the same, markets are equilibrium systems. And then complexity economics comes along and says: hey guys, good news, we have better math tools now than just arithmetic, so we are now able to relax some of our classical assumptions and can e.g. model economic systems with premises such as: boundedly rational agents (with learning & memory dynamics, etc.), heterogeneous agents (e.g. different learning strategies), markets as out-of-equilibrium systems. 
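To make those relaxed premises concrete, here's a toy sketch (entirely my own construction, not a model from the complexity-economics literature; all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents = 100
learning_rates = rng.uniform(0.05, 0.5, n_agents)  # heterogeneous agents
expected_price = rng.uniform(8, 12, n_agents)      # each agent's own belief
price = 10.0

for t in range(200):
    # Bounded rationality: demand depends on each agent's belief,
    # not on perfect knowledge of the market.
    demand = np.clip(expected_price - price, 0, None).sum()
    supply = 8.0 * price
    # Out-of-equilibrium dynamics: the price moves with excess demand
    # instead of jumping straight to the market-clearing level.
    price += 0.01 * (demand - supply)
    # Adaptive expectations: each agent updates its belief toward the
    # realized price at its own learning rate (memory of past prices).
    expected_price += learning_rates * (price - expected_price)

print(f"price after 200 steps: {price:.2f}")
```

Nothing deep happens here; the point is just that once you allow heterogeneity, learning and disequilibrium dynamics, you reach for simulation-style tools rather than closed-form equilibrium conditions.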

habryka

Do you know this textbook? I'd say it's a good overview of the "complex systems modelling toolbox". 

I don't! I might take a look.

And then complexity economics comes along and says: hey guys, good news, we have better math tools now than just arithmetic, so we are now able to relax some of our classical assumptions and can e.g. model economic systems with premises such as: boundedly rational agents (with learning & memory dynamics, etc.), heterogeneous agents (e.g. different learning strategies), markets as out-of-equilibrium systems. 

Yeah, so this is definitely the kind of thing that sounds like a cool thing to do, but also, does it actually work? Like, I am not that grounded in economics, but I do read a lot of econ bloggers and think a bunch about economics in my own language, and I don't come across the complexity economics perspective a lot, and indeed kind of expect that if I were to read one of the "top" papers on complexity economics, I would end up feeling disappointed. But I might also just be lacking references here. 

This stands, for example, in contrast to something like cliometrics/econometric history, which I found really valuable, and which has a lot of cool models about how history works, but doesn't feel very complexity-science shaped to me.

Nora_Ammann

However, I do notice that somehow I can't think of any work in this space that's felt very useful to me

I partially agree with you and wish I could point to more, and more easily legible, examples. (At the same time, I don't feel like I have very many examples I find particularly exciting more broadly.)

A few non-comprehensive pointers to more current work:

  • Hierarchical agency/alignment work, amongst other things discussed/worked on by ACS
  • Developing naturalized accounts of intelligent phenomena (e.g. agency, planning, deception, power seeking, mesaoptimisation), where a naturalized account is meant to characterise the underlying mechanisms of a phenomenon such that you can identify it even when it occurs at (temporal, spatial) scales you haven't evolved to recognize it at -- with the hope that this can provide more robust ways to do e.g. interpretability and evals
  • Coming to have a more principled understanding of interacting AI systems, e.g. what evolutionary/emergent dynamics arise from having a bunch of LLM systems interact with each other in the wild (e.g. prompt evolution, emergence of scaffolded agents with different capability profiles, etc.)
  • Characterising "messy" AI risk scenarios, e.g. multiple transitions, RAAP, multi-multi delegation, ascended economy
habryka

So, the Wikipedia article on complexity economics says: 

The economic complexity index (ECI) introduced by Hidalgo and Hausmann[6][7] is highly predictive of future GDP per capita growth. In Hausmann, Hidalgo et al.,[7] the authors show that the ability of the ECI to predict future GDP per capita growth is between 5 times and 20 times larger than the World Bank's measure of governance, the World Economic Forum's (WEF) Global Competitiveness Index (GCI) and standard measures of human capital, such as years of schooling and cognitive ability.[8][9]

And like, I don't know, that sounds cool, but also my honest best guess is that this is fake somehow? Like, if I look into this I will find that "past GDP per capita growth" is a better predictor than this economic complexity index, or something as straightforward as that, and the only reason why they can claim this result is because they gerrymandered the alternative somehow.

habryka

Ok, I googled around a bit, and I can't find any obvious takedown that exposes the ECI as being obviously gerrymandered, and Our World in Data (who generally seem reasonable and like they think about this stuff in a cool way) have a favorable article on it on their blog, so I update that there is something more real here.

Nora_Ammann

Yeah, I definitely share some confusion about the visible successes being less than what my model would have predicted. 

This makes me update down a bit on the overall promise of the approach, but I also have uncertainty over other parts of my model, e.g. what success I would expect at what timescales and what "success" would look like. Also I think there are dynamics like "once a thing gets successful it gets more integrated into the mainline and thus less recognisable as the (once) unorthodox approach". I definitely expect some of this to be happening. I know they had some success in modeling e.g. financial crises by dropping the "markets are equilibrium systems" assumption; I remember reading some report (I believe from some international governance body, the OECD or the like, but can't remember the details) with economic recommendations around climate change and sustainability that were very clearly complexity-economics inspired and that sounded pretty reasonable; and I know a little bit about e.g. this work by Hidalgo, which seemed pretty cool based on a relatively shallow look (in part also because it makes a bunch of really cool real-world data accessible). 

Nora_Ammann

Hmm, on the economic index stuff: I mean one simple perspective on this all is something like

  • surely the thing that is really going on in the territory is extremely complex (like, there are historical path dependencies, there is the global economy that the national economy is embedded in, the country's specific resources, infrastructure, work force, etc. etc.)
  • it also surely seems like basically all classical economic models simplify reality A LOT, and that unorthodox approaches are making some of these assumptions more realistic
  • the question is how much the complexification of your model (or that particular complexification) buys you in terms of predictive power, relative to what you pay in terms of complexity costs (a toy illustration of this trade-off follows after this list)
  • One more thing that is weird about a domain like economics is that the economic theorising happens within (and thereby affects) the systems it's trying to predict. Like, when the World Bank issues some predictions, that itself affects e.g. investment flows into the country, interests on deposits etc. This makes economics (among other fields) importantly different from physics where in most cases you are in fact justified to ignore the fact that the theorisers are in the theory.
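On the complexity-cost point, here is a minimal sketch (synthetic data, my own construction) of the standard way of asking what a more complex model buys you: score models of increasing complexity on held-out data and watch the gains turn into overfitting.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # "true" signal + noise
train, test = np.arange(0, 20, 2), np.arange(1, 20, 2)  # interleaved split

for degree in [1, 3, 5, 9]:
    coeffs = np.polyfit(x[train], y[train], degree)
    held_out_mse = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)
    print(f"degree {degree}: held-out mean squared error = {held_out_mse:.3f}")
```

Typically the low-degree fits underfit, the middle ones pay for themselves, and the highest-degree fit buys extra complexity that mostly models the noise.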
habryka

One more thing that is weird about a domain like economics is that the economic theorising happens within (and thereby affects) the systems it's trying to predict. Like, when the World Bank issues some predictions, that itself affects e.g. investment flows into the country, interests on deposits etc. This makes economics (among other fields) importantly different from physics where in most cases you are in fact justified to ignore the fact that the theorisers are in the theory.

I agree that there is some reality to this, but I do think this is the kind of effect that feels like it's selected to feel clever, or meta, but doesn't actually matter? Like, I agree that the Fed of course has some effect on what the economy does by saying what it will do, but I find myself skeptical that you have to really take into account the effects of how people will change their behavior after you publish your economic theory, in addition to just like modeling what interest rates the Fed is setting as a target.

Like, I am not denying there is some effect here, but I doubt that it will be large.

This is importantly different from situations where someone makes an empirical observation, and that empirical observation turns out to either be the result of a human-enforced policy, or has turned into a human-enforced policy because of its regularity. For example, I find the story that Moore's law derived a bunch of its robustness from the fact that major semiconductor manufacturers set their internal targets according to Moore's law, kind of promising and interesting.

But that feels different than saying that you need to take into account the self-referential effects of actors in the economy taking your economic theory seriously.

Nora_Ammann

My best guess currently is that it does matter a great deal (though at somewhat larger timescales than year-to-year prediction, say). Like, I think the path dependency of history and social systems (the path dependency of everything that is subject to some form of differential selection) is a big deal. I feel very interested in e.g. what alternative functional economic logics there are, and pretty saddened by the fact that, given the reality of physics, we might never be able to explore them in this branch. This stuff is definitely scientifically difficult to deal with because it's about counterfactuals we cannot access, so it's hard or maybe impossible to even be calibrated on whether it's a big deal or not.

Nora_Ammann

And I guess this is a good example of where my intuitions are influenced by a complex systems lens, compared to my guess of some other people's intuitions. 

The way I think about how much path dependency matters here is roughly: one pull is that, even with relatively simple nonlinear dynamics, small differences in initial conditions can propagate a great deal; local contingencies make you access different parts of the possibility tree, sometimes in ways that are very hard to reverse. The pull from the other side is if you can point to some mechanisms that actively buffer against such path-dependencies, e.g. some sort of homeostatic pressure that tends to keep the system within some basin. Both of these mechanisms exist - overall I expect socio-economic history to be shaped more by the former (where e.g. the developmental/life period of a biological organism is more shaped by the latter). 
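As a toy sketch of those two pulls (my own illustration; the logistic map is just the standard minimal example of sensitive dependence, and the parameters mean nothing in themselves):

```python
def chaotic(x, r=3.9):
    # Logistic map in its chaotic regime: amplifies tiny differences.
    return r * x * (1 - x)

def homeostatic(x, target=0.5, k=0.3):
    # Damped pull toward a set point: erases tiny differences.
    return x + k * (target - x)

for step, label in [(chaotic, "chaotic"), (homeostatic, "homeostatic")]:
    a, b = 0.500000, 0.500001  # nearly identical initial conditions
    for _ in range(50):
        a, b = step(a), step(b)
    print(f"{label}: gap after 50 steps = {abs(a - b):.6f}")
```

The chaotic dynamic blows the one-in-a-million difference up to order one; the homeostatic dynamic shrinks it to nothing. The empirical question is which mechanism dominates in a given system at a given timescale.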

habryka

I mean, I am totally fine saying "initial conditions matter a lot, some systems are chaotic, it's hard to predict where they will end up because they are quite sensitive". But that's different from saying "specifically analyzing the self-referential nature of economic theories and policies is worth the bang for your complexity buck". Like, I don't expect that the system will stop being chaotic as soon as you account for that self-referential effect, nor do I expect that the system is chaotic because of the self-referential component that your publication would introduce.

Nora_Ammann

Yeah, okay, so I agree with you that I expect the effect to be bigger (at the year-to-decade timescale) for technology design than it is for macroeconomics. (Don't take this so much as saying that the effect doesn't apply to macroeconomics, but rather that macroeconomics lives at slower time scales. Like, the lens of this sort of path-dependency applies to economic logics, but more so at the decades-to-centuries time scale.)

habryka

Like, as an example, I think all the basics of microeconomics don't have much of a self-referential effect. Their domain of validity and their predictions seem robust to that kind of stuff. Supply will equal demand, no matter whether people know about supply and demand. Wages will be sticky, no matter whether people know about wages being sticky (there is probably some effect here, but I think it's very weak).

Nora_Ammann

"specifically analyzing the self-referential nature of economic theories and policies is worth the bang for your complexity buck". Like, I don't expect that the system will stop being chaotic as soon as you account for that self-referential effect, nor do I expect that the system is chaotic because of the self-referential component that your publication would introduce.

Yeah to be clear I agree with this!

Nora_Ammann

Maybe a nearby example that is interesting and we might disagree on more is: there is at least some reading of history where notions of "economic rationality" and the various notions of rationality linked up to the field of game theory emerged during the post-WW2 period, and that those ideas have importantly and manifestly shaped economic, institutional and academic/intellectual developments. This view says something like: a bunch of ideas that seem very natural to (most of) us today are pretty strongly contingent on a pretty narrow time period and a pretty small number of heads shaping these ideas. Related to a sense of "the fish in the water" and "you can never really escape your local ideological context". 

[Not sure it's worth going down this rabbit hole.]

Nora_Ammann

Supply will equal demand, no matter whether people know about supply and demand. 

FWIW, I largely agree with that paragraph, but maybe worth pointing out: supply equals demand does fail sometimes (e.g. financial crises). We can construe these failures as some sort of "irrationality" in the system, but I find that generally pretty intellectually lazy. It seems important to recognize the limits of a model, and why those limits arise. The fact that complexity economics says it is better at understanding what really happens during things like financial crises should, I think, earn it a good deal of epistemic points (with the important caveat that I would need to read up on what exactly they were able to show with respect to modelling markets as out-of-equilibrium systems, so I am making this argument with an epistemic caveat).

habryka

Yeah, I do think I disagree with this, and it is the kind of thing that does scare me about complexity theory. Like, maybe this is a strawman, but there is an attractor for sciences where the practitioners of that science start viewing themselves not as reporters on the truth, but as advocates for a certain way of social organization. 

History is full of this, and most of it is a terribly diseased field because of it, because really very large fractions of it view themselves as intentionally trying to reframe history in a way that positively affects the workings of society today.

And like, I am not in favor of banning any and all discussion of the secondary effects of a publication, and how a publication itself might distort the subject-matter that it is talking about, but overall there are many skulls along this road, and many fields have died because of it.

And I don't know to what degree that is going on when you are talking about studying self-referentiality in the publication and adoption of economic theories, but it feels like it gets close to it.

Nora_Ammann

Yeah, I do think I disagree with this, and it is the kind of thing that does scare me...

Yep, overall strong agree with the entire message. I find myself pretty torn where, on one hand, there are some basic arguments here that I find pretty compelling and that make me want to take seriously the social embeddedness of theorizing -- especially in the context of AI alignment -- and, on the other, I also hard agree about the skulls and the slippery epistemic slopes. 

I do feel like there is a way to take some of these basic insights on board in a way that doesn't make all of your reasoning intellectually fraught, but it sure feels like a tricky balance, and in my experience it feels a bit like people have more or less "epistemic antibodies" for navigating that terrain more or less safely.

(As a side note: I've written about/given some talks on this, and if you were to read/watch them and wanted to let me know whether doing so made you update towards being more or less worried that I end up stumbling too close to the skulls, I'd be keen to hear that.)

(Also, I think this paper has an interesting philosophical discussion on this issue.)

habryka

So, going back a bit more to the top level. I definitely am on board with something like "man, it sure does seem like the formal models we construct of various societal, intelligent and economic systems are very unlikely to capture those systems at the level of detail and complexity necessary to actually make good predictions here".

And then I am pretty into figuring out how to do better. I guess my current answer is something like "well, that's not the job of science, the job of science is to provide a certain type of relatively narrow intellectual input into society's reasoning. The actual job of aggregating and making predictions about large complex systems will mostly happen in the System 1 of a bunch of decision-maker brains, based on a really complicated and messy mixture of inductive and deductive reasoning that we don't really understand". 

habryka

I also somewhat think that deep learning is now good enough that we can probably understand a bunch of systems in the world better by just throwing some large neural nets at them. They definitely have enough parameters and can encode enough simultaneous considerations to give rise to pretty good predictive models of very complicated systems. 

And by doing it via artificial learning systems we have to deal with fewer of the recursive issues, can control the inputs, and maintain a clearer abstraction of a science (which e.g. can do things like reproduce predictions about complex systems by rerunning a DL training run, which you can't do if you are predicting complex systems by giving information to policy makers, who generally don't appreciate being put into large controlled experiments, or being terminated and then rerun).
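As a toy illustration of that reproducibility point (my own sketch; the "complex system" is just a noisy logistic map, and the learner a least-squares regression, purely for concreteness):

```python
import numpy as np

def simulate(seed, n=500):
    # A stand-in complex system: a logistic map with small process noise.
    rng = np.random.default_rng(seed)
    x, xs = 0.3, []
    for _ in range(n):
        x = 3.8 * x * (1 - x) + rng.normal(0, 0.001)
        xs.append(x)
    return np.array(xs)

def train(series):
    # Fit x[t+1] from (x[t], x[t]**2) by least squares -- deterministic
    # given the same data, unlike a human forecaster's System 1.
    X = np.column_stack([series[:-1], series[:-1] ** 2,
                         np.ones(len(series) - 1)])
    coef, *_ = np.linalg.lstsq(X, series[1:], rcond=None)
    return coef

data = simulate(seed=42)        # fixed inputs: the same "world" every time
run1, run2 = train(data), train(data)
print("rerun reproduces the model exactly:", np.array_equal(run1, run2))
```

You can hand the same data to the learner twice and get identical predictions back, terminate it, rerun it, or ablate its inputs; none of which you can do to a policy maker.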

Nora_Ammann

Ok, I'd want to push back some on what you said about your "current answer for doing better". First, I think it's bad to ignore that science has aspects to it that are not purely descriptive.* I do agree that we really shouldn't go all the way in the other direction, where science becomes basically just an extension of politics. Second, depending on the context, leaving it to the "S1 of a handful of decision makers" seems clearly unsatisfying to me.

*Feels worth saying here: we can be somewhat differentiated about how much this is true for which domains/systems. Herbert Simon's The Sciences of the Artificial has been really influential on me with respect to this. The artificial here refers to ~designed artifacts -- importantly for our context: technology and institutions. The domain of the artificial has this weird descriptive-prescriptive dual nature. 

Nora_Ammann

I agree we can leverage LLMs for scientific progress here (which is something that's also part of e.g. Davidad's OAA mega-plan), though it's not gonna be easy and there are important ways we could fuck it up. For example, in Davidad's case (AFAIK) the idea is to get LLMs to write formal models that then get checked line by line by human experts. This is the way in which LLMs can help augment scientific modeling at the moment, but the checking stage is critical to this being useful rather than harmful.

I don't however see how AI systems inherently face less of the recursiveness issue, at least not by default. In fact, I'm pretty worried about performative predictors.
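For concreteness, here's a toy version of the worry (my own sketch, loosely in the spirit of the performative-prediction literature; all numbers illustrative): a deployed forecast shifts the very quantity it forecasts, and naive retraining converges to a self-fulfilling fixed point rather than to what would have happened with no forecast published.

```python
BASE = 60.0      # demand if no forecast were published (treated as forecast = 0)
FEEDBACK = 0.4   # each unit of published forecast induces 0.4 units of demand

def realized_demand(forecast):
    # The act of publishing moves the outcome being predicted.
    return BASE + FEEDBACK * forecast

forecast = BASE  # start by predicting the no-publication demand
for step in range(12):
    forecast = realized_demand(forecast)  # "retrain" on the induced outcome
    print(f"step {step}: forecast -> {forecast:.2f}")

# Converges to BASE / (1 - FEEDBACK) = 100, not to the 60 that would have
# occurred unpublished: the predictor ends up accurate only about the world
# it itself creates.
```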

habryka

Yeah, to be clear, I am not at all into ignoring it. I am saying that for any given paper that is trying to illuminate our understanding of some domain, the vast majority of those papers should mostly ignore these phenomena, bar a few particularly recursive domains. And then as scientists and people who build scientific institutions, we should think about how we can set up processes of inquiry that don't have to deal with this on an ongoing basis, or at least limit the corruption of the relevant forces. 

And yeah, having some science that helps us make those tradeoffs isn't crazy, but I feel like I would want those papers to tackle the relevant considerations directly, instead of somehow having each paper end up being occupied by concerns about its self-referential effects.

habryka

I don't however see how AI systems inherently face less of the recursiveness issue, at least not by default. In fact, I'm pretty worried about performative predictors.

Well, in the case of AI systems you can study and put bounds on the effects of self-referentiality. You can retrain the old system on the same data. You can objectively talk about what predictions it makes. With humans using their System 1 to predict complex systems, there is so much path dependence in each individual that it's approximately impossible to control things.

Nora_Ammann

Yeah, interesting. I would agree that the level of abstraction of a scientific paper (at least in the vast majority of cases) is not where the recursiveness problem should be addressed. Which does raise the (important) question of where and how exactly the problem should then be addressed. I'm not entirely sure. I guess institutions, or the scientific community as a whole, is closer to the right abstraction. I also think that this is where philosophy of science has a relevant complementary role to play. The thing feels a bit different for the engineering domains. In particular, I'd definitely want there to be more talk about alternative AI paradigms (and the safety- and governance-related features of different such paradigms). Given the current landscape, I actually believe that is among the more promising interventions at the moment (with some initial positive signs having started to occur more recently). 

habryka

Which does raise the (important) question of where and how exactly the problem should then be addressed. I'm not entirely sure. 

Well, in some sense that's what LessWrong is about. "The Art of Rationality". 

It's called an art because indeed it is not a science. It's a more extensive category that, in addition to covering the truth-discovering ways of science, also tries to cover much more practical things, like good cognitive habits and intuitions and making sure you eat the right food, and so on. I also think it's good to have some good old science on this topic, but e.g. I find MIRI's work on transparent game theory more relevant here than most complexity science for trying to study things like recursive modeling effects.

Nora_Ammann

Yeah, agree that epistemic communities matter here, and that there is more to truth seeking than just the bare-bones 'scientific method'. 

habryka

Ok, seems like it's probably about time to wrap up. I enjoyed this!

Nora_Ammann

Yeah, same! :) Thanks!

habryka

Summarizing a bit where we are leaving things, for me: 

  • I am still pretty interested in learning more about wins of complexity theory
  • I feel pretty on board with some of the basic premises of complexity theory, but feel confused whether "a scientific field" is even the right way to work on top of these premises
  • I feel generally skeptical of fields that are too occupied with their own existence or with studying their own effects, not because those effects aren't real, but because it's just really hard and has all kinds of bad cognitive attractors

I might also give reading or listening to some of the materials you sent over a try, and might leave comments with additional impressions.

Comments (11)

Do you know this textbook? I'd say it's a good overview of the "complex systems modelling toolbox". 

I will note that I mostly bounced off the mentioned textbook when I was trying to understand what complex systems theory is. Habryka and I may just have different cruxes, because he seems very concerned here about the methodology of the field, and the book definitely is a collection of concepts and math that complex systems theorists apparently found useful, but it wasn't big-picture enough for me when I was just very confused about what the actual goal of the field was.

I decided to listen to the podcast, and found it a far better pitch for the field than the textbook. Previously, when I tried to figure out what complex systems theorists were doing, I was never able to get an explanation that took a stand on what wasn't subject to complex systems theory, other than, of course, any simplification whatsoever of the object you're trying to study.

For example, the textbook has the following definition

which, just, as far as I can tell, you can express anything you want in terms of. 

In contrast, David gives this definition on the podcast (bolding my own)

0:06:45.9 DK: Yeah, so the important point is to recognize that we need a fundamentally new set of ideas where the world we're studying is a world with endogenous ideas. We have to theorize about theorizers and that makes all the difference. And so notions of agency or reflexivity, these kinds of words we use to denote self-awareness or what does a mathematical theory look like when that's an unavoidable component of the theory. Feynman and Murray both made that point. Imagine how hard physics would be if particles could think. That is essentially the essence of complexity. And whether it's individual minds or collectives or societies, it doesn't really matter. And we'll get into why it doesn't matter, but for me at least, that's what complexity is. The study of teleonomic matter. That's the ontological domain. And of course that has implications for the methods we use. And we can use arithmetic but we can also use agent-based models, right? In other words, I'm not particularly restrictive in my ideas about epistemology, but there's no doubt that we need new epistemology for theorizers. I think that's quite clear.

And he right away gives the example of a hurricane as a complex chaotic process that he would claim is not a complex system, in the sense he's using the term.

No, I don't think [a hurricane] counts. I think it's not useful. There was in the early days at SFI this desire to distinguishing complex systems and complex adaptive systems. And I think that's just become sort of irrelevant. And in order for the field to stand on its own, I think we have to recognize that there is a shared very particular characteristic of all complex systems. And that is they internally encode the world in which they live. And whether that's a computer or a genome in a microbe, or neurons in a brain, that's the coherent common denominator, not self-organizing patterns that you might find for example, in a hurricane or a vortex or, those are very important elements, but they're not sufficient.

In this framing, it becomes very clear why one would think biology, economics, evolution, neuroscience, and AI would be connected enough to form a field out of studying the intersection, and why many agent foundations people would be gravitating towards it.

This is a shorter 30-min intro to complexity science by David Krakauer that I really liked: https://www.youtube.com/watch?v=FBkFu1g5PlE&t=358s

It's true that the way some people define and talk about complex systems can be frustratingly vague and non-informative, and I agree that Krakauer's way of talking about it gives a big picture idea that, in my view, has some appeal/informativeness.

What I'd really like to see a lot more of is explicit model comparison, where problems are seen from the complex systems lens vs. from other lenses. Yes, there are single examples where complexity economics is contrasted with traditional economics, but I'm thinking about something way more comprehensive and systematic here, i.e., taking a problem that can be approached with various methodologies, all implemented within the same computational environment, and investigating what answers each of those gives, ideally with a clear idea of what constitutes a "better" answer compared to another. This would probably also be quite a research (software) engineering task.

Yeah, I would be pretty keen to see more work trying to do this for AI risk/safety questions specifically: contrasting what different lenses "see" and emphasize, and what productive critiques they have to offer each other. 

Over the last couple of years, valuable progress has been made towards stating the (more classical) AI risk/safety arguments more clearly, and I think that's very productive for leading to better discourse (including critiques of those ideas). I think we're a bit behind on developing clear articulations of the complex systems/emergent risk/multi-multi/"messy transitions" angle on AI risk/safety, and also that progress on this would be productive on many fronts.

If I'm not mistaken there is some work on this in progress from CAIF (?), but I think more is needed. 

(high-level comment)

To me, it seems this dialogue diverged a lot into the question of what is self-referential, how important that is, etc. I don't think that's The core idea of complex systems, and it does not seem a crux for anything in particular.

So, what are core ideas of complex systems? In my view:

1. Understanding that there is this other direction (complexity) physics can expand to; traditionally, physics has expanded in scales of space, time, and energy - starting from everyday scales of meters, seconds, and kgs, gradually understanding the world on more and more distant scales.

While this was super successful, with a careful look you notice that claims like 'we now understand deeply how the basic building blocks of matter behave' come with a disclaimer/footnote like 'this does not mean we can predict anything when there are more of the blocks and they interact in nontrivial ways'.

This points to some other direction in the space of stuff to apply the physics way of thinking to than 'smaller', 'larger', 'high energy', etc., and also different from 'applied'.

Accordingly, good complex systems science is often basically the physics way of thinking applied to complex systems. Parts of statistical mechanics fit neatly into this but, having been developed first, carry a somewhat specific brand.

Why this isn't done just under the brand of 'physics' seems based on the, in my view, often problematic way of classifying fields by subject of study rather than by method. I know of personal experiences of people who tried to do, e.g., the physics of some phenomena in economic systems and had a hard time surviving in traditional physics academic environments ("does it really belong here if, instead of electrons, you are now applying it to some... markets?").

(This is not really strict; for example, decent complex systems research is often published in venues like Physica A, which is nominally about Statistical Mechanics and its Applications)

2. 'Physics' in this direction often stumbled upon pieces of math that are broadly applicable in many different contexts. (This is actually pretty similar to the rest of physics, where, for example, once you have the math of derivatives, or math of groups, you see them everywhere.) The historically most useful pieces are e.g., math of networks, statistical mechanics, renormalization, parts of entropy/information theory, phase transitions,...

Because of the above-mentioned (1.), it's really not possible to show 'how this is a distinct contribution of complex systems science, in contrast to just doing physics of nontraditional systems'. Actually, if you look at the 'poster children' of complex systems science... my maximum-likelihood estimate of their background is physics. (Just googled the authors of the mentioned book: Stefan Thurner... obtained a PhD in theoretical physics, worked on e.g. topological excitations in quantum field theories, statistics and entropy of complex systems. Peter Klimek... was awarded a PhD in physics. Albert-László Barabási... has a PhD in physics. Doyne Farmer... University of California, Santa Cruz, where he studied physical cosmology, etc. etc.) Empirically, they prefer the brand of complex systems over just physics.

3. Part of what distinguishes complex systems [science / physics / whatever ... ] is in aesthetics. (Also here it becomes directly relevant to alignment).

A lot of traditional physics and maths basically has a distaste toward working on problems which are complex, too much in the direction of practical relevance, too much driven by what actually matters.

Mentioned Albert-László Barabási got famous for investigating properties of real-world networks, like the internet or transport networks. Many physicists would just not work on this because it's clearly 'computer science' or something, as the subject is computers or something like that. Discrete-maths people studying graphs could have discovered the same ideas a decade earlier... but my inner sim of them says studying the internet is distasteful. It's just one graph, not some neatly defined class of abstract objects. It's data-driven. There likely aren't any neat theorems. Etc.

Complex systems has the opposite aesthetic: applying math to real-world matters. Important real-world systems are worth studying because of their real-world importance, not just their mathematical beauty.

In my view AI safety would be on a better track if this taste/aesthetics were more common. What we have now often either lacks what's good about physics (aiming for somewhat deep theories which generalize) or lacks what's good about the complexity-science branch of physics (reality orientation; the assumption that you often find cool math when looking at reality carefully, vs. when just looking for cool maths).

To me, it seems this dialogue diverged a lot into the question of what is self-referential, how important that is, etc. I don't think that's The core idea of complex systems, and it does not seem a crux for anything in particular.

So, this matches my impression before this dialogue, but going back to this dialogue, the podcast that Nora linked does seem to me to indicate that self-referentiality and adaptiveness are the key things that define the field. To give a quote from the podcast that someone else posted: 

We have to theorize about theorizers and that makes all the difference. And so notions of agency or reflexivity, these kinds of words we use to denote self-awareness or what does a mathematical theory look like when that's an unavoidable component of the theory. Feynman and Murray both made that point. Imagine how hard physics would be if particles could think. That is essentially the essence of complexity. And whether it's individual minds or collectives or societies, it doesn't really matter. And we'll get into why it doesn't matter, but for me at least, that's what complexity is. The study of teleonomic matter.

Which sure doesn't sound to me like "applying physics to non-physics topics". It seems to put the self-referentiality pretty centrally into the field.

I really enjoyed this dialogue, thanks!

A few points on complexity economics:

The main benefit of complexity economics, in my opinion, is that it addresses some of the seriously flawed and over-simplified assumptions that go into classical macroeconomic models, such as rational expectations, homogeneous agents, and the economy being at equilibrium. However, it turns out that replacing these with more relaxed assumptions is very difficult in practice. Approaches such as agent-based models (ABMs) are tricky to get right, since they have so many degrees of freedom. I do still think this is a promising avenue of research, but maybe it needs more time and effort to pay off. Although it's possible that I'm falling into a "real communism has never been tried" trap.

I also think that ML approaches are very complementary to simulation based approaches like ABMs.

In particular, the complexity economics approach is useful for dealing with the interactions between the economy and other complex systems, such as public health. There was some decent research done on economics and the covid pandemic, such as the work of Doyne Farmer, a well-known complexity scientist: https://www.doynefarmer.com/covid19-research

It's hard to know how much of this "heterodox" economics would have happened anyway, even in the absence of people who call themselves complexity scientists. But I do think complexity economics played a key role in advocating for these new approaches.

Having said that: I'm not an economist, so I'm not that well placed to criticise the field of economics.

More broadly I found the discussion on self-referential and recursive predictions very interesting, but I don't necessarily think of that as central to complexity science.

I'd also be interested in hearing more about how this fits in with AI Alignment, in particular complexity science approaches to AI Governance.

for example, it's not that rare for a researcher to win a Nobel prize in two fields

 

This statement is actually quite inaccurate. Only two individuals have won Nobel Prizes in two different scientific disciplines (pre-2024). To put this in perspective:

  • Physics: 225 laureates
  • Chemistry: 194 laureates
  • Physiology or Medicine: 227 laureates
  • Economic Sciences: 93 laureates

This totals 739 Nobel Prizes awarded in scientific categories (excluding Peace and Literature). With only two cases of cross-disciplinary laureateship, the occurrence rate is approximately 0.27% (2/739). This is indeed extremely rare.

I mean, it is clearly vastly above base rates. Agree that my sentence is kind of misleading here. The correlations across disciplines become more obvious when you also look at other types of achievements.

Ok, I do really like that move, and generally think of fields as being much more united around methodology than they are around subject-matter. So maybe I am just lacking a coherent pointer to the methodology of complex-systems people.


The extent to which fields are united around methodologies is an interesting question in its own right. While there are many ways we could break this question down which would probably return different results, a friend of mine recently analysed it with respect to mathematical formalisms (paper: https://link.springer.com/article/10.1007/s11229-023-04057-x). So, the question here is, are mathematical methods roughly specific to subject areas, or is there significant mathematical pluralism within each subject area? His findings suggest that, mostly, it's the latter. In other words, if you accept the analysis here (which is rather involved and obviously not infallible), you should probably stop thinking of fields as being united by methodology (thus making complex systems research a genuinely novel way of approaching things).

Key quote from the paper: "if the distribution of mathematical methods were very specific to subject areas, the formula map would exhibit very low distance scores. However, this is not what we observe. While the thematic distances among formulas in our sample are clearly smaller than among randomly sampled ones, the difference is not drastic, and high thematic coherence seems to be mostly restricted to several small islands."

Alas, I don't think that study really shows much. The result seems almost certainly caused by the measure of mathematical methods they used (something kind of like by-character-similarity of LaTeX equations), since they mostly failed to find any kind of structure. 

In other words, you think that even in a world where the distribution of mathematical methods were very specific to subject areas, this methodology would have failed to show that? If so, I think I disagree (though I agree the evidence of the paper is suggestive, not conclusive). Can you explain in more detail why you think that? Just to be clear, I think the methodology of the paper is coarse, but not so coarse as to be unable to pick out general trends.

Perhaps to give you a chance to say something informative, what exactly did you have in mind by "united around methodology" when you made the original comment I quoted above?