The most complex system is the one that can generate complex systems itself, outside of biological reproduction. Based on this definition, human beings are the most complex biological systems that we know, even if it sounds too anthropocentric.
We all basically know that complex systems are unpredictable... I'm interested in how others identify complex systems...
Following from your quotes above, you could focus your search on systems for which the accuracy of predictions has been poor.
We are able to personally develop heuristics for evaluating predictions and complex systems, but sharing them with others is really tough.
FYI: This is basically the subject of the book Blink by Malcolm Gladwell. It's by no means a rigorous examination, a little too anecdotal, but you might find it useful.
Mmm, reading the post it seems like it's driving towards "unpredictable" as an operational definition of "complex". And, reflecting a bit, I reckon that's not too far from how people actually tend to use the idea of a "complex system".
I think the ability to put thoughts into words is not very domain-specific. Are you an above average creative writer? Or an above average teacher (to much less advanced pupils)? If neither, maybe you're just having problems putting ideas into words in general.
Which would be good news, because that is a much more common problem than the specific instance you're focused on, which means there's probably a body of advice for fixing that.
That could be part of it. I'd also say what is difficult is putting certain types of ideas into words. When people talk about scientific skepticism, for example, what exactly are they saying? Guys like Andrew Gelman or Scott Alexander (or plenty of other smart folks) are able to look through academic research across domains, and something sticks out to them as wrong. They can then go through and try to identify what claims, or assumptions, or statistics, are misguided. But prior to that there is this hunch or indicator that the author's scientific claim is off. They seem to develop this hunch by having a well developed heuristic of what is too complex to casually know, and what isn't.
You see the same thing in economic forecasting, or medicine, where over a long time skilled people start to develop a hunch for when something is off. I think part of this hunch is knowing what is knowable, and what is beyond the pale.
As a slightly contrived example, growing up I was more of a naive rationalist. In my late teen years I learned that almost everything I was taught about drugs was a lie. Not just scheduled drugs, but nootropics as well. My dad is a physician and told me without knowing any of the research, and having read less than me, that it's generally a bad idea to take drugs you don't need. He had no real argument, it was something he'd picked up over years of practicing medicine: Take as few drugs and as few treatments as possible, unless necessary.
Even though there is tons of research on medicine, it's hard to codify and explain the way certain clever established practitioners evaluate when we can rely on our inputs to lead to our desired outputs. These hunches are nonlinear and chaotic, which makes measuring them formally incredibly challenging. I'm probably bringing up medicine as an example due to the recent posts on depression networks from Slate Star Codex, where we also see Scott developing an intuition or understanding of how these complex systems interact, and when and why the related research or classifications are misguided.
Hi. If you have access to the actual data, try to transfer the time-domain diagram to the frequency domain. Identify frequency patterns. Search for possible IF components. They are there (use TRIZ, or intuition; your name suggests that you may have heard of TRIZ already). Demodulate with the aid of the 1st and 2nd possible harmonics. Extract the modulator pattern. Look for phase shifts in it. If they are there, that's a complex system.
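For what it's worth, the pipeline above can be sketched in a few lines of Python with numpy/scipy. This is my own toy illustration on a synthetic amplitude-modulated signal (the sampling rate, carrier, and modulator frequencies are all invented for the demo), not the commenter's actual procedure:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                       # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)   # two seconds of data
carrier_hz, modulator_hz = 50.0, 3.0
x = (1.0 + 0.5 * np.sin(2 * np.pi * modulator_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)

# 1. Move to the frequency domain and find the dominant component.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
dominant = freqs[np.argmax(spectrum)]

# 2. "Demodulate": the analytic-signal envelope recovers the modulator pattern.
envelope = np.abs(hilbert(x))

# 3. Inspect the modulator's own spectrum for its frequency and phase shifts.
env_spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
env_dominant = freqs[np.argmax(env_spectrum)]

print(dominant)      # the ~50 Hz carrier
print(env_dominant)  # the ~3 Hz modulator pattern
```

On real data you would then track how the modulator's phase drifts over time; stable phase suggests a simple driven system, while shifting phase is the signature the comment describes.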
That's also just the tip of the iceberg. This book is a good guide to identifying and predicting complex time series behavior: https://www.amazon.com/gp/product/0521529026/ref=oh_aui_search_detailpage?ie=UTF8&psc=1
So I guess this is along the lines of: try prediction techniques that are designed to work on complex systems and see how it goes.
This doesn't work when you don't have enough data to do so, of course.
This is a dense comment which I would love to see unpacked with a concrete example in a top level post. Do you have an example of working through this process on some other data?
Hi Guy. Before that, you would very much enjoy studying (not just reading) "The Physics of Wall Street: A Brief History of Predicting the Unpredictable", available for some $ from any bookstore, along with some core info on high-frequency trading. Start here: http://www.politico.com/agenda/story/2016/09/algorithmic-high-frequency-stock-market-trading-000208 and http://www.businessinsider.com/the-real-problem-with-high-frequency-trading-2016-1 and also google "High Frequency Trading MIT report" and read some of the first links.
As I have studied scientific inference over the past decade, there is one major class of problems that frustrates me the most. It's what I think most people here focus on understanding: how to properly identify a complex system.
We all basically know that complex systems are unpredictable, in certain ways, because small changes in starting parameters can produce enormous differences in outcomes. Despite this, there seem to be some complex systems that still follow a state or pattern traceable to a single input. In my field of study (economics) we know that if you increase the money supply, inflation follows. Sort of. Lots of people thought inflation would follow from quantitative easing after the Great Recession, when it didn't (in fact, if anything the opposite).
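To make "small changes in starting parameters produce enormous differences in outcomes" concrete, here is a minimal, self-contained illustration (my own example, not from the post) using the chaotic logistic map:

```python
# The logistic map x -> r * x * (1 - x) is chaotic for r = 3.9: two starting
# points that differ by one part in a million end up in unrelated states.
def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate the logistic map from x0 and return the final state."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # starting point perturbed by 1e-6
print(abs(a - b))                  # the two end states are no longer close
```

Fifty iterations is enough for the initial one-in-a-million difference to be amplified to the full size of the attractor, which is exactly why point forecasts of such systems degrade so fast.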
In the graph below you see that the market-based expected inflation rate five years out shot up right as the economy began to tank, on fears that quantitative easing and low interest rates would raise inflation (as they had in the past). In reality, so the retrospective story goes, the velocity of money dropped so heavily that it countered any inflationary impact. Then, under a year later, the market decided it would probably converge to its mean. This is how market-based predictions went in arguably the most well understood part of macroeconomics.
(I actually worked on a team at the Fed to try to help do this even better later on. Our team was the Financial Econ team, so we used market-based measures instead of trying to build formal structural models. Could we predict inflation-expectation dynamics better than a random walk? Yeah, but only by the skin of our teeth. Plus, to paraphrase Tetlock, it's a field that rewards mastery of impressive tools more than accuracy.)
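For readers unfamiliar with the benchmark: "beating a random walk" means your model's one-step-ahead errors are smaller than those of the naive forecast "tomorrow equals today." Here is a toy sketch of that comparison (my own simulated series and code, not the Fed team's method or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a persistent, mean-reverting AR(1) series as a stand-in for
# inflation expectations (phi < 1 means it is not actually a random walk).
n, phi = 500, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(scale=0.1)

train, test = x[:400], x[400:]

# Least-squares estimate of the AR(1) coefficient on the training sample.
phi_hat = np.dot(train[1:], train[:-1]) / np.dot(train[:-1], train[:-1])

# One-step-ahead forecasts of x[t] from x[t-1] over the test sample.
lagged = x[399:-1]                 # x[t-1] for each t in the test sample
ar1_forecast = phi_hat * lagged    # model-based forecast
rw_forecast = lagged               # random walk: tomorrow equals today

def rmse(forecast):
    return float(np.sqrt(np.mean((test - forecast) ** 2)))

print(phi_hat, rmse(ar1_forecast), rmse(rw_forecast))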
As far as macroeconomics goes, there are few things economists more reliably understand than the relationship between money and inflation. Within simple models (and probably in reality) it's clear that if you increase the money supply toward infinity, its value is going to drop to zero. I can't prove it by historical experiment, but it's reasonable to use fragments of past information and models to predict that if the government dropped trillions of dollars from helicopters over major U.S. cities, inflation would shoot up.
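The textbook version of that intuition is the quantity-theory identity MV = PY: hold velocity V and real output Y fixed, and the price level P scales one-for-one with the money supply M. A trivial sketch with invented numbers:

```python
# Quantity-theory identity M * V = P * Y, solved for the price level P.
# V (velocity) and Y (real output) are held fixed; the numbers are arbitrary.
def price_level(M, V=2.0, Y=100.0):
    """Solve P from M * V = P * Y."""
    return M * V / Y

p0 = price_level(100.0)
p1 = price_level(200.0)   # doubling the money supply...
print(p1 / p0)            # -> 2.0: ...doubles the price level, all else equal
```

The catch, as the QE episode shows, is that V is not actually fixed: a collapse in velocity can swamp a large increase in M, which is exactly where the clean monotone story breaks down.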
Despite this, there seem to be complex nonlinearities that mess with our predictions outside of extreme cases. I like this example because it seems to be a clear case of a complex system's state being determined monotonically by a single parameter in the extremes, yet still being unpredictable within most of our core areas of interest.
In the field of economics my intuition for when to doubt a prediction or model has been refined from years of studying and working in forecasting. It's nothing really taught in a textbook. The only way I try to rationally understand it is that there is a certain type of dynamics inherent in economic forecasting, across time, space, and problem type. Whether it's forecasting monetary policy, or revenue for T.V. sales in Spain, the way humans interact with economic systems follows some type of strange similar pattern. Over the years of working in this field I have an idea for this pattern, but I can't prove it or explain it.
It's similar to what Tetlock writes about in Expert Political Judgement and Superforecasting. Even in his analysis, though (which is incredible), he is only able to identify a few general types of people. What I take from it is that there is something pretty heavy going on in the brain of a 'hedgehog' vs. a 'fox', which lets some people filter a signal out of reality more reliably. Still, based on his writing it seems more that hedgehogs fall into a cognitive bias of overconfidence, while foxes are more humble about the complexity of reality, and more willing to stick to uninformative priors. My guess/prediction is that what makes really great foxes is the ability to switch between knowing when you're dealing with a complex system and when the answer really is obvious.
In Expert Political Judgement Tetlock writes, "One could be an excellent forecaster who relies on highly intuitive but logically indefensible guesswork." What is in that guesswork, though? There has got to be something going on there, which suggests that there are models of the world and systems of thinking that haven't been formalized, but which could let humans or computers properly classify when a system is too complex and when there is a predictable state of the complex system.
Back to the original topic -- this drives me insane. We are able to personally develop heuristics for evaluating predictions and complex systems, but sharing them with others is really tough.
I'm interested in how others identify complex systems, specifically those related to your own fields, and how you try to communicate them to outsiders.