I’ve been writing a fair amount recently about how I think there are important links between complexity and decision-making errors in financial markets, links that should be studied to find new ways of increasing the robustness of the global financial system. But while most people have heard of the problems of heuristics or have read books like Dan Ariely’s Predictably Irrational (which I highly recommend, BTW), the study of complex systems is a bit more arcane, to me as much as anyone. Accordingly, I just read two interesting papers from a 1999 issue of Science that have given me food for thought on the subject. The first, by Goldenfeld and Kadanoff (G+K), is called “Simple Lessons from Complexity”. The second, by W. Brian Arthur, is called “Complexity and the Economy”. Here are a few takeaways:
First, there are lots of definitions of complexity and complex systems. Wikipedia uses Rocha as a source, saying that “A complex system is any system featuring a large number of interacting components, whose aggregate activity is nonlinear and typically exhibits self-organization under selective pressures.” Beyond that definition, three key points stand out:
1) complex systems have many parts, demonstrate large internal variation, are sensitive to initial conditions, and are tough to model, yet often obey simple laws;
2) in a complex system, individual interactions might be linear – but aggregate activity is not; and
3) complex systems often appear highly structured at one scale (think of a tornado) but highly chaotic on another (imagine a fly’s perspective in a tornado – I borrowed this analogy from G+K).
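Sensitivity to initial conditions, from point 1), is easy to see in even the simplest nonlinear systems. Here's a minimal sketch using the logistic map (a standard toy example, not one from the G+K paper): two trajectories that start one part in a billion apart end up completely decorrelated within a few dozen steps.

```python
# Minimal sketch: sensitivity to initial conditions in the logistic map,
# x_{n+1} = r * x_n * (1 - x_n), a classic toy chaotic system.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # perturb the start by one part in a billion
# The gap roughly doubles each step, so after ~30 iterations the two
# trajectories bear no resemblance, despite the simple quadratic rule.
print(max(abs(x - y) for x, y in zip(a, b)))
```

This is the "tough to model but often obey simple laws" tension in miniature: the update rule is one line, but long-range prediction is hopeless.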
Second, and building on point 2) above (also from G+K), it’s probably best to understand the aggregate outputs of complex systems (e.g. market crashes) by looking at the problem from the correct scale and level of detail. Thus, even if we ignore the impact of irrational agents in a model, using small-scale Brownian motion models to assess derivative movements will produce results that fail to resemble reality, simply because such models operate at a level of detail that cannot capture aggregate behaviours accurately.
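One way to see why the fine-grained Gaussian picture misleads: here's a rough sketch (my own illustration, with made-up but plausible parameters) simulating forty years of daily returns under a simple Gaussian random-walk model and counting the "extreme" days it produces.

```python
# Sketch: how often does a plain Gaussian daily-return model produce a
# > 4-sigma day? The parameters here are illustrative, not calibrated.
import random

random.seed(1)
n_days = 250 * 40  # roughly forty years of trading days
returns = [random.gauss(0.0, 1.0) for _ in range(n_days)]
extreme = sum(1 for r in returns if abs(r) > 4)
# Gaussian theory predicts ~ n_days * 6.3e-5, i.e. well under one such
# day in forty years; real equity indices show many more, which hints
# that the model is operating at the wrong scale to capture crashes.
print(extreme)
```

The model isn't wrong about small daily wiggles; it's wrong about the aggregate feature we actually care about, which is exactly the scale-mismatch point.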
Question: is looking at decision-making errors in complex systems too fine a lens to provide any understanding of larger-scale robustness?
Third, as Nassim Taleb has pointed out many times, complex systems are themselves dominated by big events, which Goldenfeld and Kadanoff call ‘intermittency’ – “a ubiquitous feature of dynamical systems”. Such jumps, when modeled in complex systems such as fluids, are well described by exponential probability distributions, and are poorly described by Gaussian forms. I’m told that the majority of financial models rely on Gaussian distributions, thereby significantly underestimating the likelihood of large shifts in a given system (G+K state that a 6-sigma event has a probability of 0.000000001 of occurring under a Gaussian, whereas with an exponential distribution the same event has a probability of 0.0025 – a massive difference). Regardless of how such big events are formed (I’m now investigating “passive scalars”), misinterpreting the probability of major discontinuities by this many orders of magnitude is probably a bad omen for robustness.
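The G+K numbers are easy to check for yourself. A quick sketch comparing the 6-sigma tail of a standard Gaussian against a unit-rate exponential distribution (using the complementary error function for the normal tail):

```python
# Compare P(X > 6) under a standard Gaussian vs a unit-rate exponential,
# reproducing the figures quoted from G+K (~1e-9 vs ~0.0025).
import math

def gaussian_tail(x):
    # One-sided tail of the standard normal, via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2))

def exponential_tail(x, rate=1.0):
    return math.exp(-rate * x)

g = gaussian_tail(6.0)     # ~ 9.9e-10, i.e. the 0.000000001 quoted above
e = exponential_tail(6.0)  # ~ 0.0025
print(g, e, e / g)         # the exponential tail is millions of times fatter
```

Same "6-sigma" event, a roughly six-orders-of-magnitude disagreement, which is the whole point about model choice and robustness.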
Question: what is it about the interactions in complex systems that shifts the probability distribution away from normal? And what is an approximate exponent for the distribution describing the global financial system over the last 100 years?
Fourth, it is thought that complex systems do not exist in equilibrium. This perspective forms part of the base of a field called “complexity economics”, which the Santa Fe Institute studies (among other things – thanks, Gareth!), and one of the key issues here is that real markets lack the equilibrium that much of finance assumes. Arthur argues that one reason for this is that actors in markets “cannot accurately assume or deduce expectations but must discover them”. He further argues (using simulation data) that the speed at which these hypotheses are updated is key – in a fast-updating system (i.e. one where investors are continually adjusting their expectations of the market based on experience and data), the market develops a “rich ‘psychology’ of divergent beliefs” that don’t converge over time, and rogue expectations arise from time to time that cause bubbles and crashes. Arthur’s concluding point is that understanding markets using this kind of out-of-equilibrium formation will not only help explain the randomness that markets display, but also bring “an awareness that policies succeed better by influencing the natural processes of formation of economic structures, than by forcing static outcomes”.
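To get a feel for the update-speed point, here's a deliberately crude toy of my own (this is NOT Arthur's actual Santa Fe artificial-stock-market model, and all the parameters are invented): agents hold price forecasts, the market price drifts toward the average forecast, and agents revise their forecasts toward realized prices at a learning rate `lr`, with idiosyncratic revision noise scaled by that rate. Faster updating sustains a wider spread of beliefs.

```python
# Toy illustration (not Arthur's model): belief dispersion among adaptive
# forecasters at a slow vs fast expectation-updating speed.
import random

random.seed(0)

def simulate(lr, n_agents=50, steps=200):
    forecasts = [random.uniform(90, 110) for _ in range(n_agents)]
    price = 100.0
    dispersions = []
    for _ in range(steps):
        mean_f = sum(forecasts) / n_agents
        # Price moves partway toward the average forecast, plus exogenous noise.
        price += 0.5 * (mean_f - price) + random.gauss(0, 1)
        # Each agent nudges its forecast toward the realized price; the
        # idiosyncratic revision noise scales with the learning rate.
        forecasts = [f + lr * (price - f) + lr * random.gauss(0, 2)
                     for f in forecasts]
        mean_f = sum(forecasts) / n_agents
        dispersions.append(sum((f - mean_f) ** 2 for f in forecasts) / n_agents)
    return sum(dispersions[-50:]) / 50  # average late-run belief dispersion

slow = simulate(lr=0.05)
fast = simulate(lr=0.9)
print(slow, fast)  # fast updating leaves beliefs far more dispersed
```

The result here is baked in by the noise-scaling assumption, so it illustrates rather than proves Arthur's claim – but it does show how an updating rule, not just agent rationality, can shape the market's “psychology”.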
Question: do we know enough about the operation of the global financial system to positively influence formation processes? How will we know when we know enough?
While complexity economics takes a more realistic view of how financial markets actually operate, I’m now looking into how it takes the biases of individual actors into account (i.e. not just expectation-discovering actors, but erroneously-expectation-discovering actors). I’m also intrigued by the fact that the Santa Fe Institute is publishing news that physicists can model crashes in financial markets. Given a realistic model of the financial market as a complex system with feedback loops, wouldn’t knowledge of such a model influence reality enough to make the model obsolete? A curious conundrum. Comments and resources welcome as I plunge into a new field in search of better questions.