Posted by: Nicholas Davis | March 25, 2009

Linking complexity and bias

A few questions jumped out at me while reading the New York Times article “Risk Mismanagement”, which I came across on the EconTalk page that contains NNT’s interview. The main ones I want to explore in detail are these:

  • Why do people find it so hard to accept complexity?
  • Which biases are acting to blind us to the uncertainty inherent in complex systems?
  • Why are we drawn to reliance on error-prone models, often getting it precisely wrong rather than vaguely right?
  • How can we get “on the right side of bias” in dealing with complex systems?

In essence, this is the same idea that forms the basis for this blog: how we can better manage uncertainty through understanding why we find it so hard to accept.

I believe that a focus on the elements that prevent us from engaging usefully with complex systems fits well with what Nassim Nicholas Taleb has been talking about in interviews recently: while he is not focused on the issue of “expert error”, implicitly much of what he talks about requires finding ways around the “foolishness” or “propensity to be fooled” of people engaged in the financial system (although I accept that not everyone will be inclined to view complexity in the same way that he does!). Perhaps this solution set might be divided into two: 1) structural shifts that would create conditions of increased robustness by limiting (i.e. governing) already-identified fragility-increasing behaviours, and 2) bias-correcting policies that would correct incentives and “nudge” behaviours away from current or future errors that might individually or cumulatively compromise the system.

On the structural side, we need guidelines that recognise that the global financial system is complex and that are designed to make it more robust, just as a plane has to be designed for some seriously rough storms and other contingencies besides. This could involve (as NNT has suggested) limiting both the size of institutions and the leverage they can build up, to prevent a small number of highly leveraged players or institutions from pulling down the whole system when something goes wrong (i.e. limiting the damage from ‘individual’ errors like LTCM). Other examples could include mandated slack in the system, pre-prepared plans for liquidity provision, etc.

But it would also be useful to decrease “pilot error” for those who work within the system, for two reasons: a) to prevent the sum of decisions across the system from creating structural changes which increase fragility (i.e. preventing re-optimization through creeping changes), and b) to further hedge against LTCM-like errors which, while perhaps not systemically dangerous under the right structural framework, are nevertheless not to be desired (except as regular lessons, perhaps). Here, we need to be aware of aspects such as ensuring incentive structures remain systemically sound, and understanding how decision-making processes, cultural norms and the influence of biases and emotions on behaviour across the buy-side, sell-side and regulators contribute to robustness.

I now believe that these two issues are intrinsically linked – importantly, we need to understand how people regard complexity and why they find it hard to accept the uncertainty it creates in order to design adequate responses, both structural and bias-correcting. This is almost the first step, and it leads to the four questions above.

As an aside, my boss asked me a very similar question today, prompted in turn by a great conversation with Darko Lovric last night. She wanted to know what the mental process people went through when they have to give up a core element in their everyday worldview, such as belief in a deity. When people shift their outlook on the world in a fundamental way, how does the brain cope? What replaces the lost viewpoint? Is there a mourning process?

Perhaps understanding the psychological and neurological processes of letting go of a central viewpoint will help us understand the process that people need to go through in moving from thinking deterministically to thinking in terms of scenarios, uncertainty or complexity. Darko, I’d love your thoughts in particular on this last point.


Responses

  1. Thanks for the invite, happy to contribute what I can. The psychology of belief is a large topic, and the best I can do here is weave together a few disparate insights. Firstly, most people do not take their beliefs very seriously, and do not have particularly structured minds, which means they can simply shrug off evidence that threatens their beliefs and move on as if nothing happened. Various cognitive biases also buffer us against needing to do this too frequently. I am of the opinion that we actually question our beliefs relatively rarely, which is why science has to be taught and practiced. Essentially, people have to be well cornered with damning evidence about something they find important before they change their minds consciously. How do we then change our beliefs? There seems to be a sort of cognitive drift, which ensures our beliefs are updated as we grow up and change our roles and our surroundings – this same process also ensures that our minds are unstructured, as we play our different roles in society (just note that very few philosophers have really applied their beliefs, in their full scope, to their everyday lives).

    The human mind is simply not built for consistency – it is more of a dream machine, with only rare evidence-based tests. This is why most of our beliefs and fantasies are actually solutions to our emotions, coping mechanisms. Research on daydreaming shows that our fantasies are predictable responses to our daily challenges, as are our beliefs. And this leads us to large worldview changes.

    In general, fundamental shifts in worldview happen through an essentially emotional process, in which rising uncertainty and frustration fuel the need for a change of beliefs. We might ignore contrary evidence, but negative emotions and thwarted desires are much harder to ignore. As are promises of positive emotions and future rewards. The same goes for our feelings of control: a loss of the subjective sense of control makes us more likely to believe in magical explanations, or any explanations for that matter.

    Crisis is the domain of psychotherapy, where people essentially go to find new beliefs (and habits) that make them feel better and achieve their wishes. It is also a point at which brainwashing can occur, and cults engage in their grim business.

    And this might point us to the answers regarding uncertainty – essentially, uncertainty threatens our sense of control and the beliefs we need to hold in order to feel safe and assume that the world is manageable. It is well known that depressed people have a more realistic sense of the world. In other words, we need our illusions of certainty and control to be happy and have a happy ego…

    We are therefore left with a paradox of uncertainty – as soon as a crisis hits that makes it obvious that our beliefs are much less certain than we would like, we pour all our energy into finding the first bit of certainty we can. It might well be that people are more likely to accept fundamental uncertainty when they feel quite certain. In this sense, Taleb’s book is a good example: in a relatively non-threatening way a problem is first created, and then a solution is offered that actually affords more “control”.
    However, the problem is that such control is not the control of certainty we are used to, so people will be naturally inclined to “solve the problem of uncertainty” and deal with Black Swans, potentially through new risk management techniques.

    Rare individuals are able to accept full uncertainty, and this potentially includes the very religious (God is a neat way to preserve some level of control without actually having it) or those who do not take life too seriously (humour is a wonderful coping mechanism). Another possibility is to show how scenarios actually afford us a different type of control over the uncertain future.

  2. Thanks so much Darko, very valuable contribution to the discussion!

    I like your link between the psychology of uncertainty and the need to maintain an “illusion of control” – Spyros, Anil and Robin talk about this in the introduction to Dance with Chance. I also think you hit the nail on the head with the observation that we replace one form of control (illusory, superstitious) with another (conscious anticipation, preparation, getting on the right side of “luck”). Scenarios are one way of doing that.

    I’m interested, however, in your statement that the human brain is not built for consistency – I always thought that we were inclined to see ourselves as consistent (even if that is a misperception of ourselves), and that it is shifts in vocabulary, posture, thinking etc. that change other elements of our behaviour (we spoke about this the other night). How can the dream machine and the consistent self-image be reconciled?

  3. Well, you are very right, we are indeed inclined to see ourselves as consistent, but, as you allude, this seems to be an illusion based on memory and abstraction – we tend to give quite general descriptions of ourselves, our choices and our roles, and we tend to stay faithful to our habits and addictions. Furthermore, we actively work on ourselves to allow only behaviour that maintains consistency – this requires effort.

    Therefore our personality is quite strongly fixed, i.e. we have a certain degree of inertia when faced with new environments – resistance to change and novelty (which increases as we age).

    But, behind such broad consistency lies a mass of perceptions, thoughts, beliefs and acts of quite astounding variety. And this is indeed by far the largest part of the mind. It is also potentially the more powerful part, as such loose structure allows for novelty, error, subtlety… and hence creativity.

    If you think of all the ways in which the human mind is different from a computer, it becomes obvious that our strengths do not lie in being structured and consistent… and this is not solely due to a varied environment, but to the sheer complexity of the mind – the same input will provoke subtly different output. We might be inclined to control and correct such output, but our strength might lie exactly in those small (and sometimes large) divergences and imprecisions. Perhaps there is a much more elegant systems-thinking way of explaining this?

    Of course, you could argue that such a complex system is also eventually consistent, but I think the point is that our view of our consistent selves and what actually happens in our minds diverge quite significantly.

  4. Interesting. We should chat over our next beer about what techniques exist to help people recognise and come to terms with uncertainty & complexity, without requiring a real crisis (or extensive, expensive therapy). The trick will then be to embed the “alternative” mode of control (it’s very zen – you gain more control by letting go of your illusion of control) and prevent old habits from coming back.

  5. […] and reframing, part 2: emotional connections – interesting stuff related to some earlier posts (Foresight […]

