Posted by: Nicholas Davis | October 22, 2010

Some thoughts on “Mental model risk”

One use of scenarios is to “challenge mental models” – to present alternative representations of the way the world works, and the outcomes that may result, in such a way as to enable some form of strategic testing. The idea is that too often people become trapped in a single, often narrow view of “the way things are”, and therefore fail to understand shifting contexts.

The link to risk here could be called “Mental Model Risk” – the risk that a flawed understanding of reality will lead you to make a colossal error of some kind. “Model risk” of course already exists in finance: it is the risk that your model is insufficiently representative of reality to be useful, leading to unanticipated costs or losses when relied upon. Mental model risk is the analogous concept for cognition: it arises when your understanding/appreciation/perspective of a system is so out of whack with its real dynamics that your interventions and behaviours do more harm than good. I haven’t yet thought this through systematically, but in medical history the practice of bloodletting could be an interesting example of mental model risk (it looked good in certain ways by lowering body temperature, but did a great deal of damage overall). Keeping interest rates low in the lead-up to the financial crisis is another example, as Greenspan admitted in his 2008 congressional testimony (that great video where he concedes that the mental model he had relied on for decades turned out to be flawed). If you intend to manage risk at any level, you have to appreciate that your response is based on a model of the system, and that model could be wrong – particularly in a dynamic, complex environment.
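To make the financial version concrete, here is a quick Python sketch – purely my own illustration, with made-up numbers, not anything from the risk literature. A risk manager who assumes returns are normally distributed, in a world that actually has fat tails, will badly underestimate the size of rare losses:

```python
# Toy illustration of financial model risk (illustrative assumptions only):
# the "world" generates fat-tailed daily returns, but the risk model
# assumes they are Gaussian -- so it badly understates extreme losses.
import numpy as np

rng = np.random.default_rng(42)

# "Reality": fat-tailed returns (Student's t, 3 degrees of freedom)
true_returns = rng.standard_t(df=3, size=100_000) * 0.01

# The model: fit a normal distribution to the same data
mu, sigma = true_returns.mean(), true_returns.std()

# Worst 0.1% day predicted by the Gaussian model vs. observed reality
var_model = mu - 3.09 * sigma                  # 99.9% quantile of a normal
var_actual = np.quantile(true_returns, 0.001)  # empirical 99.9% worst loss

print(f"Model says worst 0.1% day: {var_model:.3%}")
print(f"Reality's worst 0.1% day:  {var_actual:.3%}")
```

The model is not wildly wrong on an average day – it fails exactly where it matters, in the tail, which is where capital reserves and hedges get sized.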

What is interesting here is that some forms of model risk lead to mere inaccuracy (e.g. Newtonian physics is not accurate under certain conditions, such as speeds close to the speed of light or intense gravitational fields, so for long-range space flight you need Einsteinian equations or you will be off course by a predictable margin). But in other domains – the social, political and economic spheres – model risk can lead to drastically different outcomes, because in a complex system small errors in inputs produce massively divergent outcomes thanks to feedback loops, interactions, asymmetries and so on. And since global risks are complex and interconnected, model risk in the area of global risk response is a fundamental issue.
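That distinction is easy to demonstrate. Here is a minimal Python sketch of a standard textbook example (the example is classic, the code is my own): two runs of the logistic map whose starting points differ by one part in a billion end up completely different within fifty steps, whereas the same input error in a linear system stays proportionally tiny forever.

```python
# The logistic map: a textbook example of a feedback system where tiny
# input errors compound into completely divergent outcomes.
def logistic_map(x0: float, r: float = 3.9, steps: int = 50) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)   # simple feedback rule, chaotic for r ~ 3.9
    return x

a = logistic_map(0.200000000)
b = logistic_map(0.200000001)  # "measurement error" of one part in a billion
print(f"trajectory A after 50 steps: {a:.6f}")
print(f"trajectory B after 50 steps: {b:.6f}")
# In a linear system (x -> 0.99 * x) the same perturbation would still be
# about one part in a billion after 50 steps.
```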

Everyone is exposed to this kind of risk, even those who use scenarios to try and correct for it. Scenario thinking attempts to offset model risk by challenging assumptions and expanding perspectives. However, this is extremely hard to do – many sets of scenarios are simply developed from a European or North American viewpoint that implicitly relies on realist political theory and mainstream economic thought for its base assumptions, simply because the developers and the participants hold those kinds of cause-effect assumptions as given. So scenarios can help, but even then there is a constant need to surface ever-deeper-held assumptions about how scenarios themselves (and hence a set of simulations of reality) operate.

To start pulling these ideas together, “Mental Model Risk” is important for four reasons:

1) Risk creation: When you misunderstand the world or fail to appreciate its complexity etc, you can create risks via the unintended consequences of seemingly benign actions.
2) Risk perception: your mental model of the world could make you focus on the wrong risks, underestimate critical issues, or prevent you from seeing real opportunities in situations.
3) Risk response exacerbation: When you attempt to respond to a risk using poor models you can make it worse or shift it elsewhere – unintended consequences in risk response.
4) Meta-learning risk: bad mental models can also blind you to the fact that your mental models are wrong, trapping you in a cycle of misperception and compounding error with no way out. Clay Shirky in Cognitive Surplus mentions that Daniel Kahneman calls this “theory-induced blindness”. Awesome phrase.

Mental model risk is particularly scary because mental models are ingrained in us via our social, cultural and educational context, and it is actually very hard to see the world in different ways. It is also incredibly frustrating when it comes to correcting for mental models, because it often seems that there is no actual “real”, “true” or even useful model of a complex economic, social or political system, since a) we can’t ever really understand it and b) it is so complex that all models are fundamentally wrong in enough ways to create huge amounts of risk (albeit in different ways). Finally, a recent study found that a major proportion of the population lack the ability to know when they lack certain abilities, with potentially disastrous consequences. Imagine not knowing that you don’t know how to swim when someone else falls in the water.

Mental model risk is really therefore a “meta” risk – it’s the risk of us approaching risks in such a way as to compound their impact and likelihood instead of mitigating them. It’s the risk that (for example) our ideas of the benefits of democracy and freedom will lead us to invade a country without considering the possibility of local anger, insurgency and terrorism. It’s the risk that our ideas of the benefits of increasing economic efficiency or trade openness will lead to fragile markets or moral hazard.

How might you deal with mental model risk? Some ideas:

1) You acknowledge that you are exposed to it constantly and understand the behavioural and psychological elements that influence it.
2) You appreciate and understand the relevant spectrum of mental models and what that means in economics, politics, social policy etc., and you do so at a non-pejorative level that is more analytically useful than “left vs. right” or “Keynesian vs. Chicago school”. You avoid polarization, reductionism and standard categorization, and consider what drives the nuances amongst these mental models.
3) You attempt to understand what your own mental model is, what your explicit and implicit arguments for it are, and what the counter-arguments are. Most importantly, you are explicit about the assumptions of your model and what their limitations are.
4) You actively look for disconfirming data and exceptions to combat confirmation bias.
5) You consider different fundamental structures for your mental model as well as different values/variables within the model. You run simulations that are designed using model-independent criteria of success (and you examine your criteria of success in the same way you examine the assumptions of your model), and then create one or more “red teams” to challenge your best current model under realistic conditions (a rough sketch of this model-comparison idea follows the list).
6) You change your model when a better one comes along, based on your independent criteria for success.
7) When you make a system intervention of any kind, you factor in model risk – you consider what the results of the intervention say about the usefulness of the model itself, as well as the effectiveness of the intervention. You are therefore prepared to criticize the model itself, rather than attributing failure to insufficient scale (“we just need more troops”) or operational errors (“we timed it wrong”, “the implementation was poorly managed”, etc.).
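As promised in point 5, here is a minimal Python sketch of judging competing models by a pre-agreed, model-independent criterion – my own construction, with an invented toy “world”, so treat it as an illustration of the idea rather than a method:

```python
# Judge rival "models of the world" on a criterion fixed in advance and
# independent of either model: prediction error on held-out data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "world": quadratic dynamics we never observe directly
x = rng.uniform(-3, 3, size=200)
y = 0.5 * x**2 + rng.normal(0, 0.5, size=200)

train_x, train_y = x[:150], y[:150]
test_x, test_y = x[150:], y[150:]

def held_out_error(degree: int) -> float:
    """Fit a polynomial 'mental model', score it on data it never saw."""
    coeffs = np.polyfit(train_x, train_y, degree)
    pred = np.polyval(coeffs, test_x)
    return float(np.mean((pred - test_y) ** 2))  # the pre-agreed criterion

scores = {"linear model": held_out_error(1), "quadratic model": held_out_error(2)}
for name, mse in scores.items():
    print(f"{name}: held-out error {mse:.3f}")

# Point 6 in code: adopt whichever model wins on the independent criterion
print("adopt:", min(scores, key=scores.get))
```

The point is not the regression itself but the discipline: the success criterion is chosen before either model gets to plead its case.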

In addition, there are different aspects to model risk. There are errors in specifying the elements of the system (its boundaries, relevant scope, etc.), which has implications for interconnectedness. But there are also errors in specifying the dynamics of the system. People think in linear terms, when most systems behave geometrically or exponentially, and often display power-law behaviour. Every time someone uses techniques from fire management or epidemiology as an analogy for controlling financial risk, they are drawing on mental (and mathematical) models of systems with different properties. We have to consider the differences and similarities in the dynamics of these systems for the analogies to be useful. The value is in contrasting, testing and widening perspectives, not necessarily in importing techniques that could be useless or harmful in a system with a different structure and therefore different dynamics. Managing a pandemic involves very different network-based contagion dynamics than managing a financial crisis, even if superficially they look similar.
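The linear-thinking trap in particular takes only a few lines of Python to show (again with my own toy numbers, not real epidemic data): extrapolate an exponential process with a straight line and the estimate looks fine for a few days, then collapses.

```python
# A linear mental model applied to an exponential process: extrapolating
# early case counts by "yesterday's increase" fails by an order of
# magnitude within a couple of weeks. The 1.3x/day growth rate is made up.
cases = [100 * 1.3**day for day in range(5)]    # exponential "reality", days 0-4
daily_increase = cases[-1] - cases[-2]          # all a linear model sees

for day in range(5, 21):
    linear_guess = cases[-1] + daily_increase * (day - 4)
    actual = 100 * 1.3**day
    print(f"day {day:2d}: linear model {linear_guess:8.0f}, actual {actual:8.0f}")
# By day 20 the linear estimate is more than an order of magnitude too low --
# an error in the assumed *dynamics* of the system, not in its inputs.
```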

Finally, even the concept of “resilience” in the context of risk assessment and mitigation involves model risk – some people see the goal of resilience as reducing uncertainty to enable continuous operation by the current champions of the existing economic and social system, implicitly assuming that massive disruption is bad in every way. However, change creates opportunity, and resilience can equally be defined as the ability of a system to learn and evolve in the face of volatility, thus privileging flexibility as much as robustness. When thinking about resilience, we need to examine the mental models that lead us to think it is a good idea!

Anyway, this post is far too long now but I haven’t had the time to write a shorter one, as Twain / Eliot / Pascal might have once said.
