My esteemed colleague Andrew Bishop sent our team Malcolm Gladwell’s 1996 New Yorker article “Blowup” this afternoon. If you’re interested in complex systems, technology and risk, it makes for an interesting read. Gladwell draws on Charles Perrow’s work on “Normal Accidents” to discuss the risk management challenges posed by modern, complex systems such as spacecraft and nuclear power stations, and concludes that we simply need to come to terms with the fact that “the potential for high-tech catastrophe is embedded in the fabric of day-to-day life”.
I disagree with this conclusion, although not because I think that there is a magic bullet in risk management (I don’t). Instead, I believe we need to ask why the fabric of day-to-day life rewards the complexity that in turn creates the risk, and try to solve for this. In some cases, that will mean changing the fabric itself by moving away from certain practices and technologies.
Coincidentally, over the weekend I spoke to a friend of mine, Denise Caruso from Carnegie Mellon, about Perrow and his ideas. She interviewed Perrow (who wrote the book Normal Accidents in 1984) a few years ago, and said he felt many people had misinterpreted his concept of normal accidents as implying (as Gladwell does) that we need simply to accept the inevitability of catastrophic risks in technological systems as “normal”. Perrow emphasized that in the end we choose to work with complex organizations and complex physical systems that can have catastrophic impacts without appreciating the true downsides – in Warren Buffett’s words, we “pick up nickels in front of a bulldozer”. He argues that we should be willing to discard complex technologies and forms of organization that offer short-term gains yet threaten catastrophic failure in the medium and long term.
Perrow’s arguments about the dangers of complex systems are supported by Joseph Tainter’s thesis that complexity offers declining returns (in his examples, social returns) over time – Tainter’s point is that the marginal value of complexity eventually becomes negative and in fact precipitates social collapse. Part of the reason this occurs can be linked to Perrow’s characterization of multiple, unanticipated failures in tightly coupled systems – keeping such failures at bay requires significant amounts of energy (which may not always be available). Both Tainter and Perrow argue that, at some point, an additional unit of complexity (for want of a better measure) is a bad thing, since it increases the threat to the entire system.
So why can’t we simply find or design rational solutions to the challenges of increasing complexity? Can’t we remove or reduce complexity, or invent better risk management systems that can cope? Well, the truly interesting bit of Tainter’s argument is a form of “ratchet” principle: increasing complexity is far easier than decreasing it (particularly within social systems). One possible explanation for this phenomenon is the existence of adaptive agents with entrenched interests within the system. Hence, the most common response to newly perceived risks is to ADD risk management systems (additional bureaucratic measures, redundancies, protocols, fail-safe mechanisms and so on) to existing organizational structures, rather than to remove or reduce those structures – which increases complexity overall. So it’s tough to just “de-complexify”. Clay Shirky riffed on this in the context of business models in a great blog post last year.
At the same time, as Perrow points out, risk management in the face of complexity requires directly contradictory organizational forms – the prospect of catastrophic failures produced by complex systems calls for decentralized, adaptive operators that can respond quickly to signals in proximate subsystems, but also for highly centralized, routine-driven operators that appreciate the extent of interconnections across subsystems, so that local intervention doesn’t create further problems. You can’t have both centralized and decentralized operators at once, so a fundamental organizational dilemma makes managing the risks posed by complex systems extremely challenging indeed. Lesson: it’s tough to find organizational solutions to complexity. Worse, seeking to do so will tend to increase a system’s complexity and hence its risk.
Which brings me to why I disagree with Gladwell’s conclusion. While I appreciate his addition of risk homeostasis as a contributing factor towards catastrophic risk (and would add it to the reasons why returns to increasing complexity can turn negative), I don’t believe the world faces a stark choice between accepting the risks of beneficial technologies on one hand and giving up the comforts of modern life on the other, as he suggests. This is a false dilemma that serves only to push us into accepting heightened risk, since objecting makes one look like a kill-joy.
In fact, we don’t have to sacrifice many of the technologies that make us safer: as Perrow argues, the jet engine is safer, cleaner and more reliable than the prop engine, and doesn’t increase systemic risk by being used as a substitute. But we should be wary of those technologies and organizational forms whose potential for dangerous interaction with our societies and environment is weighed against small, uncertain or remote benefits (particularly when those benefits are measured in terms of material gain rather than true prosperity). We have a choice about whether to sanction projects such as geo-engineering, genetic engineering and even large-scale nuclear technologies. Despite what some people argue, we don’t have to live with large, global banks that far exceed the capacity of national regulators to supervise them.
While politically difficult to achieve in practice, as Perrow points out we could choose instead to adopt local solutions such as solar power, and have smaller organizations that don’t co-opt political and social power as they grow in size and complexity. We could identify those technologies and organizational forms that offer false efficiencies – efficiency over a short timescale (i.e. without accounting for the cost of potential risks), or efficiency towards an end of very low value (e.g. increased efficiency in distributing unwanted or unneeded goods) – and choose alternatives that give us what we need without the attached cost of catastrophic failure. And where we do see the possibility of systemic risks from complex, interconnected systems (such as the Internet) but weigh the benefits of enhanced communication and transparency above threats to privacy or intellectual property, we should actively resist the urge to layer those systems with regulations and safeguards that ultimately only increase total risk by increasing complexity. What you don’t want public on the Internet, you don’t put on the Internet.
And if all that fails, as Clay Shirky pointed out in that same post, the really smart people will look for examples where complexity is already threatening collapse, and find opportunities to take advantage of simpler and fundamentally more efficient ways of achieving even better results. Rather than shutting our eyes to the impending risks of the complex organizations and technologies that define “the fabric of day-to-day life”, a portion of our efforts should go towards looking at how new approaches, technologies and thinking can leapfrog older, complex business and organizational models in order to create more sustainable, simple ways of living. That’s certainly what we should have been doing in the immediate aftermath of the financial crisis. And it’s where the focus should be in Egypt and Tunisia right now.
Addendum: Last June, Perrow contributed an interesting piece to the Energy Collective blog here about why we should abandon deepwater drilling, based on his principle of normal accidents. If you enjoyed this post, you might find it interesting. In fact, you might find it much more interesting than this post, so go read it!