Blog

Policy Failure and Learning – Three Lessons from the Social Sciences

In policy-making, failure is not necessarily a bad thing and learning not always good. Claire Dunlop, Professor of Politics and Public Policy, University of Exeter, offers three lessons from social-science research: on policy-makers' lack of regulatory humility, on how learning can go wrong, and on how states deny failure.


There are well established headlines in the social sciences about policy failure, how policymakers perceive it and how it intersects with policy learning. As you might expect, there are many definitions of policy failure and policy learning in the academic literature. And, of course, they are both concepts that have a subjective, ‘eye of the beholder’ quality.

Here, my point of departure is that a policy fails, even if it is successful in some minimal respects, if:

  1. It does not fundamentally achieve the goals that proponents set out to achieve
  2. ‘Opposition is great and/or support is virtually non-existent’ [1]

Learning, for its part, fundamentally concerns the ‘updating of beliefs based on lived or witnessed experiences, analysis or social interaction’ [2].

Yet, note: I do not assume any fixed logic for the above. Failure is not necessarily a bad thing and learning not always good.

1. Thinking fast and the difficulty of regulatory humility

We know a good deal about how learning and failure intersect at the individual level. Decades of behavioural economics, and specifically the experimental body of work of Daniel Kahneman and Amos Tversky (2000), demonstrate the impact of cognitive shortcuts that encourage automatic responses to stimuli and involve very low quality learning processes (if any at all). Policy-makers have woken up to this – most clearly demonstrated in the rise of ‘nudge’ technologies popularised in the United States (US) by Thaler and Sunstein (2008) before making their way to the UK under David Cameron’s coalition [3].

We hear a good deal about the need to design interventions that go with the grain of citizens’ priors but far less is said about the impact of mental shortcuts and guesswork on policy-makers’ thinking and behaviour.

Some of my own work examines these dynamics and, specifically, the manner in which so-called ‘fast thinking’ [4] bolsters policymakers’ ‘illusions of control’ [5]. Put simply, policymakers lack regulatory humility and tend to over-reach both in the policy problems they take on and in the scale of the solutions they embark on. Perhaps one of the most famous examples in the UK is the Poll Tax [6].

2. Failure as Learning in the ‘Wrong Mode’

The second set of findings addresses social scientific analyses of the group or meso-level. This takes us to the engine room of policy-making (and indeed policy analysis) – where experts, organised interests, social representatives and courts shape policy design and implementation. The first thing we know is that advisory committees, bargains with interest groups, consultations with social actors, and court proceedings all generate a considerable amount of policy learning in the system. So, we can put to rest a major misconception that there is no learning in policymaking – in fact, a good deal of it is engineered in.

Why then is there so much policy failure? Learning happens but it is not always a ‘good thing’. Specifically, studies demonstrate the problem is not a binary one where learning is on or off, but rather it is a question of whether or not policymakers are learning in the ‘right mode’.

Studies of failure reveal learning happens but can generate pathological policy results because:

a) lessons are simply never applied (think public inquiries);
b) lessons generated by one group of actors, e.g. experts, have not been balanced out by those generated in other social arenas; or
c) the lessons are irrelevant because the wrong types of policy actors are involved from the start.

3. Denying failure and the problems of seeing like a state

The third finding revealed by the social sciences concerns the macro level – society, culture, institutions – and the failure/learning dynamics that become embedded in cultural identities and policy histories. The empirical evidence suggests societies struggle to fully acknowledge and, therefore, deal with failure. Such collective denial of failure is not the result of some cultural ‘bright side’ way of thinking. Rather, it is governed by social caution and fear reinforced by the state. This disposition is well-captured in the idea of ‘too big to fail’ which emerged in common parlance in the wake of the financial crisis. Specifically, this expression was used as the shorthand explanation for the conscious and public denial of failure. Despite the abject failures of risk analysis and regulatory policy, not bailing out banks would entail social fallout that was too big to contemplate. The result is very little learning indeed, and certainly no double-loop, fundamental reform in the regulation of the financial sector.

How can we explain this? Again we can point to the cognitive biases facilitative of blame avoidance. But this is not sufficient to capture the scale of what is going on here. We need to interrogate how the state sees things or, perhaps more suitably, how societies allow their states to see. I am thinking here about the work of Donald A. Schön (1973) and James C. Scott (1999). These two very different thinkers are united by an interest in the state as a machine that fights a (hopeless) battle to uphold policy legibility and stability. To see like a state is to reify and internalise a very narrow order of things [7], characterised by a fight to remain the same (what Schön calls ‘dynamic conservatism’), that prevents acknowledgement of the natural and logical reality that things go wrong and, as such, inhibits learning [8].


This article forms part of our policy success and failure series of blogs and the Bennett Institute report Policy Success and Failure: Embedding Effective Learning in Government.


[1] McConnell, 2015
[2] Dunlop and Radaelli, 2013
[3] John, 2014
[4] Kahneman, 2011
[5] Dunlop and Radaelli, 2015, 2016a; for the original experiment see Langer, 1975
[6] See Dunleavy, 1995, for a well-known analysis of this and other disasters
[7] Scott, 1999
[8] Schön, 1973