While it is clear that councils are under more pressure now than at any point since the financial crisis, insufficient attention has been paid to how the financial health of local government is measured. This has undermined trust between central and local government in England and discredited the centre’s ability to respond to financial challenges, writes Jack Shaw.
More authorities have introduced emergency spending controls in 2023 than in any year since the Local Government Finance Act was enshrined in legislation in 1988. And in 2024 authorities could receive Exceptional Financial Support – which enables them to address immediate pressures by capitalising day-to-day revenue costs – in greater numbers than in any year since 2009-10. During that period, 50 authorities requested support following the collapse of Icelandic banks where hundreds of millions of local government pounds were held.
There are multiple metrics that indicate an authority is under duress: the percentage of reserves it possesses relative to its annual budget, the size of its debt, the cost of servicing that debt and historic overspending patterns are all routinely cited.
In determining the quality of governance of a local authority, current indicators include the level of scrutiny applied to budgetary reporting, the strength of internal processes, workforce churn, the capacity, capability and quality of political and corporate leadership, the mutual trust between them and whether external scrutiny is welcomed. A third dimension used by the Department for Levelling Up, Housing & Communities (DLUHC) is ‘soft intelligence’, which includes concerns raised by the public or Members of Parliament. In the context of this broad set of indicators, the Government has acknowledged that there are “no clear or unequivocal quantitative measures” to assess the state of authorities.
The Financial Resilience Index developed by the Chartered Institute of Public Finance and Accountancy, and the Office for Local Government’s (Oflog) new Data Explorer, as well as internal modelling by the Local Government Association (LGA), all represent different attempts to understand the financial health of particular authorities.
A notable difference has, however, emerged between the assessments coming from Whitehall and the local government sector. Michael Gove told the Levelling Up, Housing & Communities Committee recently that the LGA’s analysis suggesting that two dozen authorities could issue a Section 114 notice in 2023-24 was an “overestimate”. He went on to suggest that some authorities were “crying wolf”. And while there is overwhelming evidence of system-wide dysfunction, this criticism requires further investigation.
The LGA’s forecasts have been queried previously. When former LGA Chairman Lord Porter suggested in 2015 that “12 or 14” authorities were “very close to the edge”, the BBC reported at the time that Westminster was “pretty cynical” about the suggestion because “they heard the same in 2010”. The first occasion on which an authority issued a Section 114 notice after the Conservatives came to power in 2010 was in 2018.
Similarly, a report from the Audit Commission in 2014 indicated that auditors were concerned that 16 per cent of upper-tier authorities were not “well placed” to balance their budgets in 2015-16. And, drawing on the annual Local State We’re In survey by PricewaterhouseCoopers (PwC), in 2014 seven in 10 sub-national decision-makers held the view that some authorities would not be able to meet their statutory duties by 2017.
In each example, those concerns did not turn out to be accurate. This says much about the difficulty of measuring the financial state of local government. Assessments based on survey data run the risk of reflecting what some see as the ingrained pessimism of local decision-makers.
The National Audit Office recognised that there was scope for misinterpretation a decade ago, when it criticised the Government’s ability to forecast, noting: “if there is a financial failure within an authority, it is likely to be clear to the Department immediately, but its information on whether an authority is close to failure is weak.”
Yet the reasons why judgements about the financial health of the sector are fraught remain underexplored.
One reason is that, given that what constitutes ‘failure’ is determined by a range of qualitative and quantitative indicators, existing data tells a partial story. Equally, the rapidly changing macroeconomic environment has made predictive approaches more difficult over time, and single-year settlements have hindered longer-term financial planning which might otherwise have given authorities – and the wider sector – more certainty. Labour has committed to introducing multi-year settlements if it wins the General Election.
Another reason is that, given the uncertain boundaries between what constitutes ‘discretionary’ and ‘statutory’ services, authorities have proved remarkably efficient at reducing services in order to reconcile rising demand with fewer resources.
The broken auditing regime, combined with the decision to close the Audit Commission in 2015, has also deprived authorities of annual external scrutiny, and the Government of another lens through which to understand the health of authorities. While Oflog is now occupying some of the space vacated by the Commission, the auditing regime will continue to be dysfunctional in the short and medium term.
Further analysis is required to understand in more detail both the impact of the so-called ‘industry of inadvertent misinformation’ – on Whitehall and on public attitudes towards local government – and the measures needed to improve how the financial health of local government is measured. There is a good case for greater collaboration between the LGA, DLUHC and Oflog, all of which gather intelligence on the health of authorities. The terms of reference of the newly announced expert panel responsible for advising the Government on local government finance – which includes the LGA and Oflog – should also include reviewing how the Government measures and engages with “at risk” authorities.
While the Government is rightly reluctant to place additional burdens on authorities, regular monitoring of authorities’ finances – via a standardised temperature check from each Director of Finance – could be a useful addition to the quantitatively rich, but qualitatively limited, set of indicators upon which central government relies. This could provide a sense of longitudinal perspective which is currently absent. The LGA’s quarterly public satisfaction survey, which has run since 2012, could well be a model upon which government could draw.
Finally, practice elsewhere may be useful in the search for a more suitable model for measurement. The Redmond Review (2020) identified New Zealand as a possible exemplar. Authorities in New Zealand are required to report performance against a range of financial indicators specified in legislation. These indicators have clear thresholds, lending themselves to a binary audit opinion which may be more ‘legible’ for central government.
While it would be a mistake to imagine that this issue is a purely technical one, given the political character of public sector financing, considering different approaches to measuring the health of the sector can enable policymakers to coalesce around a shared diagnosis and help them find cross-party agreement on the solutions required.
The views and opinions expressed in this post are those of the author(s) and not necessarily those of the Bennett Institute for Public Policy.