- David Oliver, consultant in geriatrics and acute general medicine
- davidoliver372@googlemail.com
Follow David on Twitter @mancunianmedic
On 9 September NHS England published its first quarterly league tables of NHS provider trusts’ performance,1 as promised in the NHS 10 year plan.2 There are scores, tables, and rankings covering specialised and general acute hospital trusts, as well as community healthcare, ambulance, and mental health and learning disability trusts. All trusts have been allocated a segment—from 1 for the best performing trusts down to 4 for the worst—and have an overall ranking in the table. Higher performers have lower overall scores, from a best possible 1.0 to a worst possible 4.0.
Only a fifth of general acute trusts are in segment 1 or 2, compared with two thirds of specialised acute trusts. Trusts of all types have seen a big fall in the numbers rated highly compared with previous versions of the NHS oversight framework.3 4 A big reason for this is that any trust in financial deficit can be placed no higher than segment 3, however well it performs on every other performance domain—which immediately puts several excellent providers well down the table despite “good” ratings from the Care Quality Commission (CQC) and high patient and staff satisfaction. This makes the table look like a governmental instrument of control and blame rather than a sincere attempt to help the public understand the quality of care and access to services in their local area.
In acute trusts, the metrics used to construct the league table fall under six domains1 5: waiting time metrics for elective care, cancer referral and treatment, and urgent care; effectiveness and experience of care (including the results of the CQC’s inpatient satisfaction survey); patient safety (including the CQC’s safe inspection score); workforce wellbeing and engagement (including satisfaction scores from the national education and training survey); finance and productivity; and improving health and reducing inequality (including elective waiting list growth for under 18s). There are separate metrics for other types of trusts and for integrated care boards.
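To make the scoring mechanics concrete, here is a minimal sketch of how such a segmentation could behave. NHS England’s actual weighting, aggregation, and banding methods are not set out in this article, so the domain names, the unweighted mean, and the score-to-segment banding below are illustrative assumptions; the only rule taken directly from the text is that a trust in financial deficit can be placed no higher than segment 3.

```python
# Illustrative sketch only: domain names, the unweighted mean, and the
# score-to-segment banding are assumptions, not NHS England's published method.
# The one rule taken from the article is the financial deficit cap at segment 3.

from statistics import mean

def assign_segment(domain_scores: dict[str, float], in_deficit: bool) -> tuple[float, int]:
    """Return (overall score, segment); scores run from 1.0 (best) to 4.0 (worst)."""
    overall = mean(domain_scores.values())     # assumed: unweighted mean of domain scores
    segment = min(4, max(1, round(overall)))   # assumed: simple banding onto segments 1-4
    if in_deficit:
        segment = max(segment, 3)              # the deficit cap described in the article
    return overall, segment

# A trust scoring well on every domain but running a deficit still lands in segment 3.
scores = {
    "elective waits": 1.2, "urgent care": 1.4, "patient safety": 1.1,
    "experience": 1.3, "workforce": 1.5, "inequality": 1.2,
}
print(assign_segment(scores, in_deficit=True))    # (1.28..., 3)
print(assign_segment(scores, in_deficit=False))   # (1.28..., 1)
```

The point of the sketch is the final override: a single binary financial flag trumps every other domain, which is why providers rated “good” on everything else can still sit well down the table.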
Open to scrutiny
So, is any of this problematic? I have no personal objection to the metrics used to construct the table, and they’re nearly all well established, routinely collected data points. Plenty of other measures are missing, but the burden of additional data collection and reporting could become overwhelming and distracting. And at least there’s a nod to patient and staff experience, not just access and waiting times.
But other authors have made technical critiques of how the data are weighted and aggregated, saying that the process and rankings are “not robust, meaningful or fair.”6 And the methods used to calculate changes in “productivity” over time are opaque and hard to trust as a single point of truth. NHS England seemingly wants a single accepted dataset, not arguments about its validity. The health secretary, Wes Streeting, has dismissed expert health policy think tanks’ caveats about the data as “elitist nonsense.”7 But the methods should surely be as open to scrutiny as he wants NHS care providers to be.
There’s also no attempt to contextualise performance indicators in relation to the demographics, broader inequalities, or urban-versus-rural nature of local catchment areas. Hospitals that struggle often do so because of where they are: trusts can find it hard to attract staff because of geography or housing costs, face competition from other local employers, and depend on the strength of surrounding primary and community health services. Indeed, if we look at the percentage of hospitals in each segment by region, the differences are stark. Context matters as a reason for variation—not a “get out of jail” excuse. And hospitals sit in wider local health and social care systems involving numerous agencies, which is perhaps where the performance focus should sit.
Comparing general hospitals with very specialised tertiary providers, which stick to limited lines of service and often don’t provide urgent or emergency care, is also deeply unhelpful and gives more heat than light.
Undermining goodwill
We should also ask what the new-look league tables are actually for. Plenty of performance data and regulatory ratings were already in the public domain. There’s no good evidence that merely naming and shaming poorer performers—or putting them into some punitive central “performance regime” or threatening executives’ jobs—will magically help them to improve.8 9 Indeed, it could further worsen morale, hamper recruitment and retention, and undermine goodwill from the local population.
The argument that this bright new transparency initiative will help patients choose the service they want is a mirage. Most people in England have at most one or two local trusts providing their acute, community, or elective care, plus one regional tertiary centre. They won’t be shopping around for care providers.
Finally, these league tables risk recreating a phenomenon seen repeatedly in recent decades: managers being driven to hit certain performance metrics at the expense of all other priorities. Avoiding such distortion was a key lesson of the Mid Staffordshire public inquiry.10 The former health secretary Jeremy Hunt made the same argument in a recent article, noting that the league tables place far less emphasis on patient safety than CQC reports do.11
Given that the league tables will be in place for at least the rest of this parliament, perhaps the best we can do is use any insights and comparisons of similar providers to focus our approaches to local improvement.