University reputations and finances often hinge on their position in global ranking tables. Students use rankings to quickly identify the best place to study — or, at least, what might be perceived to be the best by any future employer. Even a small shift in rank can affect how many students apply to a university, altering the income from tuition fees (R.-D. Baltaru et al. Stud. High. Educ. 47, 2323–2335; 2022).
And governments love the simplicity of rankings. Many will fund their citizens’ overseas study only at institutions that are high up the listings. National investment initiatives — such as Russia’s 5-100 Academic Excellence project and Japan’s Top Global University project — often focus on universities that have a chance of making it into the upper echelons of the rankings. The UK government offers its High Potential Individual visas only to candidates who have studied at highly ranked universities.
Such reliance on rankings means that universities are shaped not by the needs of society or by innovations driven from inside the international higher-education community, but by unappointed third-party ranking agencies.
The indicators used by some of the dominant flagship rankings don’t capture the full range of qualities and functions of higher-education institutions. Each agency uses a slightly different ranking method, but all typically focus on a narrow range of criteria. These are centred heavily on publication-based measures, such as citations, and on reputation surveys.
The consequence is that most of the world’s universities tend to pursue one flavour of ‘excellence’, which looks rather like the old, wealthy, conservative, research-intensive institutions of high-income nations.
Meanwhile, universities are facing a series of problems — from diminishing public funding and trust, to decreasing curriculum relevance in a rapidly changing job market and the need to demonstrate real-world impact from research. There is no shortage of ideas about how to reshape universities in response to these challenges, but the dominance of rankings as a measure of institutional success means that universities lack incentives to try. Many fear that stepping away from the status quo might result in a drop down the tables, making it harder for them to attract funding and talent.
Scholarly communities and universities must push for change. Here, I outline how.
A flawed system
In my view, flagship global rankings over-rely on the data sources that they have easy access to: publication data or survey data that they collect themselves. (The Nature Index, produced by Springer Nature, ranks universities solely using contributions to research articles published in natural-science and health-science journals.) In many rankings, evaluations of teaching are based on flimsy proxies, such as staff-to-student ratios or the number of alumni with Nobel prizes. Most rankings place little to no weight on open-science practices, societal impacts, outreach or efforts to improve diversity, equity and inclusion.
Ranking indicators are also weighted in a variety of ways without clear justification. For example, a ranking might allocate a 20% weighting to citations involving faculty members and only 5% to employment outcomes. Rankings are also presented without error bars, even though the data used are imperfect.
Efforts to push back on narrow, publication-dominant forms of assessment have mostly put the onus on universities to change how they evaluate their staff and departments. Many universities have risen to the challenge. Narrative CVs and biosketches — accounts written by researchers that highlight the full range of their contributions — are becoming more common. And European universities are increasingly adopting Career Assessment Matrices, templates designed to elicit evidence of a broader range of contributions from faculty members.
But there is a limit to how far institutions can move away from citation- and publication-based assessments, if they continue to be judged on these measures by global university rankings.
Students use rankings to choose universities. Credit: Getty
In the past three years, several groups, including the Union of Students in Ireland and the International Institute for Global Health, a United Nations University think tank, have appealed for universities to escape this stranglehold. They have called on institutions to stop supplying data to the rankings, which some, such as Utrecht University in the Netherlands and the University of Zurich, Switzerland, have done. And the groups have asked universities to stop promoting their rank and to reduce how much they consider the rankings of someone’s previous institution when making decisions, such as which staff members to recruit. The groups also endorse the More Than Our Rank initiative, which encourages institutions to describe their many achievements, activities and aspirations not captured by the rankings, through a narrative statement on their web pages. (I chair the INORMS Research Evaluation Group, which developed the More Than Our Rank initiative.)
These are valid recommendations, but asking individual universities to take responsibility will not lead to global reforms in how university performance is defined and assessed. To achieve this, a three-pronged solution is needed.
Call out current rankings
The higher-education sector should collectively — and vocally — agree that the current rankings are not fit for purpose. It might seem unlikely that institutions currently at the top of the rankings, mostly located in Europe and the United States, would call out a system that benefits them. But geopolitical changes should give pause for thought. Chinese and Indian universities are taking more of the top spots in the rankings than before, with UK, US and Australian institutions in decline. If those currently at the top wait too long to speak out, they might soon find themselves lower down the ranks, with less clout to drive the reforms that would serve all institutions.
The call for change should involve an education campaign aimed at students and policymakers, who rely on rankings for decision-making. This should be led by an independent body that is governed by experts from the international higher-education sector, many of whom have already expressed concern about the harms of global university rankings (see, for example, go.nature.com/4hy1kq9). The goal should be to help consumers of rankings to understand that ‘Which is the best university in the world?’ is not a useful question. ‘Which university might be best for me, given that I care about X and Y?’ is a better question — but one for which current measures are unlikely to provide a good answer.
The campaign should note that good assessments need to be nuanced and contextualized, and will take time to digest. Just as ‘the best’ researcher cannot be identified by the single number that makes up their h-index, ‘the best’ university cannot be determined by the single number that makes up its rank. This message might not be popular, but it is a crucial one.