About the Author(s)


Sioux McKenna
Centre for Postgraduate Studies, Rhodes University, Grahamstown, South Africa

Citation


McKenna, S., 2024, ‘The resilience of rankings in the neoliberal academy’, Transformation in Higher Education 9(0), a415. https://doi.org/10.4102/the.v9i0.415

Note: Special Collection: Neoliberal Turn in Higher Education.

Original Research

The resilience of rankings in the neoliberal academy

Sioux McKenna

Received: 18 May 2024; Accepted: 15 July 2024; Published: 20 Aug. 2024

Copyright: © 2024. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The multi-billion-dollar university rankings industry purports to offer insights into the quality of institutions, but the extent to which it does so has consistently been refuted. Critics argue that problematic proxies, composite indexing, homogenising effects, and several other issues make rankings both unscientific and neo-colonial. This article outlines these criticisms and argues that if we are to understand the resilience of rankings, we need to acknowledge the context in which they have become ubiquitous. This article offers the prevalence of university rankings as an example of neoliberalism’s conditioning effects on the sector. It is not enough to demonstrate the problematic nature of rankings; we must also ask the question: what must universities be like for them to support rankings despite repeated evidence of their problematic nature? Answering this question should help us engage with the hold that rankings have over us, and it should also help us to imagine the university we want and need.

Contribution: This article brings together literature on neoliberalism in the academy with that on university rankings. It argues that we can only understand the hold that the international rankings industry has by seeing the alignment between the rankings’ methodologies and aims on the one hand and the incursion of a neoliberal ideology across the higher education sector on the other.

Keywords: rankings; neoliberalism; pseudoscience; composite indexing; metrification.

Introduction

A great many academic and news articles have pointed out the many shortcomings of global rankings. However, universities continue to be complicit with this industry by: (1) providing ranking companies with data, (2) drawing on the rankings in their marketing, (3) determining internal resource allocation along the lines of which activities are rewarded by the rankings rather than which serve their students and their context, and (4) gaming the system to improve their placing.

Given how widespread and easily accessible the criticisms of rankings are, we need to understand why it is that the university sector fails to act on such critiques. This requires us to acknowledge the fact that events in the social world, such as rankings, rarely come into being as a result of a singular cause. We do not live in a laboratory where variables are controlled, and cause–effect relationships can be identified and measured. Rather, there are always myriad mechanisms at play with the causal potential to enable or constrain the emergence of social events. In other words, the wider conditions created by larger mechanisms have effects on the likelihood of an event emerging or not (Danermark et al. 2019).

Conditions that enable a social event such as rankings to emerge and to be sustained despite repeated concerns raised about them need to be in place. And simultaneously, conditions that might constrain such emergence need to be absent or side-lined. It is my argument that central to the emergence of rankings are both the enabling conditions of neoliberalism and the absence of constraining conditions, such as a framing of higher education as a common good. To understand the argument that neoliberalism has been key to how rankings have come to grip the imagination of the sector and to impact on so many of its activities, we need to first understand what neoliberalism is.

Defining neoliberalism

Many (such as Aalbers 2013; Venugopal 2015) have argued that the term neoliberalism is problematic because it is used in so many ways: to denote an economic system, an ideology, a way of structuring society, and more. It is often used interchangeably with concepts such as hyper-capitalism, consumer culture, free market, and trickle-down economics; but while these often emerge in neoliberal conditions, they are not a full explanation of what neoliberalism is.

Though neoliberalism was first mooted as an economic policy some decades before, it was only through the work of academics such as Friedman, Stigler, von Mises, and Hayek, in the second half of the 20th century, that the ideas really gained traction. And when Reagan and Thatcher began their free market experiment in the 1980s, the underpinning ideology took hold at a global scale. Since then, it has morphed from economic policy pertaining to the privatisation of social structures, the axing of much state regulation, the reduction of taxation on the wealthy, and cuts in spending on social support programmes, to become an ideology that extends to all aspects of human behaviour and to our relationship with the environment.

I have argued elsewhere that neoliberalism can be understood as the financialisation of everything we do, say, think, and produce such that ‘value’ becomes allocated explicitly through monetisation (Boughey & McKenna 2021; McKenna 2021, 2022a, 2022b). Here I want to extend that conversation a bit more in relation to higher education. Sayer (2015) suggests that neoliberalism can be understood as a set of three characteristic beliefs: (1) markets are the best form of organisation; (2) power and status should be allocated along lines of wealth, even though wealthy people do little to contribute to a wider social good; and (3) people are best conceived of as consumers rather than as part of an interdependent society. I briefly reflect on how each of these characteristics manifests in the neoliberal academy.

Markets are the best form of organisation

This is the belief that the most efficient way to allocate resources, expend energy, and care for each other is in service of a market that is untethered from restrictions and that can therefore be focused on profit and progress. This belief has contributed to the conditions of competition in the university sector, with each university positioned as an individual business fighting for its market share. While it would be naïve to suggest that universities can act without concern for the bottom line, the extent to which they are structured primarily as businesses has had numerous negative effects on the academic project. Gumport (2000:67) argues that ‘the dominant legitimating idea of public higher education has changed from higher education as a social institution to higher education as an industry’. One example of this is the casualisation of academic staff. This pedagogically problematic but financially expedient process is starkly evident in South Africa where contract staff now make up 62% of all academics (Council on Higher Education 2023).

Power should be accorded to those with wealth

Neoliberalism disregards some markers of status such as expertise, experience, or age in favour of allocation of power along lines of wealth. This happens even as the wealthy are increasingly accorded ‘earnings’ based on their possession of financial assets rather than their enterprise or endeavours (Sayer 2015). Those universities with the greatest status inevitably have the greatest wealth, accrued from investments, properties, and donations. As Hazelkorn (ed. 2017:10) indicates, in a neoliberal context, concentrations of wealth in the higher education sector reinsert ‘hierarchical differentiation and social stratification.’

Human activity is best understood through the lens of economic value

The higher education implications of a focus on economic value are many, including that the creation of knowledge is increasingly framed as a commodity in the form of publications or patents, rather than as contributing to society, sustaining the planet, or building a field. What is valued is thus increasingly that which can be exchanged for more goods (such as higher salaries and promotions) rather than that which is useful (such as community service, environmental protections, and building human understanding). Innovation towards technological and financial progress is accorded greater value than activities that cannot be readily converted into monetary gain. Related to this, the role of the university is largely conceptualised as training workers for industry. Students, in this model, are both consumers of what the university is selling and simultaneously commodities whose skilled labour will be credentialed by the institution. While it would be naïve to suggest that higher education should not prepare students for employment, the narrowing of focus to this outcome arguably strips higher education’s potential to nurture critical citizens, to foster a responsibility to the common good, or to create knowledge that serves people and the planet.

Neoliberalism as a conditioning mechanism

Neoliberalism thus functions not only as an economic frame but also as a way of understanding the social and environmental world. It is premised on the idea of individual gain and competition and on the idea that success pertains to financial wealth rather than social connection. Neoliberalism, conceptualised as the above three characteristics, is evident at a global scale, and higher education is not immune to its pernicious effects. Indeed, Peters (2019) argues that higher education is no longer an instrument of social policy so much as an integral part of the larger knowledge capitalism.

Despite the pervasive nature of neoliberalism as an ideology, with specific economic, political, and social aspects, it cannot be seen to simply ‘cause’ rankings in the higher education sector. To suggest that neoliberalism ‘causes’ rankings would be as guilty of flattening complex social phenomena (Danermark et al. 2002; Luckett 2007; Luckett & Luckett 2009) as I will shortly argue is at the heart of methods used by the ranking industry itself. Rather, taking a realist view, rankings can be seen to have emerged within the enabling conditions of neoliberalism. The emergence of rankings required a great many pieces of the puzzle to be in place – from a structural and cultural milieu that values individualism and competition and positions higher education as a marketplace, through to the work of key agents, such as the CEOs of the rankings companies and the vice-chancellors and provosts of universities who have embraced the rankings for personal and institutional gain, without much by way of reflection or critique. All of these come together to enable the emergence and uptake of rankings.

Importantly, the emergence of rankings also required the absence or minimising of constraining conditions, such as an ideology of the university as a common good, or the belief that knowledge (both its creation and dissemination) should serve the environment and all of society (Ashwin 2020; Connell 2022).

Wilbers and Brankovic (2023:746) argue that the ‘casual attribution’ of rankings to ‘unspecific phenomena such as neoliberalism’ is short-sighted. I would temper this claim on two points. Firstly, the realist concept of emergence is useful in coming to understand that neoliberal conditions enabled rather than caused the uptake of the major rankings industry, and that various other conditions also needed to be in effect for this to happen.

Secondly, Wilbers and Brankovic’s argument that rankings are not ‘caused’ by neoliberalism rests on the claim that rankings cannot be explained by increased managerialism, thereby seeming to conflate neoliberalism with increasingly top-down management. But as Hammarfelt, De Rijcke and Wouters (2017:392) explain, rankings emerge from more than top-down management, as the ‘practice of ranking ties in with deeply engrained cultural repertoires around competition and performance’. Indeed, neoliberalism is pervasive in a great many actions by actors who are not involved in management at all. Sadly, all who work and study in universities are complicit in the ways in which the academy has become neoliberal and legitimates neoliberal norms and values.

Understanding how neoliberalism has a grip on the academic project is central to understanding the resilience of rankings in the face of critique.

Critiques of rankings

Alongside the big three ranking systems produced by Quacquarelli Symonds, Times Higher Education, and Shanghai Ranking Consultancy (Academic Ranking of World Universities [ARWU]) are more than 40 other ranking systems and sub-systems. These companies enjoy massive profits with most of their income deriving not from the rankings as such but from their re-selling of the data that universities provide them for free (Fonn 2024), often selling data back to the same universities that provided it, but also to governments and corporations. Rankings are arguably mainly a vehicle by which to collect and then sell data (Usher 2022; Usher & Savino 2006). The many criticisms about rankings broadly relate to two issues – their problematic methodologies and their decontextualised nature, which serves a neo-colonial agenda.

Criticisms based on methodology

The multi-billion-dollar university rankings industry uses an array of methods to produce their lists of institutions. But while they vary in what they measure and how they measure it, they have several issues in common, all of which have been critiqued in the literature.

Lack of transparency

Almost all ranking companies refuse to indicate the precise details of their formulae (Holmes 2024b). The Assistant Director-General for Education at the United Nations Educational, Scientific and Cultural Organization (UNESCO) has argued that ranking companies ‘should make perfectly clear what criteria they are using to devise them, how they have weighted these criteria, and why they made these choices’ (Marope, Wells & Hazelkorn 2013). Despite repeated calls of this nature from various stakeholders, the specifics of what is measured and how it is weighted are kept mysterious. Saisana, d’Hombres and Saltelli (2011:175) point out that rankings ‘reflect the perspectives of their developers and do not necessarily meet the practical needs of students or of higher education policymakers’, and this is especially the case given the secrecy with which they determine their lists.

This lack of transparency is enabled by the neoliberal notion that industries, such as the rankings industry, will best flourish with as little regulation as possible, where ‘flourishing’ pertains to profit as an uninterrogated goal. But given the influence that rankings hold, there is very good reason for transparency to be demanded. Rankings shape the public’s understanding of higher education. After all, academics:

[A]ctively respond to the expectations of the academic status market, which have largely been shaped by the World University Rankings … and students, faculty members and funders turn to rankings as a lazy proxy for quality, no matter the flaws. (Gadd 2020:523)

Proxy metrics for complex social processes

Rankings purport to measure quality. In some cases, this is the quality of the research undertaken by the university, and in others it is the quality of the institution as a whole. But because the quality of almost any complex social process cannot be summarised into a simple metric, proxy measures are used instead. What is added together to determine the ranking of universities is not a measure of quality but rather a set of proxy measures, some of which are decidedly distant from the events and experiences they purport to represent.

For example, many rankings take peer evaluations of quality into account. But these are more likely an indication of an institution’s status than indicative of the quality of its teaching, of which few peers would be sufficiently aware to make a judgement. Because these reputation lists are lengthy or open-ended, they are more likely to capture a listing of which institutions are well-known than what is known about them.

This example extends to the other metrics too, as there really are no objective metrics (ed. Hazelkorn 2017). Criteria such as student selection and research output, for example, are more a reflection of the elite status of an institution than of what that institution does (Marginson & Van der Wende 2007). Teaching quality, as flattened to such metrics as the numbers of student applications and rates of rejection, entirely overlooks the idea that a higher education is about providing opportunities for a transformative relationship to knowledge (Ashwin 2020) whereby teaching might have societal impact and nurture a critical citizenry.

Furthermore, the proxy measures for the same criterion, such as research, vary from one ranking system to another, indicating how subjective this process is (Olcay & Bulu 2017).

The ARWU includes the following:

  • the number of alumni and staff who win Nobel Prizes and Fields Medals;
  • the number of highly cited researchers in 21 broad subject categories;
  • the number of papers published in Nature and Science;
  • the number of papers indexed in the Science Citation Index-Expanded and Social Sciences Citation Index;
  • the per capita academic performance of an institution.

Research-focused metrics make up 90% of the ARWU scoring. There is no justification for why Nature and Science should be privileged over other journals, nor why the specific 21 subject categories are selected, nor why ‘per capita academic performance’ excludes the impact of research on the common good. Most amusingly, these metrics led to a public spat between the Freie Universität and the Humboldt-Universität, the two institutions claiming the legacy of the former University of Berlin, as to which could claim the Nobel prizes of Albert Einstein and others (Jöns & Hoyler 2013).

Because there are now so many different ranking systems using such an array of proxy metrics and combining them in different ways, it has become possible for an institution to simply select the ranking in which it looks best and foreground that in all its marketing. Increasingly, it does not matter what the numbers mean; it matters what people think they mean. As Wilbers and Brankovic (2023) argue, the emergence of the global rankings industry was in part thanks to the normalisation of the idea that it is possible to quantify and thereby rank the complicated activities of a university.

Composite indexing

It is not just that the metrics that are collected are often poor proxies for social activities, but also that they are added together despite their often having no relation to one another (Galleli et al. 2022). All the ranking systems work by adding together the various scores in some fashion, but these scores are proxies for activities as diverse as teaching quality, good governance, research productivity, student experience, and staff satisfaction. There is little interrogation about the lack of relationship or potential overlap between these proxy metrics. Adding apples to oranges has never helped us to understand either very well.

Subjective weighting

To the methodological problems of the metrics being proxies and then being added to other unrelated proxies comes the issue of how to weight each metric. Should publications be weighted as 10% or 20% of the final score? Change the weighting of any metric and the entire list rearranges itself. The extent to which weighting variations implemented by the systems affect an institution’s position is largely felt by middle- and low-ranked universities (Pinar, Milla & Stengos 2019), and this is one reason why it is particularly dangerous for those universities that are focused on climbing the ranks to invest too heavily in just one or two metrics (Holmes 2024b).
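To make this concrete, the following minimal sketch (in Python, using entirely hypothetical institutions, proxy scores, and weighting schemes, none of which come from any actual ranking system) illustrates how a composite index is typically assembled and how changing only the weights rearranges the resulting order, even though nothing about the underlying institutions has changed.

# A minimal sketch with hypothetical data: unrelated proxy scores are
# combined into a single composite number, and changing the weights alone
# reorders the resulting league table.

def composite_rank(scores, weights):
    """Return institution names ordered by their weighted composite score."""
    totals = {
        name: sum(weights[metric] * value for metric, value in metrics.items())
        for name, metrics in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical proxy scores (0-100) for three invented universities.
scores = {
    'University A': {'research': 90, 'teaching': 55, 'reputation': 70},
    'University B': {'research': 60, 'teaching': 85, 'reputation': 75},
    'University C': {'research': 75, 'teaching': 70, 'reputation': 72},
}

# Two equally arbitrary weighting schemes.
research_heavy = {'research': 0.6, 'teaching': 0.2, 'reputation': 0.2}
teaching_heavy = {'research': 0.2, 'teaching': 0.6, 'reputation': 0.2}

print(composite_rank(scores, research_heavy))  # ['University A', 'University C', 'University B']
print(composite_rank(scores, teaching_heavy))  # ['University B', 'University C', 'University A']

The same proxy scores produce two different ‘best’ universities depending on nothing more than the weighting chosen.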

The problematic ranking methodology leads to statistical inferences that are ‘unsound’ at both institutional and national level (Saisana et al. 2011). Given that the four criticisms discussed earlier provide a clear indication of the unscientific nature of the rankings, the question remains: why would any organisation committed to scientific knowledge creation participate in this process? To understand this conundrum, we must see the uptake of rankings within the wider social context in which it occurred.

But the methods used are also criticised on other, albeit related, grounds. Many have argued that rankings are neocolonial in nature in that they reinsert a hierarchy of power and status along historical lines.

Criticisms based on the neocolonial nature of rankings

Rankings decontextualise universities

Rankings do not take history or its impact on current context into account. They do not consider how a particular university came to be and what its localised needs and aspirations might be. The indexing assumes a level playing field (Baltaru, Manac & Ivan 2022) despite ample research that shows that the colonial project included the intentional dismantling of local knowledge projects in the Global South (Heleta 2016; Mbembe 2016) and that further interference in the local academic project such as through the World Bank’s Structural Adjustment Programme continued to undermine the higher education sector years after independence (Nampota 2016).

A university’s position on various rankings is arguably not only a reflection of that history but also a means of reasserting it. In this way, the symbolic value of rankings is unrelated to the actual reality of what universities are about (Kehm 2014).

Rankings assume aspiration to homogenous form and function

Rankings are a blunt instrument – a one-size-fits-all. Almost all of them emphasise research performance, even those purporting to reflect the quality of an institution as a whole. As Jöns and Hoyler (2013) indicate:

[G]lobal rankings reflect a scalar shift in the geopolitics and geoeconomics of higher education from the national to the global that prioritizes academic practices and discourses conducted in particular places and fields of research. (p. 45)

The issue is not just that this ‘coercive isomorphism’ (Kehm 2014) is reflected by what is valued but that this leads to a narrowing of institutional logics around resource allocation and incentives.

Gadd (2020) reports that the International Network of Research Management Societies (INORMS) research evaluation working group, with members from a dozen countries, found that the:

[R]ankings with the largest audiences [ARWU, QS World University Ranking, THE WUR and US News & World Report global ranking] were found most wanting, particularly in terms of “measuring what matters” and “rigour”. None of these “flagship” rankings considered open access, equality, diversity, sustainability or other society-focused agendas. None allows users to weigh indicators to reflect a university’s mission. Yet all claim to identify the world’s best universities. (p. 523)

The glaring questions here are ‘best for whom?’ and ‘best for what?’

Rankings fail to consider the extent to which universities might seek to serve distinctive local needs precisely because they are premised on the fantasy of a globalised, corporatised higher education sector in which each institution is pitted against its market competitors:

Much clearer is the role of today’s universities in legitimating inequalities. No university president opens their mouth in public without the word ‘excellence’ floating out. The lightning-fast embrace of shonky league tables is a sign of the ideological work being done. (Connell 2022:104)

Global North, English language

Closely related to the previous two critiques is the issue of what forms of knowledge and knowing are excluded from global rankings. Given the ways in which the academic publishing industry reinforces knowledge dissemination in English through the larger Global North publishing houses, it is perhaps not surprising that many have suggested that what is legitimated as research output is in fact a very particular slice of knowledge dissemination, at the cost of recognising publications in other languages and parts of the world and other forms of knowledge dissemination (Jöns & Hoyler 2013).

The media is adept at citing global rankings as if they were indeed representative of local quality and it is rare that the press engages with their ‘questionable legitimacy in a global South context’ (Shahjahan et al. 2017). Jansen (2018) suggests that though some universities claim bragging rights on the basis of rankings, what these systems really do is remind us of the inequities inherent in the global system of knowledge production.

The power of rankings

Rankings are incredibly pervasive and influential. Unfortunately, most of the many academics researching and critiquing the problems embedded in them do so in specialist journal articles unlikely to be read by the general public. Rankings, meanwhile, make good news stories and so continue to capture the minds of a great many stakeholders. They also have an impact on what happens within universities. For example, Rhein and Nanni (2023) argue that many Asian universities, including in Thailand where their research is based, have shifted away from a concern with teaching quality to a concern with research output. As they say, these behaviours are ‘predictable and rational adaptations to the game being imposed upon them’ (Rhein & Nanni 2023:55). Adam (2024:56) shows that rankings are very influential in Canada as they are ‘integral to … universities’ strategic positioning, legitimacy managing, and revenue-generating efforts’. Ishikawa (2009) shows how the rise of rankings in Japan created an image of global elite universities in the country at odds with many cherished academic traditions.

Koenings, Di Meo and Uebelmesser (2020), drawing on German data, argue that rankings influence institutional choice, with a particularly strong influence on international students. Shahjahan et al. (2017) show how Denmark and India have directly incorporated rankings in national policies. Taiwan’s Five Year – Five Billion programme is aimed at improving national showings on rankings, and China and Russia have both targeted rankings, recommending various shifts in higher education practices to this end, for example, incentivising publication in English. Countries such as Denmark and the Netherlands have even used the graduating university’s ranking to allocate points to potential immigrants.

Baltaru et al. (2022) demonstrate that elite universities are unaffected financially by small shifts in rankings, but that all other universities enjoy or suffer financial consequences, and they argue that this is a social justice issue. As they indicate, ‘Market competition appears to have reinforced hierarchies rather than alleviating them’ (Baltaru et al. 2022:2331). When UK universities’ ranks worsen, they have been found to receive a small but statistically significant reduction in both the number of applications and the average school-leaving scores of applicants (Broecke 2015). In some cases, the academic ‘arms race’ of rankings has even damaged institutions to the point of bankruptcy (Kehm 2014:107). But change is afoot.

Universities such as Columbia University (US), Utrecht University (the Netherlands), Zurich University (Switzerland), and Rhodes University (South Africa), as well as several universities in India, have now opted out (Holmes 2024b). Individual schools have also withdrawn from discipline-specific ranking systems, such as the 17 US medical schools and the law schools (including Yale and Harvard) that have done so, with the dean of Yale Law School, Heather Gerken, stating that, ‘We have reached a point where the rankings process is undermining the core commitments of the legal profession’ (Harris 2022).

When the Universiti Sains Malaysia withdrew, its vice-chancellor, Tan Sri Dzulkifli Abdul Razak, stated:

A university has its own personality, vision, and uniqueness … Diversity is wealth, and the more unique is the composition of its diversity, the better it will be for the university. It is not a factory, which produces a uniform lifeless being… (Tan & Goh 2014:498)

And he went on to argue that local universities might end up playing a catch-up game in their attempts to improve their position in the rankings, and:

[L]ike most catch-up games, by the time we are about to do so, the benchmark will move as the rules are changed by the game-setter. So, there is no end to this! (Tan & Goh 2014:498)

Unfortunately, the rankings industry continues to rank universities that refuse to participate, relying on incomplete publicly available data, and it does not indicate to the public that such rankings are based on partial data sets. No university can avoid some impact of rankings, even those that have taken the ethical stand not to participate (Kehm 2014).

There are thus overwhelming reasons why rankings are unscientific and why their uptake by institutions tasked with creating and disseminating knowledge is therefore extremely problematic. Furthermore, the overview offered here focused on the methodologies of these rankings and did not engage with the many other concerns universities should have, including, for example, the problematic business model used by the industry and the extent to which some universities have gamed the system through buying researcher affiliations, falsifying data, and interpreting some of the criteria rather loosely in their data reporting (Ansede 2023; Calderon 2020; Corricello & Myles 2021; Kutner 2014; SIRIS Academic 2023).

The puzzle as to how rankings have taken hold in the way they have needs to be understood within the wider neoliberal turn in our universities as outlined at the start. As explained, part of the neoliberal turn involves the metrification of human endeavours whereby what is valued is that which can be counted towards status and power. ‘In a neoliberal system, the emphasis shifts to the aspects of the university that can be “counted”, for example, profit, efficiency, and rankings’ (Knoetze 2024:1678). Thus, it is unsurprising that the rise of rankings happened as part of a wider trend in the Social Sciences towards using metrics to describe and understand complex social phenomena. This meant that there was a ‘growing interest among scholars in measuring things like output productivity or prestige’ (Wilbers & Brankovic 2023). As Hammarfelt et al. (2017:392) explain, Social Science research increasingly legitimated numbers as data sets in place of attempts to capture and analyse messy human realities. They further point to the sociopolitical context of early psychology, in which statistics were called upon, largely through the eugenics movement, to counter ‘a perceived decline in great men’. Over time, this concern for the elite individual, characterised by a particular breeding and upbringing, spread to a concern for the university that demonstrated excellent, elite scientific values. As Peters (2019) argues, rankings are part of larger moves towards performativity and the dominance of technoscience. He suggests that the result is a collective anxiety that does damage both to individual institutions and to the sector as a whole.

What is to be done?

Academics who take seriously the potential for the university to be a common good in service of people and the planet need to reject rankings collectively. It is only through collective action that these perverse systems can be toppled. This has become more possible as the number of high-profile institutions rejecting the rankings increases.

A larger issue is the extent to which neoliberal conditions enable universities to act in the ways that they do, including and going beyond systems such as rankings. Such practices include the metrification of the student experience; the use of generative artificial intelligence (AI) in knowledge creation, dissemination, and learning; the commodification of publications; the surveillance of students; and more.

Collectively calling out institutions that participate in unscientific rankings is thus part of a larger project. There are a great many battles to be fought in imagining the university as a common good. After all, ‘[r]eproducing privileged elites is not a legitimate use of social resources’ (Connell 2022:171). Bringing about change will require all who work and study in the academy to reflect on their understanding of the purposes of higher education for society and the environment at large and then to bring those purposes to bear in decision making about how we spend our time and our resources.

Conclusion

In demonstrating some of the many critiques of the ranking industry, I have argued that we need to reflect on how it is that such a problematic process has captured the higher education sector. In an era where there is so little trust in science, why do universities actively participate in something so patently unscientific? To answer this question, we need to look at the wider conditions in which rankings emerged.

Rankings would not have emerged in the ways that they have and taken hold, despite repeated critiques from researchers, if it were not for the extent to which we have commodified knowledge, positioned students as customers, and positioned universities as businesses competing against each other.

As Rhein and Nanni (2023:63) argue, rankings are ‘an expensive game that cannot be won’. But as the ‘age of deference to global rankings’ (Holmes 2024a) comes to an end, we need to focus on what conditions would be better than neoliberalism. And for that we must answer a new question: what would conditions be like if the higher education sector committed itself to being a common good and directed its resources to that end?

Acknowledgements

The author would like to thank Shiloh Marsh for her role as a research assistant and critical reader.

Competing interests

The author declares that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.

Author’s contributions

S.M. is the sole author of this research article.

Ethical considerations

This article followed all ethical standards for research without direct contact with human or animal subjects.

Funding information

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Data availability

Data sharing is not applicable to this article as no new data were created or analysed in this study.

Disclaimer

The views and opinions expressed in this article are those of the author and are the product of professional research. The article does not necessarily reflect the official policy or position of any affiliated institution, funder, agency, or that of the publisher. The author is responsible for this article’s results, findings, and content.

References

Aalbers, M.B., 2013, ‘Neoliberalism is dead … long live neoliberalism!’, International Journal of Urban and Regional Research 37(3), 1083–1090. https://doi.org/10.1111/1468-2427.12065

Adam, E., 2024, ‘A reappraisal of global university rankings’ influence in Canada: A senior university leaders’ perspective’, Journal of Further and Higher Education 48(1), 56–69. https://doi.org/10.1080/0309877X.2023.2253430

Ansede, M., 2023, ‘Saudi Arabia pays Spanish scientists to pump up global university rankings’, EL PAÍS English, viewed 22 February 2024, from https://english.elpais.com/science-tech/2023-04-18/saudi-arabia-pays-spanish-scientists-to-pump-up-global-university-rankings.html.

Ashwin, P., 2020, Transforming University education: A manifesto, Bloomsbury Academic, London.

Baltaru, R.-D., Manac, R.-D. & Ivan, M.-D., 2022, ‘Do rankings affect universities’ financial sustainability? – financial vulnerability to rankings and elite status as a positional good’, Studies in Higher Education 47(11), 2323–2335. https://doi.org/10.1080/03075079.2022.2061447

Boughey, C. & McKenna, S., 2021, Understanding Higher Education: Alternative Perspectives, African Minds, Cape Town.

Broecke, S., 2015, ‘University rankings: Do they matter in the UK?’, Education Economics 23(2), 137–161. https://doi.org/10.1080/09645292.2012.729328

Calderon, A., 2020, ‘New rankings results show how some are gaming the system’, University World News, viewed 22 February 2024, from https://www.universityworldnews.com/post.php?story=20200612104427336.

Connell, R., 2022, The good university: What universities actually do and why it’s time for radical change, Bloomsbury Academic, London.

Corricello, M. & Myles, K., 2021, ‘Gaming the rankings: Race to the top’, the Epic, viewed 22 February 2024, from https://lhsepic.com/11474/in-depth/11474/.

Council on Higher Education, 2023, Vital Stats: Public and Private Higher Education data 2021, Council on Higher Education, Pretoria.

Danermark, B., Ekström, E., Jakobsen, L. & Karlsson, J.Ch., 2002, ‘Explaining society, critical realism in the social sciences’, Acta Sociologica 45(3), 246–250. https://doi.org/10.1177/000169930204500313

Fonn, S., 2024, ‘University rankings are unscientific and bad for education: Experts point out the flaws’, The Conversation, viewed 22 February 2024, from http://theconversation.com/university-rankings-are-unscientific-and-bad-for-education-experts-point-out-the-flaws-223033.

Gadd, E., 2020, ‘University rankings need a rethink’, Nature 587(7835), 523. https://doi.org/10.1038/d41586-020-03312-2

Galleli, B., Teles, N.E.B., Santos, J.A.R.D., Freitas-Martins, M.S. & Hourneaux Junior, F., 2022, ‘Sustainability university rankings: A comparative analysis of UI green metric and the times higher education world university rankings’, International Journal of Sustainability in Higher Education 23(2), 404–425. https://doi.org/10.1108/IJSHE-12-2020-0475

Gumport, P.J., 2000, ‘Academic restructuring’, Higher Education 39(1), 67–91. https://doi.org/10.1023/A:1003859026301

Hammarfelt, B., De Rijcke, S. & Wouters, P., 2017, ‘From eminent men to excellent universities: University rankings as calculative devices’, Minerva 55(4), 391–411. https://doi.org/10.1007/s11024-017-9329-x

Harris, A., 2022, ‘Why Yale Law school left the U.S. News & World Report Rankings’, The Atlantic, viewed 22 February 2024, from https://www.theatlantic.com/ideas/archive/2022/12/us-news-world-report-college-rankings-yale-law/672533/.

Hazelkorn, E. (ed.), 2017, Global rankings and the geopolitics of higher education: Understanding the influence and impact of rankings on higher education, policy and society, Routledge, Oxford.

Heleta, S., 2016, ‘Decolonisation of higher education: Dismantling epistemic violence and Eurocentrism in South Africa’, Transformation in Higher Education 1(1), a9. https://doi.org/10.4102/the.v1i1.9

Holmes, R., 2024a, ‘‘THE’ rankings: What happens to universities that leave?’, University World News, viewed 14 May 2024, from https://www.universityworldnews.com/post.php?story=20240423081048420.

Holmes, R., 2024b, ‘Global rankings: The age of deference is coming to an end’, University World News, viewed 14 May 2024, from https://www.universityworldnews.com/post.php?story=20240202121114196.

Ishikawa, M., 2009, ‘University rankings, global models, and emerging hegemony: Critical analysis from Japan’, Journal of Studies in International Education 13(2), 159–173. https://doi.org/10.1177/1028315308330853

Jansen, J., 2018, ‘Rankings not whole story’, Herald Live, viewed 22 February 2024, from http://www.heraldlive.co.za/opinion/2018/02/15/jonathanjansen-rankings-not-whole-story/.

Jöns, H. & Hoyler, M., 2013, ‘Global geographies of higher education: The perspective of world university rankings’, Geoforum 46, 45–59. https://doi.org/10.1016/j.geoforum.2012.12.014

Kehm, B.M., 2014, ‘Global University rankings – Impacts and unintended side effects’, European Journal of Education 49(1), 102–112. https://doi.org/10.1111/ejed.12064

Knoetze, R., 2024, ‘Cultivating criticality in a neoliberal system: A case study of an English literature curriculum at a mega distance university’, Higher Education 87(6), 1677–1692. https://doi.org/10.1007/s10734-023-01084-y

Koenings, F., Di Meo, G. & Uebelmesser, S., 2020, ‘University rankings as information source: Do they play a different role for domestic and international students?’, Applied Economics 52(59), 6432–6447. https://doi.org/10.1080/00036846.2020.1795075

Kutner, M., 2014, ‘How Northeastern University gamed the college rankings’, Boston Magazine, viewed 22 February 2024, from https://www.bostonmagazine.com/news/2014/08/26/how-northeastern-gamed-the-college-rankings/.

Luckett, K., 2007, ‘Methodology matters: Possible methods to improve quality’, Perspectives in Education 25(3), 1–11.

Luckett, K. & Luckett, T., 2009, ‘The development of agency in first generation learners in higher education: A social realist analysis’, Teaching in Higher Education 14(5), 469–481. https://doi.org/10.1080/13562510903186618

Marginson, S. & Van der Wende, M., 2007, Globalisation and higher education, OECD Education Working Papers, No. 8, OECD Publishing, Paris.

Marope, P.T.M., Wells, P.J. & Hazelkorn, E., 2013, Rankings and accountability in higher education: Uses and misuses, UNESCO Publishing, Paris.

Mbembe, A.J., 2016, ‘Decolonizing the university: New directions’, Arts and Humanities in Higher Education 15(1), 29–45. https://doi.org/10.1177/1474022215618513

McKenna, S., 2021, ‘The politics of postgraduate education: Supervising in a troubled world’, in P. Rule, E. Bitzer & L. Frick (eds.), The global scholar: Implications for postgraduate studies and supervision, pp. 97–112, African Sun Media, Stellenbosch.

McKenna, S., 2022a, ‘Neoliberalism’s conditioning effects on the university and the example of proctoring during COVID-19 and since’, Journal of Critical Realism 21(5), 502–515. https://doi.org/10.1080/14767430.2022.2100612

McKenna, S., 2022b, ‘Plagiarism and the commodification of knowledge’, Higher Education 84, 1283–1298. https://doi.org/10.1007/s10734-022-00926-5

Nampota, T., 2016, Emergent governance practices in the University of Malawi following reform implementation from 1997 to 2013, PhD thesis, Centre for Higher Education Research, Teaching and Learning, Rhodes University.

Olcay, G.A. & Bulu, M., 2017, ‘Is measuring the knowledge creation of universities possible?: A review of university rankings’, Technological Forecasting and Social Change 123, 153–160. https://doi.org/10.1016/j.techfore.2016.03.029

Peters, M.A., 2019, ‘Global university rankings: Metrics, performance, governance’, Educational Philosophy and Theory 51(1), 5–13. https://doi.org/10.1080/00131857.2017.1381472

Pinar, M., Milla, J. & Stengos, T., 2019, ‘Sensitivity of university rankings: Implications of stochastic dominance efficiency analysis’, Education Economics 27(1), 75–92. https://doi.org/10.1080/09645292.2018.1512560

Rhein, D. & Nanni, A., 2023, ‘The impact of global university rankings on universities in Thailand: Don’t hate the player, hate the game’, Globalisation, Societies and Education 21(1), 55–65. https://doi.org/10.1080/14767724.2021.2016375

Saisana, M., d’Hombres, B. & Saltelli, A., 2011, ‘Rickety numbers: Volatility of university rankings and policy implications’, Research Policy 40(1), 165–177. https://doi.org/10.1016/j.respol.2010.09.003

Sayer, A., 2015, Why we can’t afford the rich, rev. edn., Policy Press, Bristol.

Shahjahan, R.A., Blanco Ramirez, G. & Andreotti, V.D.O., 2017, ‘Attempting to imagine the unimaginable: A decolonial reading of global university rankings’, Comparative Education Review 61(S1), S51–S73. https://doi.org/10.1086/690457

SIRIS Academic, 2023, The affiliation game between Spanish and Saudi Arabian higher education & research institutions, Report, SIRIS Academic, Barcelona.

Tan, Y.S. & Goh, S.K., 2014, ‘International students, academic publications and world university rankings: The impact of globalisation and responses of a Malaysian public university’, Higher Education 68(4), 489–502. https://doi.org/10.1007/s10734-014-9724-2

Usher, A., 2022, ‘Two rankings stories you may have missed’, Higher Education Strategy Associates, viewed 22 February 2024, from https://higheredstrategy.com/two-rankings-stories-you-may-have-missed/.

Usher, A. & Savino, M., 2006, A world of difference: A global survey of university league tables, Educational Policy Institute, Toronto, ON.

Venugopal, R., 2015, ‘Neoliberalism as concept’, Economy and Society 44(2), 165–187. https://doi.org/10.1080/03085147.2015.1013356

Wilbers, S. & Brankovic, J., 2023, ‘The emergence of university rankings: A historical‑sociological account’, Higher Education 86(4), 733–750. https://doi.org/10.1007/s10734-021-00776-7


