Douglass (left) and Kochetkov advocate global university rankings reform. Photo: COURTESY OF JOHN AUBREY DOUGLASS and DMITRY KOCHETKOV
In the fiercely competitive landscape of global higher education, university rankings have become a standard yardstick of academic strength and a guide for parents and students choosing schools. This quantified evaluation system not only reshapes the development trajectories of universities but also profoundly influences the direction of educational policy across nations. As rankings become benchmarks for school choice and policy directives, critical reflection within academia on these ranking systems has prompted a reexamination of the intrinsic value of higher education. Addressing the origins, impacts, and flaws of global university rankings, CSST spoke with John Aubrey Douglass, a senior research fellow in Public Policy and Higher Education at the Center for Studies in Higher Education at the University of California, Berkeley, and Dmitry Kochetkov, an associate professor in the Department of Probability Theory and Cybersecurity at the People’s Friendship University of Russia. They discussed the operational logic behind ranking systems, their challenges, and innovative approaches to evaluating higher education.
From quality appraisal to numbers game
CSST: Why do university rankings exist?
Douglass: The arrival of global rankings of universities is directly related to ministries in most countries seeking some way to evaluate the value and quality of their growing network of universities, often linked to a valid concern that their higher education institutions are not competitive with other top universities—mostly in the US and the UK. It also relates to a growing realization in the early 2000s that universities can and should be more significant drivers of national economic growth and socioeconomic mobility, and a sense that their universities are not doing enough for national development.
Kochetkov: Initially, university rankings were created as a marketing and benchmarking tool. In the early 21st century, countries in East Asia started implementing so-called “excellence initiatives” to achieve some kind of “world-class” status in research and higher education. As a result, there was a need to quantify the task of “how to catch up with Harvard.” This led to the widespread use of rankings in almost all aspects of higher education.
CSST: What impact do rankings have?
Kochetkov: Parents base their decisions about where to send their children to study on rankings. Governments and universities base their strategies on ranking indicators. Even professors and researchers, despite criticizing the rankings, base their career decisions on the ranking position of their university. Over the past twenty years, global university rankings have become a super-indicator. The problem is that the underlying idea, that all aspects of a university’s activities can be summarized in a single figure, is simply not true.
Douglass: Initially, and even today, most of the focus of rankings and ministries, and in turn universities, has been on limited and often flawed data on research productivity, including journal publications and citation indexes. This, in turn, drove ministries to announce goals of placing more institutions among the top 100 or similar in these rankings, and to pursue reforms and policies that rewarded institutions and individual faculty for publications and citations. The result has been a massive increase in the number of journals, a proliferation of often meaningless citations and gaming, and a general decline in what these limited metrics actually say about university and faculty productivity, or about the quality and innovation of their work.
Flaws in the ranking indicator system
CSST: What factors determine global university rankings?
Douglass: Global university rankings are fixated on a narrow band of data and prestige scores that ignore much of the teaching and learning, research, and public service activities of the best universities. Citation indexes are biased toward the sciences and engineering, biased in which peer-reviewed journals are included (largely US and European, and in the English language), and tilted toward a select group of brand-name universities that always rank high in surveys of prestige, the number of Nobel laureates, and other markers of academic status.
It is not that these indicators are not useful and informative. But government ministries, and many university leaders, are placing too much faith in a paradigm that is not achievable or useful for the economic and socioeconomic mobility needs of the societies they serve. Ministries aim for a subset of their universities to inch up in various rankings by building accountability systems that influence the behavior of university leaders, ultimately influencing faculty. As a result, much of the current policy-making and funding by ministries responsible for higher education is fixated on the “world-class” universities and a ranking-focused mentality.
Kochetkov: We used to think of “rankings” as a general term, but they are actually quite different from one another. There are rankings that use composite indicators (league tables), such as the Quacquarelli Symonds (QS) World University Rankings and the Times Higher Education (THE) World University Rankings. Most of the methodological criticism relates to these composite indicators. No weighting scheme can be considered correct, because it is impossible to combine different things into a single figure. It is like buying 0.5 of a pair of trousers, 0.2 of a shirt, and 0.3 of a coat at a clothing store: you would not be allowed into a good restaurant dressed like that. Suppose the weights are set by one group of experts; another group of experts would give a different answer. How can we determine who is right? The idea itself is flawed. QS and THE also use questionnaires, which create additional methodological issues, primarily related to response rates.
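To make the weighting problem concrete, here is a minimal sketch in Python. The universities, indicator scores, and weights are entirely hypothetical; the point is only that two equally defensible weighting schemes applied to the same data can reverse the ordering.

```python
# A hypothetical illustration of the weighting problem in composite
# ("league table") rankings: the same indicator scores produce opposite
# orderings under two equally plausible weighting schemes.

# Normalized indicator scores (0-1) for two fictional universities.
scores = {
    "University A": {"research": 0.90, "teaching": 0.55, "international": 0.60},
    "University B": {"research": 0.70, "teaching": 0.85, "international": 0.75},
}

# Two expert panels propose different, equally defensible weights.
panels = {
    "Panel 1": {"research": 0.6, "teaching": 0.3, "international": 0.1},
    "Panel 2": {"research": 0.3, "teaching": 0.5, "international": 0.2},
}

def composite(indicators, weights):
    """Weighted sum of indicator scores, the core of a league-table score."""
    return sum(indicators[key] * weight for key, weight in weights.items())

for panel, weights in panels.items():
    ranked = sorted(scores, key=lambda uni: composite(scores[uni], weights), reverse=True)
    print(panel, "ranking:", " > ".join(ranked))

# Output:
# Panel 1 ranking: University A > University B
# Panel 2 ranking: University B > University A
```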
The CWTS Leiden Ranking and U-Multirank do not employ a composite indicator. Therefore, they can be used for benchmarking purposes, but it is important to recognize the limitations associated with their use. Another issue is that most rankings are based on proprietary data from sources such as Scopus and Web of Science, which means that the source data is not accessible to end users. However, CWTS has taken steps to address this issue by publishing open editions of the Leiden Ranking based on OpenAlex data. This is a significant step towards increasing transparency.
Fulfilling mission, reconstructing values
CSST: Without the rankings, how can we evaluate universities? How else can a university prove its excellence?
Douglass: The answers to your questions depend on the purpose of these rankings. One problem is that universities have a broad mission in teaching and learning, research and knowledge production, and public service and engagement, with outputs and societal impacts that are not easily quantifiable and for which reliable data are not available. One reason for the focus on research output, beyond a warped perception of what constitutes prestige and quality, is that internationally comparable data exist. The QS and THE rankings attempt to incorporate variables like social impact or the “internationalization” of universities (reduced to the number of international students), but these often rely on institutional reporting that is likely biased. Institutions are desperate to climb the rankings because rankings affect the amount of funding they receive from governments.
We need to enter a period where institutions gain greater autonomy and the financial capacity to foster a culture of self-improvement and evidence-based management. To this end, I earlier developed the concept of the “New Flagship University.” It offers a holistic and ecological vision of what makes the best and most influential national universities. This model provides a lens for viewing the past and future of Asia’s leading national universities, and outlines their broader purpose and goals.
An important tenet of the New Flagship model is that there are limits to the effectiveness of governmental and ministerial interventions in university operations. Most universities in Asia, and within Europe and elsewhere, have had weak internal cultures of accountability and management. Government-driven interventions and funding incentives have pushed much-needed reform in much of the world. But ultimately, leading universities need to have greater control and build their own internal academic cultures through efforts focused on institutional self-improvement. The New Flagship model attempts to decipher, and provide examples of, pathways for building this culture and for internal accountability practices that bolster academic management.
Kochetkov: Every university is different in some ways and stronger in certain areas. For instance, it’s no secret that students at top research universities often complain about the quality of education. On the other hand, a smaller university can offer a unique learning experience or contribute significantly to solving local issues. The idea of measuring every university by the same standards inevitably leads to a lack of authenticity. Instead, the “More Than Our Ranks” initiative prioritizes uniqueness. This initiative partners with the Leiden Ranking, allowing users to not only compare quantitative indicators but also learn more about each university and its mission.
We must acknowledge that it is a challenging process to abandon rankings. It is also important to recognize that they carry significant marketing value. We are already seeing how these trends are slowing down in Europe due to the financial crisis in higher education. Besides, this is a complex cultural shift, and such transformations take time. The change begins with each of us.
In my study, “University rankings in the context of research evaluation: A state-of-the-art review,” I proposed starting with four steps. First, stop evaluating academics based on university ranking indicators, and start rewarding the contributions of faculty and researchers in all areas of university activity. Second, stop constructing university strategies around university rankings: do not use ranking tiers in analytical reports for management decision-making; instead, focus on the actual contributions a university makes (scientific, educational, and societal). Third, stop evaluating universities based on ranking indicators; every university has a unique mission, and only the fulfillment of this mission really matters. Fourth, stop using ranking information in national strategies and similar ambitions; only universities’ contributions to national and global goals should be considered.
Edited by LIU YUWEI