University rankings need a rethink
Rankings are a primary factor for students, faculty members, and funders when choosing one institution over another.

Researchers often criticize the indicators that committees use to judge them, and universities have been pushed to rethink how and when publications should be used to assess research and researchers.

The use of rankings to assess universities also needs to be reconsidered. League tables, such as those published by the Academic Ranking of World Universities (ARWU) and the Times Higher Education World University Rankings (THE WUR), influence eligibility for scholarships and other funds, and they shape where scholars and students decide to work and study. Institutions in different countries also receive very different levels of government funding, and this can influence the rankings.

Two years ago, the International Network of Research Management Societies (INORMS), a collective of research-management organizations, started an initiative with members from different countries whose aim is fairer university rankings.

The research-management literature is full of critiques of rankings. Rankings often rest on methodologies that use inappropriate indicators, such as counting Nobel-prizewinning alumni as evidence of quality education. Some rankings favour publications in English, and rewarding institutions that did well in past rankings means that older, wealthier universities in Europe and North America often sit at the top of the charts. Rankings also tend to overlook indicators such as teaching quality and societal impact.

Despite these flaws, rankings remain a primary factor for students, faculty members, and funders when choosing one institution over another. As a result, some students' talents are misdirected, institutions' incomes are affected, and favoritism is reinforced.

The group developed its criteria by reviewing the research literature and soliciting feedback through community discussions open to academics, research professionals, and related groups. It distilled the feedback into 20 principles covering good governance (such as declaring financial conflicts of interest), transparency (of methods, aims, and data), measuring what matters (in line with a university's mission), and rigour (indicators that are good proxies for what they claim to measure).

These principles were then converted into a tool for assessing rankings both qualitatively and quantitatively. Six of the world's highest-profile rankers were assessed by international specialists, and the rankers were also invited to self-assess.

None of these well-known rankings with large audiences considers open access, equality, diversity, sustainability, or other society-focused agendas. None allows users to weight indicators to reflect a university's mission; instead, each claims to produce the definitive list of the world's best universities.

Rankers might argue that the group's principles are unrealistic and that simple metrics should suffice. This initiative aimed to raise awareness of how rankings are compiled and to bring more independent oversight to the process, making the rankings fairer.

About The Author

Kristina Kolarikova

Kristina has been working in the international travel industry for the past 11 years. For the last five years, she has been managing the operations of an international educational tour-operator company. She is very knowledgeable about international education and keeps up to date with the changes the sector is going through. She loves travelling, reading, hiking, and writing for our EdMagazine.
