Is it possible to rank universities using fewer indicators? A study on five international university rankings
Authors: Güleda Doğan, Umut Al
Journal: Aslib Journal of Information Management, Vol. 71 No. 1, 2019, pp. 18-37
Date: 21 January 2019
DOI: https://doi.org/10.1108/AJIM-05-2018-0118
Subject: Library & information science; Information behaviour & retrieval; Information & knowledge management; Information management & governance; Information management
Is it possible to rank universities using fewer indicators? A study on five international university rankings
Güleda Doğan and Umut Al
Department of Information Management, Hacettepe University, Ankara, Turkey
This article is based on Güleda Doğan's (2017) PhD dissertation and was supported in part by a research grant from the Hacettepe University Scientific Research Projects Coordination Unit (SBB-2016-11378).
Abstract
Purpose – The purpose of this paper is to analyze the similarity of intra-indicators used in research-focused international university rankings (Academic Ranking of World Universities (ARWU), NTU, University Ranking by Academic Performance (URAP), Quacquarelli Symonds (QS) and Round University Ranking (RUR)) over the years, and to show the effect of similar indicators on overall rankings for 2015. The research questions addressed in this study in accordance with these purposes are as follows: At what level are the intra-indicators used in international university rankings similar? Is it possible to group intra-indicators according to their similarities? What is the effect of similar intra-indicators on overall rankings?
Design/methodology/approach – Indicator-based scores of all universities in the five research-focused international university rankings, for all years in which they were ranked, form the data set of this study for the first and second research questions. The authors used multidimensional scaling (MDS) and the cosine similarity measure to analyze the similarity of indicators and to answer these two research questions. Indicator-based scores and overall ranking scores for 2015 are used as data, and the Spearman correlation test is applied to answer the third research question; an illustrative sketch of this procedure follows the abstract.
Findings – Results of the analyses show that the intra-indicators used in ARWU, NTU and URAP are highly similar and that they can be grouped according to their similarities. The authors also examined the effect of similar indicators on the 2015 overall ranking lists for these three rankings. NTU and URAP are least affected by the omitted similar indicators, which means it is possible for these two rankings to create overall ranking lists very similar to the existing ones using fewer indicators.
Research limitations/implications – CWTS, Mapping Scientific Excellence, Nature Index and SCImago Institutions Rankings (until 2015) are not included in the scope of this paper, since they do not create overall ranking lists. Likewise, Times Higher Education, CWUR and US News are not included because they do not present indicator-based scores. Required data were not accessible for QS for 2010 and 2011. Moreover, although QS ranks more than 700 universities, only the first 400 universities in the 2012-2015 rankings could be analyzed. Although QS's and RUR's data were analyzed in this study, it was statistically not possible to reach any conclusion for these two rankings.
Practical implications – The results of this study may be considered mainly by ranking bodies and by policy- and decision-makers. The ranking bodies may use the results to review the indicators they use, to decide which indicators to use in their rankings, and to question whether it is necessary to continue producing overall rankings. Policy- and decision-makers may also benefit from the results of this study by reconsidering the use of overall ranking results as an important input in their decisions and policies.
Originality/value – This study is the first to use MDS and the cosine similarity measure for revealing the similarity of ranking indicators. Ranking data are skewed, which requires nonparametric statistical analysis; therefore, MDS is used. The study covers all ranking years and all universities in the ranking lists, and thus differs from similar studies in the literature that analyze data for shorter time intervals and only for top-ranked universities. Based on the literature review, the similarity of intra-indicators for URAP, NTU and RUR is analyzed for the first time in this study.
Keywords Multidimensional scaling, International university rankings, Ranking indicators,
Redundant indicators, Research-focussed indicators, Similarity of intra-indicators
Paper type Research paper
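As a concrete illustration of the approach summarized in the abstract, the following sketch shows how the cosine similarity between indicator score vectors, an MDS configuration of the resulting dissimilarities, and a Spearman correlation between full and reduced overall rankings could be computed. This is a minimal sketch, not the authors' code: the indicator names, the equal weighting and the synthetic scores are placeholder assumptions, whereas the actual analyses rely on the rankings' published indicator-based scores and each ranking's own weighting scheme.

# Minimal sketch with assumed indicator names and synthetic data (not the authors' code).
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.manifold import MDS
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_universities = 500
indicators = ["publications", "citations", "top10_papers", "intl_collaboration"]  # placeholders

# Rows = universities, columns = indicator-based scores (synthetic here).
scores = rng.random((n_universities, len(indicators)))
scores[:, 1] = 0.9 * scores[:, 0] + 0.1 * rng.random(n_universities)  # make two indicators deliberately similar

# RQ1: pairwise cosine similarity between indicator score vectors (columns).
similarity = cosine_similarity(scores.T)
print(np.round(similarity, 2))

# RQ2: MDS on the corresponding dissimilarities to see how the indicators group.
dissimilarity = 1 - similarity
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissimilarity)
print(coords)

# RQ3: effect of dropping one of the similar indicators on the overall ranking
# (equal weights purely for illustration; each ranking applies its own weights).
overall_full = scores.mean(axis=1)
overall_reduced = np.delete(scores, 1, axis=1).mean(axis=1)
rho, p = spearmanr(overall_full, overall_reduced)
print(f"Spearman rho between full and reduced overall rankings: {rho:.3f} (p = {p:.3g})")

Using MDS with a precomputed dissimilarity matrix avoids distributional assumptions about the skewed indicator scores, which is why it is paired here with the rank-based Spearman correlation rather than a parametric alternative.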
Introduction
The early history of ranking universities started in the USA and goes back to the 1870s. Ranking universities became widespread in the 1980s with the popular press in the USA, particularly US News and World Report's Best Colleges (Stuart, 1995). Although numerous
national rankings were developed following Best Colleges (see http://ireg-observatory.org/
en/ranking-profile), the first international ranking of universities was done by Shanghai Jiao
Tong University in 2003, under the name Academic Ranking of World Universities
(ARWU). While the main aim of ARWU was to determine the position of Chinese
universities among other world universities and to identify their strengths and weaknesses
(ARWU, 2015a), ARWU attracted intense interest along with the discussions on the subject
(Liu et al., 2005; Van Raan, 2005a, b). However, these discussions did not prevent the rapid
increase in the number of rankings. Ranking Web of Universities (http://webometrics.info/)
was developed by Cybermetrics Lab, an initiative of the Spanish National Research Council,
in 2004, immediately after ARWU's first international university ranking list in 2003.
Another ranking that emerged in 2004 out of a partnership between Times Higher
Education (THE) and Quacquarelli Symonds Ltd (QS) was Times Higher Education
Supplement, followed by THE-QS World University Rankings, which then continued as two
different rankings from 2010: QS World University Rankings and THE World University
Rankings (Holmes, 2010, p. 91). The increase in the number of international university
rankings continued with uniRank, formerly named 4 International Colleges & Universities (4icu.org), which ranks 12,358 colleges and universities in 200 countries
according to their web popularity (uniRank, 2017). Another international university ranking
is the NTU Ranking: Performance Ranking of Scientific Papers for World Universities
initiated by the Higher Education Evaluation and Accreditation Council of Taiwan in 2007 and transferred to National Taiwan University in 2012 (NTU, 2017a). CWTS Leiden
Ranking (www.leidenranking.com/) was introduced by the Leiden University Center for
Science and Technology Studies (CWTS) in 2008 with a different approach to rankings.
CWTS Leiden Ranking provides indicator-based rankings, but not an overall ranking
that weights the indicators. SCImago Institutions Rankings (SIR) has been ranking institutions
in different sectors including universities by research performance, innovation and web
visibility since 2009 (SIR, 2017a, b). In 2010, three international rankings emerged:
University Ranking by Academic Performance (URAP), Round University Ranking (RUR),
and Universitas Indonesia (UI) GreenMetric University Ranking. UI GreenMetric University
Ranking focuses on green campuses and sustainability (UI GreenMetric, 2015). RUR ranks
universities according to 20 different indicators under the categories of research, education,
international diversity and financial sustainability (RUR, 2017). URAP (www.urapcenter.
org), developed by the Informatics Institute of the Middle East Technical University in
Turkey, ranks universities by academic performance and lists 2,000 universities. Starting in
2012, Youth Incorporated Global University Rankings has used indicators focusing
on the opportunities that universities provide to students (Youth Incorporated, 2015). The
Nature Index, which emerged in 2013, ranks institutions by article output calculated in three different ways: the number of articles, the fractional number of articles, and the weighted
fractional number of articles. They update their ranking lists on a monthly basis (A guide
to the Nature Index, 2017; Nature Index, 2017). Mapping Scientific Excellence
(www.excellencemapping.net) emerged in 2013 and has a very different approach
than the abovementioned rankings. It is a web application that lists universities and
research-focused institutions in specific areas according to scientific performance with
visual representation. Mapping Scientific Excellence does not create ranking lists on an
annual basis. They create a new version by updating current rankings according to
five-year periods (2005–2009, 2006–2010, 2007–2011 and 2008–2012) (Bornmann et al., 2014,
p. 28; Whitcroft, 2013). U-Multirank, which emerged in 2014 and is funded by the European
Union, has introduced a multidimensional and user-focused approach to rank universities
internationally by categorizing them into one of five different groups from very goodto
weakaccording to selected indicators (Butler, 2010; U-Multirank, 2017a, b; Van Vught and
Ziegele, 2012). US News and World Report has begun to rank universities internationally