Expert judgments versus publication-based metrics: do the two methods produce identical results in measuring academic reputation?

DOI: https://doi.org/10.1108/JD-02-2022-0039
Published: 16 May 2022
Pages: 127-143
Katerina Guba and Angelika Tsivinskaya
European University at St Petersburg, Sankt-Peterburg, Russian Federation
Abstract
Purpose – This study aims to assess the validity of citation metrics against a representative disciplinary survey.
Design/methodology/approach – The project compared citation rankings for individual scientists with expert judgments collected through a survey of 818 Russian sociologists. The Russian Index of Science Citation was used to construct the general population of 3,689 Russian sociologists, to whom the survey was sent by email. Regression analyses of bibliometric indicators and peer-review scores were undertaken for the 723 scholars named in the survey.
Findings – The findings suggest that scientometric indicators predict with significant accuracy both the most influential sociologists and those scholars who are not mentioned at all, while they are less relevant for predicting names that received moderate attention in the survey.
Originality/value – This study contributes to research on the validity of citation metrics by examining scientometric indicators that are not limited to traditional metrics but include non-standard publication metrics and indicators of potential metric abuse. In addition, the study draws on a national bibliometric data source, which is especially important for non-Western higher education systems that are underrepresented in the Web of Science and Scopus.
Keywords Citation metrics, Academic reputation, Validity of citation rankings, Expert judgements
Paper type Research paper
Introduction
The interest in citation rankings was amplified by the relatively recent expansion of quantitative
indicators for research assessment of individual academics, departments and disciplines. Many
countries have launched performance-based research funding systems, in which citation metrics
are integral (Hicks, 2012). In addition to the use of scientometrics at the national level, the
phenomenon of "citizen bibliometrics" has come into existence: citation metrics are used by
groups outside professional contexts, including scientists and administrators in everyday
evaluation of individuals (Hammarfelt and Rushforth, 2017). The reliance on quantitative
indicators of research productivity represents perhaps the most important recent transformation
of academic institutions (Auranen and Nieminen, 2010; Espeland and Sauder, 2016; Muller, 2018).
However, researchers question whether citations can be used to accurately assess the
intellectual contribution of a scientist; citations are not synonymous with quality, originality or
a high level of performance (Bornmann and Daniel, 2008; Aksnes et al., 2019; Tahamtan and
Bornmann, 2019). The issue of the validity of citation rankings has persisted on policy agendas,
as empirical attempts to demonstrate whether citation data constitute a valid means to measure
scientific performance have been growing.
During recent decades, researchers have analyzed how citation metrics correlate to peer
judgments with the assumption that correspondence indicates the validity of citation
indicators for evaluating research (So, 1998; Serenko and Dohan, 2011).

The paper has been prepared with the support of the Russian Science Foundation, No. 21-18-00519.

Received 12 February 2022; Revised 9 April 2022; Accepted 14 April 2022
Journal of Documentation, Vol. 79 No. 1, 2023, pp. 127-143
© Emerald Publishing Limited, 0022-0418
DOI 10.1108/JD-02-2022-0039

Researchers conduct
academic reputation surveys, in which specialists in a discipline evaluate journals,
institutions, individuals or texts. Disciplinary surveys are considered a reliable approach
to measuring academic quality, in terms of recognition or impact, directly rather than by
proxy through citations. These surveys do not discriminate against book authors, and
opportunities for manipulation during a survey are limited. Thus, if citation metrics correlate with survey
results, we obtain evidence that publication-based metrics can be applied to measure
scientific impact.
A review of various studies on the validity of citation metrics demonstrates that scant
research has been conducted examining their validity for individual scholars. While this can
be explained by the fact that individual scholars are seldom evaluated in the course of
national evaluations (So, 1998), they are nevertheless assessed in the contexts of faculty
hiring, tenure or promotion. Establishing validity using survey data is also less common due
to the complexity of organizing a large-scale disciplinary survey. Our research addresses
the classic problem of assessing the validity of citation metrics by analyzing the
correspondence of various bibliometric indicators for individual scientists to expert
judgments collected through a survey of 3,689 Russian sociologists (818 responded). In
contrast to citation rankings, which have known flaws in the social sciences, reputation surveys
are designed to be free of certain limitations. The surveys can measure reputation and scientific
contributions in their purest form, rather than through such proxies as citations. In our analysis,
we employ regressions to assess the roles of standard and novel indicators.
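The core of this approach — regressing expert scores on bibliometric indicators and asking how much variance the indicators explain — can be sketched with ordinary least squares. The variable names and simulated data below are illustrative assumptions, not the authors' actual model, indicators or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of scholars

# Illustrative bibliometric indicators (assumed, not the study's real data).
citations = rng.poisson(40, n).astype(float)
h_index = rng.integers(1, 20, n).astype(float)

# Simulated outcome: survey mentions loosely driven by the indicators plus noise.
mentions = 0.05 * citations + 0.4 * h_index + rng.normal(0.0, 1.0, n)

# Ordinary least squares: find beta minimizing ||X @ beta - mentions||.
X = np.column_stack([np.ones(n), citations, h_index])
beta, *_ = np.linalg.lstsq(X, mentions, rcond=None)

# R^2: share of variance in expert judgments explained by the indicators.
pred = X @ beta
r2 = 1 - np.sum((mentions - pred) ** 2) / np.sum((mentions - mentions.mean()) ** 2)
print(f"coefficients: {np.round(beta, 3)}, R^2: {r2:.2f}")
```

In the study's terms, a high R² would indicate that publication-based metrics track the survey-based reputation measure well; a low R² would suggest the indicators miss what experts reward.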
In this study, Russian sociology serves as an extreme case. First, we chose sociology because
of its middle position in the spectrum of disciplines appropriate for bibliometrics, due to
publishing norms (van Raan, 1998). Quantitative indicators have drawn widespread criticism,
especially from social scientists, including sociologists, who maintain that such indicators are
unable to measure research merit in their fields (Glaeser, 2004; Najman and Hewitt, 2003).
Researchers argue that standard citation metrics fail to capture the complex and
multidimensional phenomenon of academic reputation. We suggest that it is worth
exploring whether not only standard bibliometric indicators but also novel ones are appropriate for
measuring intellectual contribution in the social sciences. Second, severe flaws in academic
integrity in Russia are publicly acknowledged and widely discussed throughout the academic
community. The concerns regarding academic misconduct have resurfaced recently, with
Russia's aggressive attempts to increase its share of global research output to 2.44% and create
world-class universities throughout the country (Moed et al., 2018). Russia is a prominent
illustration of the prediction that the wide use of citation metrics as an administrative tool results
in accusations that these metrics can be easily manipulated, as such evaluation systems are not
protected from gaming (Rijcke et al., 2016). Russian universities have faced difficulties in
increasing their numbers of international publications; these challenges have led the institutions
to actively employ questionable strategies, including publishing in predatory journals (Guskov
et al., 2018). Thus, if even the case of Russian sociology, with its complicated history of metrics
abuse, demonstrates a correspondence between citation indicators and peer judgments, we
obtain further empirical evidence on whether bibliometrics provide valuable grounds to assess
some aspects of research quality.
In this study, we have been able to overcome three serious methodological limitations
evident in the previous studies. First, we organized the representative nationwide survey,
with an adequate response rate. Second, in gathering bibliometric indicators, we rely on
national sources of bibliometric data that are especially important for non-Western higher
education systems, because the research produced in these systems is not always indexed by
the Web of Science or Scopus (Mosbah-Natanson and Gingras, 2014). The Russian Index of
Science Citation is known for wide coverage of not only journal sources, but also various
kinds of academic literature, including books and their chapters; dissertations and policy