The construct validity of the h-index

Pages: 878-895
Published: 12 September 2016
DOI: https://doi.org/10.1108/JD-10-2015-0127
Author: Cameron Stewart Barnes
Cameron Stewart Barnes
University of New England, Armidale, Australia
Abstract
Purpose – The purpose of this paper is to show how bibliometrics would benefit from a stronger programme of construct validity.
Design/methodology/approach – The value of the construct validity concept is demonstrated by applying this approach to the evaluation of the h-index, a widely used metric.
Findings – The paper demonstrates that the h-index comprehensively fails any test of construct validity. In simple terms, the metric does not measure what it purports to measure. This conclusion suggests that the current popularity of the h-index as a topic for bibliometric research represents wasted effort, which might have been avoided if researchers had adopted the approach suggested in this paper.
Research limitations/implications – This study is based on the analysis of a single bibliometric concept.
Practical implications – The conclusion that the h-index fails any test in terms of construct validity implies that the widespread use of this metric within the higher education sector as a management tool represents poor practice, and almost certainly results in the misallocation of resources.
Social implications – This paper suggests that the current enthusiasm for the h-index within the higher education sector is misplaced. The implication is that universities, grant funding bodies and faculty administrators should abandon the use of the h-index as a management tool. Such a change would have a significant effect on current hiring, promotion and tenure practices within the sector, as well as current attitudes towards the measurement of academic performance.
Originality/value – The originality of the paper lies in the systematic application of the concept of construct validity to bibliometric enquiry.
Keywords Measurement, Impact, Bibliometrics, h-index, Construct validity, Hirsch index
Paper type Conceptual paper
Introduction
Bibliometrics aims to apply objective, scientific methods to the analysis of citations.
Its practitioners frequently express the hope that their discipline will eventually be
recognized as a "hard" social science. In this context, it is surprising that most
bibliometricians pay so little attention to issues of construct validity. This attitude is in
contrast with the practice elsewhere in the social sciences, where researchers place
great weight on construct validation when designing new measures. This paper
examines how bibliometrics might benefit from a stronger programme of construct
validity. As evidence, it illustrates the value of the construct validity approach in the
evaluation of a well-known and widely used metric: the h-index.
What is the h-index?
The h-index is a comparative measure of an individual's research impact proposed by the physicist Jorge Hirsch. According to Hirsch (2005):

A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np − h) papers have ≤ h citations each (p. 16569).
In layperson's terms, a researcher with ten published articles, each of which has received at least ten citations, has an h-index of 10. From the beginning, Hirsch (2005) intended his metric to be used as a decision-making tool, expressing the hope that it "may provide a useful yardstick to compare different individuals competing for the same resource when an important evaluation criterion is scientific achievement" (p. 16572).

Journal of Documentation, Vol. 72 No. 5, 2016, pp. 878-895. © Emerald Group Publishing Limited, ISSN 0022-0418. Received 10 October 2015; revised 2 April 2016; accepted 3 April 2016.
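Hirsch's verbal definition reduces to a short computation over a researcher's per-paper citation counts. The sketch below is ours, not Hirsch's (Python is used purely for illustration): rank the papers from most to least cited, and h is the last rank at which the citation count still meets or exceeds the rank.

```python
def h_index(citations):
    """Return the largest h such that h of the papers have at least
    h citations each (Hirsch, 2005)."""
    h = 0
    # Rank papers by citation count, most-cited first; once the count
    # falls below the rank it can never recover, so we can stop.
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# The layperson's example above: ten papers with at least ten citations
# each (plus two lesser-cited papers) give an h-index of 10.
print(h_index([10] * 10 + [3, 1]))  # prints 10
```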
The influence of the h-index on bibliometrics
The h-index has been an extremely popular topic of bibliometric research over the last
decade. Enthusiasm for the h-index among many bibliometricians is so great that some
observers have even gone so far as to divide the research field into a "pre- and post-Hirsch period" (Bartneck and Kokkelmans, 2011, p. 86). The h-index has been used in
hundreds of studies to measure the research output of individual scientists, research
groups, universities and even whole nations (e.g. Jacsó, 2009; Lazaridis, 2010; Prathap
and Gupta, 2009). The extension of the metric to topics such as the measurement of
journal impact (Schubert and Glänzel, 2007) was perhaps only a matter of time.
More striking has been the recent trend to apply the h-index to unexpected areas: from
the level of research interest in particular diseases and pathogens (McIntyre et al., 2011;
Sanni et al., 2013) to the popularity of YouTube channels (Hovden, 2013).
The h-index outside bibliometrics
The h-index is commonly used as a decision-making tool in universities across the globe.
There are claims that it has become "the most popular quantitative measure of a researcher's productivity and impact" (Penner et al., 2013, p. 8). The metric is frequently
employed to determine the success or failure of grant proposals, the outcome of
applications for promotion, fellowship or tenure, and even the level of government
funding for institutions (Barnes, 2014). Although this trend has generated unease
among some observers (Burrows, 2012), the consensus regarding the h-index seems to
be "Whether you or I like it or not, it is here to stay" (Schreiber, 2014, p. 9).
Despite its increasing influence, there have long been concerns that the h-index has "received little serious analysis" (Adler et al., 2008). With few exceptions, there has been
little effort to look more deeply at the metric and its construction. Efforts at validation
have been sparse, and have been largely restricted to attempts to show convergent
validity, the degree to which the h-index appears to correlate with other measures
(Bornmann and Daniel, 2009; Bornmann et al., 2008).
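To make concrete what such a convergent-validity check involves, the sketch below correlates h-index scores with raw citation totals across a handful of researchers. It is purely illustrative: the researcher labels and citation counts are invented, and the calculation is not drawn from the studies cited above. A high correlation is typically read as evidence of convergent validity.

```python
import math

def h_index(citations):
    """h-index: the largest h such that h papers have >= h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
    return h

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-paper citation counts for four hypothetical researchers.
records = {
    "researcher A": [12, 9, 7, 5, 2],
    "researcher B": [3, 2, 1],
    "researcher C": [25, 1, 1],
    "researcher D": [8, 8, 8, 8],
}
hs = [h_index(c) for c in records.values()]
totals = [sum(c) for c in records.values()]
r = pearson(hs, totals)  # correlation of h-index with total citations
```

Note how researcher C illustrates the gap between the two measures: one heavily cited paper yields a large citation total but an h-index of only 1, which is precisely why convergent correlations alone cannot settle what the h-index measures.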
The h-index zoo
The relative lack of interest in fundamental issues is most evident in the proliferation of
h-index variants. Like other citation-based measures of research impact, the h-index suffers
from a number of inherent limitations. Hirsch himself readily acknowledges this point
(Hirsch, 2005, 2007, 2010; Hirsch and Buela-Casal, 2014). These shortcomings include:
• a built-in bias against early career researchers (Kelly and Jennions, 2006);
• susceptibility to inflation through self-citation (Bartneck and Kokkelmans, 2011; Burrell, 2007; Schreiber, 2007; Zhivotovsky and Krutovsky, 2008);
• the absence of adjustments for multiple authorship (Burrell, 2007; Hirsch, 2010; Schreiber, 2008);
• the lack of any means of field-normalization (Alonso et al., 2009; Batista et al., 2006); and
• the dissimilar h-index scores for individuals generated by different citation databases (Jacsó, 2008).
