Are Mendeley reader counts high enough for research evaluations when articles are published?

DOI: https://doi.org/10.1108/AJIM-01-2017-0028
Pages: 174-183
Published date: 20 March 2017
Author: Mike Thelwall
Subject matter: Library & information science; Information behaviour & retrieval; Information & knowledge management; Information management & governance; Information management
Mike Thelwall
School of Mathematics and Computing, University of Wolverhampton,
Wolverhampton, UK
Abstract
Purpose – Mendeley reader counts have been proposed as early indicators for the impact of academic publications. The purpose of this paper is to assess whether there are enough Mendeley readers for research evaluation purposes during the month when an article is first published.
Design/methodology/approach – Average Mendeley reader counts were compared to the average Scopus citation counts for 104,520 articles from ten disciplines during the second half of 2016.
Findings – Articles attracted, on average, between 0.1 and 0.8 Mendeley readers per article in the month in which they first appeared in Scopus. This is about ten times more than the average Scopus citation count.
Research limitations/implications – Other disciplines may use Mendeley more or less than the ten investigated here. The results are dependent on Scopus's indexing practices, and Mendeley reader counts can be manipulated and have national and seniority biases.
Practical implications – Mendeley reader counts during the month of publication are more powerful than Scopus citations for comparing the average impacts of groups of documents but are not high enough to differentiate between the impacts of typical individual articles.
Originality/value – This is the first multi-disciplinary and systematic analysis of Mendeley reader counts from the publication month of an article.
Keywords – Mendeley, Bibliometrics, Citation analysis, Altmetrics, Early impact, Mendeley readers
Paper type – Research paper
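As an illustration of the comparison described in the design above, the following sketch computes the average Mendeley reader count and the average Scopus citation count per discipline. The record structure, field names and values are hypothetical, not data or code from the study.

from collections import defaultdict

# Hypothetical per-article records: (discipline, Mendeley readers, Scopus citations),
# counted in the month the article first appears in Scopus. Illustrative values only.
articles = [
    ("Medicine", 1, 0),
    ("Medicine", 0, 0),
    ("Chemistry", 2, 1),
    ("Chemistry", 0, 0),
    ("Computer Science", 1, 0),
]

readers = defaultdict(list)
citations = defaultdict(list)
for discipline, reader_count, citation_count in articles:
    readers[discipline].append(reader_count)
    citations[discipline].append(citation_count)

# Compare average early Mendeley readers with average early Scopus citations
# for each discipline, mirroring the basic shape of the study design.
for discipline in readers:
    mean_readers = sum(readers[discipline]) / len(readers[discipline])
    mean_citations = sum(citations[discipline]) / len(citations[discipline])
    print(f"{discipline}: {mean_readers:.2f} readers vs {mean_citations:.2f} citations per article")

In the study itself such counts were gathered for 104,520 Scopus articles from ten disciplines; the sketch only shows the per-discipline averaging step.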
Introduction
Academic research is evaluated for appointment, promotion and tenure decisions, for university league tables, for national research evaluation exercises and for self-reflection purposes. Some of
these use quantitative data or are supported by numerical evidence of impact. Citation
counts for refereed journal articles are a common source of this quantitative data, including
in the form of journal impact factors ( JIFs) and field normalised citation counts (Garfield,
2006; Waltman et al., 2011; Wilsdon et al., 2015). Citation counts are not suitable for helping
to evaluate new research because articles may take three years to attract a substantial
number of citations due to publication delays. For this reason, formal evaluations often use a
citation window of considerable length, such as three years (Wang, 2013), which excludes
newer articles from evaluations. This means that the most recent and, therefore, most
relevant research cannot be evaluated with the help of most citation-based indicators
because they cannot differentiate effectively between different levels of impact for
individual articles.
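As a point of reference, the field-normalised citation counts mentioned above are commonly computed along the following lines (one widely used formulation; exact variants differ between evaluation systems):

\mathrm{NCS}_i = \frac{c_i}{\mu_{f(i),\,y(i)}}, \qquad \mathrm{MNCS} = \frac{1}{n}\sum_{i=1}^{n}\mathrm{NCS}_i

where c_i is the citation count of article i and \mu_{f(i),y(i)} is the mean citation count of all articles published in the same field and year (cf. Waltman et al., 2011).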
Two solutions to this problem are to use publishing journal JIFs (or journal rankings:
Kulczycki, 2017) as a proxy for citation impact or to use web-based early impact indicators.
JIFs can avoid citing-article publication delays if it is accepted that the average impact of a
journal is an appropriate proxy for the impact of its articles (but see: Lozano et al., 2012; and
note also the time dimension: Larivière et al., 2008) and that JIFs are stable over time (which
is usually true: Thelwall and Fairclough, 2015). On this basis, say, the 2016 JIF of a journal
would be a reasonable indicator for the impact of articles published in that journal in 2016
even though the 2016 JIF calculations are based solely on the citations to articles
published in 2014 and 2015 (Garfield, 2006). A more fine-grained alternative is to exploit a
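For reference, the JIF calculation mentioned above follows the standard two-year definition (Garfield, 2006), so that, roughly:

\mathrm{JIF}_{2016} = \frac{\text{citations received in 2016 by items the journal published in 2014 and 2015}}{\text{number of citable items the journal published in 2014 and 2015}}

Because this calculation does not involve citations to the 2016 articles themselves, it is available for a journal as soon as an article appears in it.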