Pragmatic issues in calculating and comparing the quantity and quality of research through rating and ranking of researchers based on peer reviews and bibliometric indicators from Web of Science, Scopus and Google Scholar

DOI: https://doi.org/10.1108/14684521011099432
Pages: 972-982
Published date: 30 November 2010
Author: Péter Jacsó
Subject matter: Information & knowledge management, Library & information science
SAVVY SEARCHING
Péter Jacsó
University of Hawaii at Mānoa, Honolulu, Hawaii, USA
Abstract
Purpose – The purpose of this paper is to analyse the findings of two recently published papers
(Norris and Oppenheim, 2003; Li et al., 2010).
Design/methodology/approach – The findings were analysed from the practitioner’s perspective,
focusing on the procedures involved in calculating the indicator values, ranks and ratings. This was
done in the spirit of playing devil’s advocate, contemplating the reservations and arguments
of those who are reluctant to use metrics based on database searches.
Findings – One advantage of this project is that its results can be compared, at least partially, with the
findings of the three earlier Research Assessment Exercises (RAEs), although the grade classes have changed,
as well as with some of the other ranking lists in library and information management areas.
Originality/value – Very importantly, the authors concluded that “it would be premature in the
extreme to suggest that citation-based indicators could be used as a cost-effective alternative to expert
judgments”. This is a strong, realistic and fair statement. Even so, this recent project’s results are
very valuable in spite of the problems mentioned.
Keywords: Research, Peer review, Quality indicators
Paper type: Research paper
Introduction
It is very clear that more and more administrators at the institutional and government
level will want to benefit from the widely claimed, often over-emphasised advantages
and the unrealistic simplicity of using computer-generated bibliometric, scientometric and
informetric indicators in decisions related to tenure, promotion and grant applications,
or for producing league tables of competing journals, departments, institutions and
countries ranked by scholarly publishing productivity and impact. Librarians and other
information professionals in academic and special libraries, as well as research and teaching
faculty at universities offering master’s and doctoral degrees, will be required to produce
mountains of indicators to evaluate the quality of their research well before they
recover from the potentially useful but very time-consuming processes involved in the
much-in-vogue strategic planning cycles.
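To make the kind of computer-generated indicator discussed above concrete, the sketch below shows how one widely used bibliometric measure, the h-index, is typically derived from a researcher's per-paper citation counts as retrieved from a database such as Web of Science, Scopus or Google Scholar. This is an illustrative example, not a procedure taken from the papers under review; the function name and sample data are hypothetical.

```python
def h_index(citations):
    """Return the largest h such that at least h papers each have >= h citations.

    `citations` is a list of per-paper citation counts for one researcher,
    as they might be exported from a citation database search.
    """
    counts = sorted(citations, reverse=True)  # rank papers by citations, highest first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # the paper at this rank still has enough citations
            h = rank
        else:
            break  # counts only decrease from here, so h cannot grow further
    return h


# Hypothetical citation counts for one researcher's five papers:
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```

Note that the same calculation applied to the same researcher can yield different values depending on which database supplies the citation counts, which is precisely one of the pragmatic issues the paper raises.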
The current issue and full text archive of this journal are available at
www.emeraldinsight.com/1468-4527.htm

Online Information Review, Vol. 34 No. 6, 2010, pp. 972-982
© Emerald Group Publishing Limited, ISSN 1468-4527
DOI 10.1108/14684521011099432
