Finding “good enough” metrics for the UK’s research excellence framework
Published date | 13 April 2015 |
Pages | 265-269 |
DOI | https://doi.org/10.1108/OIR-01-2015-0021 |
Author | David Stuart |
Subject matter | Library & information science; Information behaviour & retrieval |
David Stuart
Centre for e-Research, King’s College London, London, UK
Abstract
Purpose – The purpose of this paper is to encourage discussion about the potential role of metrics in research assessment.
Design/methodology/approach – The paper considers the results of the UK’s 2014 Research Excellence Framework, and the potential of metrics to reduce the size of future exercises.
Findings – The paper suggests that a battery of non-robust metrics selected by the higher education institutions could support a greatly reduced peer-reviewed exercise.
Practical implications – Policy makers should reconsider the role of metrics in research assessment.
Originality/value – The paper provides an alternative vision of the role of metrics in research assessment, and will be of interest to policy makers, bibliometricians, and anyone interested in HEI research assessment.
Keywords Scientometrics, Research Excellence Framework, Research assessment,
Research metrics, Altmetrics
Paper type Viewpoint
This first Taming Metrics viewpoint conveniently coincides with the results of the UK’s
Research Excellence Framework (REF), a high point in the scientometric calendar.
As UK university marketing departments are busily cutting and splicing units of
assessment with increasingly refined geographic units so that each can claim their
areas of excellence, scientometricians of all types (bibliometric/webometric/altmetric)
will be poring over the results to determine how their own discipline could have
contributed to the process at a fraction of the cost.
Pressures on research funding have made research assessment an essential part of
research funding distribution. But as funding decreases and research assessment costs
increase – and seem likely to continue rising with the increased emphasis on, and future
refinement of, impact (Martin, 2011) – the appropriateness of the current system must be
questioned and alternatives explored. A light-touch metrics-informed approach was
initially suggested for the REF (HM Treasury, 2006), but unfortunately it did not come to
pass, as initial studies found the use of bibliometric indicators too limited, especially
away from the sciences.
Good enough might just be enough
The good news for bibliometricians, however, is that, as the cost of the REF increases,
so do the potential benefits from any new metric, and the search for more robust
indicators will continue as an ever-increasing variety of data sources becomes available
for investigation. As Li et al. (2010, p. 554) have noted, “The growth of interest in
citations has mirrored the growth in provision of databases that enable such
evaluations to take place”, and the sources of data available today are seemingly
endless. But perhaps the search for increasingly robust indicators is a fool’s errand, and
instead we should be focusing on a collection of those indicators that are “good enough”.
Online Information Review, Vol. 39 No. 2, 2015, pp. 265-269
© Emerald Group Publishing Limited, 1468-4527
DOI 10.1108/OIR-01-2015-0021
Received 15 January 2015; first revision approved 19 January 2015