The eVALUEd toolkit: a framework for the qualitative evaluation of electronic information services

Sarah McNicol
The author
Sarah McNicol is Research Fellow, evidence base, research and evaluation, UCE Library Service, University of Central England, Birmingham, UK.
Keywords
Qualitative research, Project evaluation, Library users
Abstract
Purpose. To provide an overview of the approach to EIS evaluation taken by the eVALUEd toolkit and to relate this to other work in this area.
Design/methodology/approach. The eVALUEd toolkit was designed to fill a gap in EIS evaluation in relation to qualitative techniques, user-focused evaluation and the utilisation of evaluation findings.
Findings. The eVALUEd toolkit makes a distinct contribution to EIS evaluation through its focus on people rather than resources or technology, its emphasis on qualitative methods and its promotion of all aspects of the evaluation cycle.
Research limitations/implications. Further work is required in relation to mixed methods of EIS evaluation, and case studies would provide greater insight into the ways in which such data can be used in practice.
Practical implications. Greater consideration should be given to the use of qualitative methods of EIS evaluation, as statistical data alone are rarely sufficient for investigating complex problems and for planning and managing services. However, there is a need for further guidance and training in this area.
Originality/value. Aimed at library practitioners, researchers and others who provide support with evaluation. Reports on a practical tool and offers a balance to work focused on quantitative evaluation methods.
Introduction
There have, undeniably, been significant changes in academic libraries in recent years. Just 20 years ago, students would almost always have to visit the library in person and search the card catalogue to locate mainly paper-based resources, which would then be issued using a brown card system. Today, online catalogues, self-issue, digitised special collections and remote access to electronic journals and other resources are commonplace. Broadly, then, alongside an increasing emphasis on electronic information services (EIS), there have been significant moves towards more responsive services and a more user-focused approach. But has library evaluation changed in line with these developments?
Traditional library evaluation is easily criticised
for its “if it moves, count it” approach and, sadly,
counts of users, issues and books, etc., still tend to
form the basis of many library evaluations. This is
slowly changing, with moves towards the use of
more qualitative data, such as user satisfaction and
perceptions. However, a great deal of effort is still
being devoted to what can, essentially, be
described as “counting”. MIEL2 (Brophy and
Wynne, 1997) and e-measures (E-Libraries, 2004)
are among the many projects in this area and there
are, of course, numerous local examples in
addition to large-scale initiatives such as these.
While, of course, there is a place for quantitative
data, measures such as the number of electronic
journals the library subscribes to and the number
of hits on the library web site represent only a very
small element of the total picture of the use of
electronic services in academic libraries.
For this reason, the eVALUEd toolkit, which is
the result of a three-year project funded by
HEFCE, attempts to take a different approach to
that adopted by the majority of existing models
and guidelines relating to both EIS and general
library evaluation. It is different in three important
ways. First, it places greater emphasis on
qualitative techniques such as survey, interview
and focus group questions: it could be argued that
collection of statistical data is already fairly
comprehensively covered by other projects.
Second, the focus is on people, rather than
resources: the main forms of data collection
involve consultation with users and staff to
determine their impressions, opinions and use of
electronic services. Finally, it is significant that the
eVALUEd toolkit provides support not simply
with the planning and data collection stages of
evaluation, but also with the often neglected stages
of data analysis and the practical use of findings to
improve library services.
VINE: The Journal of Information and Knowledge Management Systems
Volume 34 · Number 4 · 2004 · pp. 172-175
© Emerald Group Publishing Limited · ISSN 0305-5728
DOI 10.1108/03055720410570957
