A multidimensional analysis of Aslib proceedings – using everything but the impact factor

Date: 15 July 2014
DOI: https://doi.org/10.1108/AJIM-11-2013-0127
Pages: 358-380
Published date: 15 July 2014
Received: 29 November 2013; Revised: 15 February 2014; Accepted: 20 February 2014
Authors: Stefanie Haustein, Vincent Larivière
Subject matter: Library & information science; Information behaviour & retrieval
Acknowledgements: Vincent Larivière acknowledges funding from the Canada Research Chair program.
Stefanie Haustein and Vincent Larivière
École de bibliothéconomie et des sciences de l'information, Université de Montréal, Montréal, Canada
Abstract
Purpose – The purpose of this paper is to show that the journal impact factor (IF) is not able to reflect the full impact of scholarly journals, and to provide an overview of alternative and complementary methods in journal evaluation.
Design/methodology/approach – Aslib Proceedings (AP) is analyzed as an example with a set of indicators from five dimensions of journal evaluation, i.e. journal output, content, perception and usage, citations and management, to accurately reflect its various strengths and weaknesses beyond the IF.
Findings – AP has become more international in terms of authors and more diverse regarding its topics. Citation impact is generally low and, with the exception of a special issue on blogs, remains below world average. However, an evaluation of downloads and Mendeley readers reveals that the journal is an important source of information for professionals and students, and that certain topics are frequently read but not cited.
Research limitations/implications – The study is limited to one journal.
Practical implications – An overview of various indicators and methods is provided that can be applied in the quantitative evaluation of scholarly journals (and also to articles, authors and institutions).
Originality/value – After a publication history of more than 60 years, this analysis takes stock of AP,
highlighting strengths, weaknesses and developments over time. The case study provides an
example and overview of the possibilities of multidimensional journal evaluation.
Keywords Citation analysis, Scholarly communication, Usage statistics, Impact factor,
Journal evaluation, Mendeley
Paper type Case study
1. Introduction
After a publication history of almost 65 years and a recent title change to Aslib Journal
of Information Management, it seems timely to take stock of AP by analyzing it with
the help of bibliometric methods. When it comes to the evaluation of scholarly journals, the impact factor (IF), developed in the early 1960s by Eugene Garfield and Irving Sher at the Institute for Scientific Information as a selection criterion for journals to be covered by their famous Science Citation Index (SCI) (Garfield and Sher, 1963), is usually
brought up. However, the IF is well known for its flaws, including its short citation and publication windows, the asymmetry of document types between numerator and denominator, its inability to represent skewed distributions and its field dependence (Archambault and Larivière, 2009; Haustein, 2012), not to mention its misuse for author evaluation. Thus this study will apply a multidimensional approach to journal evaluation, showing that the IF is not able to fully represent the impact of a journal and that citation-based indicators tell only part of the story (Rowlands and Nicholas, 2007). Based on the concept introduced by Grazia Colonia (2002) and
Juchem et al. (2006) and further developed by Haustein (2012), Aslib Proceedings (AP) is analyzed with various indicators from five dimensions of journal evaluation, namely, journal output, content, perception and usage, citations and management. AP is used as an example to show that multidimensional analyses will reveal strengths and weaknesses of a journal from various perspectives. We demonstrate differences between author and reader communities, developments of the journal and subjects over time and show that although some topics are not frequently cited, they have high impact based on the number of downloads.
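The criticisms listed above refer to the conventional two-year IF, which, in its standard form (a general formulation, not one specific to this study), can be written as

\[
\mathrm{IF}_y = \frac{c_y(y-1) + c_y(y-2)}{p_{y-1} + p_{y-2}},
\]

where \(c_y(y-i)\) is the number of citations received in year \(y\) by all items the journal published in year \(y-i\), and \(p_{y-i}\) is the number of "citable items" (articles and reviews) published in year \(y-i\). Written out this way, the asymmetry becomes visible: the numerator counts citations to all document types, while the denominator is restricted to citable items; the two-year window likewise underlies the criticism of short citation and publication windows.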
Four stakeholders, each with different needs in the process of evaluating and ranking
scholarly journals, can be identified; namely, readers selecting a source of information,
authors choosing a publication venue, librarians managing a collection, and editors
and publishers evaluating their own and competing periodicals (Garfield, 1972; Todorov and Glänzel, 1988; Moed et al., 1998; Nisonger, 1999; Glänzel and Moed, 2002). Depending on
the stakeholder’s requirements, one indicator might be more suitable than another and the
same set of periodicals might even be ranked differently from the readers’ and authors’
perspectives (Mabe and Amin, 2002; Rousseau, 2002; Haustein, 2012).
The evaluation of scholarly journals based on citations is not new. It goes back to
Gross and Gross (1927) and Bradford (1934), who introduced reference-based analyses
to improve local library collection management. With the SCI, citation-based journal evaluation was applied on a much larger scale (Garfield, 1955). Originally developed as a tool to help select journals to be included in the SCI database, the IF soon became the first widely applied bibliometric indicator and a cure-all metric powerful enough
to influence scientific communication. Today the IF is both used and misused by
scholarly authors, readers, journal editors, publishers and research policy makers
alike (Adam, 2002; The PLoS Medicine editors, 2006; Haustein, 2012). As a citation rate
it captures, however, only one small aspect of the standing of a scientific journal
(Rowlands and Nicholas, 2007). Since scholarly journals are influenced by many
different factors, their evaluation should also be multifaceted (Juchem et al., 2006;
Haustein, 2012). Along these lines, the San Francisco Declaration on Research Assessment (DORA, 2013) urged publishers to “reduce emphasis on the journal IF as a promotional tool […] by presenting the metric in the context of a variety of journal-based metrics […] that provide a richer view of journal performance.”
Various authors have addressed the need for a multidimensional approach and
emphasized that a complex concept such as scholarly impact in general and journal
impact in particular cannot be reflected by an average citation rate (Glänzel and Moed, 2002; Rousseau, 2002; Van Leeuwen and Moed, 2002; Moed, 2005; Coleman, 2007).
To preserve the multidimensionality of journal impact, information should not be
conflated into one ranking but rather a battery of indicators should be used to represent
journal impact adequately. In the following, AP will be analyzed multidimensionally, providing an overview of the journal’s performance over the years. The methods section
provides an overview of the data sets and indicators used for each of the five dimensions,
namely, journal output, content, perception and usage, impact and management
(Haustein, 2012). After that, combined results are presented and discussed.
2. Methods
In the following, the methods and selected indicators used to analyze journal output,
content, perception and usage, citations and management are briefly introduced and
described. For a detailed literature review and extensive description of indicators
and methods, the reader is referred to Haustein (2012).
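To make the idea of a battery of indicators across these five dimensions more concrete, the following short Python sketch groups indicator values by dimension and reports them as a profile rather than collapsing them into a single composite score. It is an illustration only: the indicator names are typical examples and the values are invented placeholders, not data or code from this study.

# Illustrative sketch: organize journal indicators by the five dimensions of
# journal evaluation named in the text (output, content, perception and usage,
# citations, management). All values below are invented placeholders.

def journal_profile(indicators_by_dimension):
    """Render a multidimensional profile instead of a single composite score."""
    lines = []
    for dimension, indicators in indicators_by_dimension.items():
        lines.append(dimension)
        for name, value in indicators.items():
            lines.append(f"  {name}: {value}")
    return "\n".join(lines)

# Hypothetical example for one journal and one year (placeholder values).
example = {
    "Output": {"papers published": 45, "share of foreign authors": 0.38},
    "Content": {"distinct author keywords": 120},
    "Perception and usage": {"full-text downloads": 18000, "Mendeley readers per paper": 4.2},
    "Citations": {"citations per paper (3-year window)": 1.1},
    "Management": {"median days from submission to acceptance": 90},
}

print(journal_profile(example))

Reporting the dimensions side by side in this way preserves the multidimensionality argued for above, whereas any weighting scheme that merges them into one number would reintroduce the problems of a single ranking.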