AT THE SHARP END
Journal ranking and the dreams of academics
John W. Lamp
School of Information Systems, Deakin University, Geelong, Australia
Abstract
Purpose – The purpose of this article is to review and comment on the Australian Government’s
entry into the journal ranking domain.
Design/methodology/approach – A review and reflection on the approach and potential impact of
the direction taken.
Findings – This project is arguably the largest of its type and the effects on academic publishing and
the survival of journals could be far reaching.
Originality/value – The article draws together current material on the Australian Government’s
activities and provides details of the scope of the journal ranking project.
Keywords Australia, Serials, Publishing
Paper type Viewpoint
There’s probably no topic within academia that generates such universal passion as
academic freedom. During the twentieth century and since, the greater role in funding
by governments, along with the catch cry of fiscal responsibility, has led to steadily
increasing demands on academics to justify and compete for the use of resources.
However, once the research is undertaken, academics can pick and choose their place of
publication. To be sure, there are preferred journals, some with higher prestige than
others, an issue to be considered when tenure or promotion is discussed. Other journals
might be smaller or less prestigious, but are icons within particular disciplines. Formal
journal rankings have probably been confined to individual faculties or disciplines,
rather than serving as a major metric.
Journal ranking is something that has probably been with us, in one form or
another, since there was more than one journal. Until the twentieth century, most of
these arguments took place in the context of some form of learned discussion by
eminent people. This led to the propounding of various views, the authority for which
boiled down to “I think ...” or, where more than one individual was involved,
“my friends and I think ...”.
In 1934, Bradford, motivated by a concern over the adequacy of coverage of topics
by abstracting services, published his investigations into statistical analyses which
could be used to determine the degree of coverage of a discipline by a particular journal
collection (Bradford, 1934). From this initial work Eugene Garfield developed the idea
of an impact factor in 1955 and began to use it in the 1960s to select journals for the
Science Citation Index (Garfield, 2005), now subsumed into Thomson ISI. Over the
years, much work has been done to understand this measurement and to enhance and
fine-tune it in many ways. Yet it has always perplexed me that a measurement
intended to reveal gaps in the coverage of a particular collection developed into a
reason for excluding material from consideration.
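For readers unfamiliar with the metric that grew out of Garfield’s work, the sketch below
illustrates the standard two-year journal impact factor: citations received in a given year
to items published in the two preceding years, divided by the number of citable items
published in those years. It is a generic illustration, not code from this article or from
Thomson ISI; the function names, data structures and figures are hypothetical.

    # Minimal sketch of the standard two-year journal impact factor.
    # Illustrative only; the names and numbers below are hypothetical.

    def two_year_impact_factor(citations_by_year, citable_items_by_year, year):
        """Impact factor for `year`: citations received in `year` to items
        published in the two prior years, divided by the number of citable
        items published in those two years."""
        prior_years = (year - 1, year - 2)
        citations = sum(citations_by_year[year].get(y, 0) for y in prior_years)
        citable_items = sum(citable_items_by_year.get(y, 0) for y in prior_years)
        if citable_items == 0:
            return 0.0
        return citations / citable_items

    # Example: a journal cited 210 times in 2009 to its 2007-2008 papers,
    # having published 150 citable items across those two years -> IF = 1.4.
    citations_by_year = {2009: {2008: 120, 2007: 90}}
    citable_items_by_year = {2008: 80, 2007: 70}
    print(two_year_impact_factor(citations_by_year, citable_items_by_year, 2009))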
Online Information Review
Vol. 33 No. 4, 2009
pp. 827-830
© Emerald Group Publishing Limited
ISSN 1468-4527
DOI 10.1108/14684520910985747
