Scholarly book publishers’ ratings and lists in Finland and Spain. Comparison and assessment of the evaluative potential of merged lists

Aslib Journal of Information Management, Vol. 70 No. 6, 2018, pp. 643-659
DOI: https://doi.org/10.1108/AJIM-05-2018-0111
Published: 19 November 2018 (received 18 May 2018; revised 28 August 2018 and 21 September 2018; accepted 2 October 2018)
© Emerald Publishing Limited, ISSN 2050-3806
Subject matter: Library & information science; Information behaviour & retrieval; Information & knowledge management; Information management & governance; Information management
Jorge Mañana Rodriguez, CSIC, Madrid, Spain
Janne Pölönen, Federation of Finnish Learned Societies, Helsinki, Finland
Abstract
Purpose: The purpose of this paper is twofold: first, to compare the lists of publishers in SPI (Spain) and the lists of VIRTA (Finland), in order to determine some of the potential uses of a merged list, such as complementing each other; and, second, to assess the effects of cross-field variability in the SPI rankings on the potential uses identified in the previous objective.
Design/methodology/approach: VIRTA and SPI lists were matched and compared in terms of level and number of submissions (VIRTA) and prestige (SPI).
Findings: There is a set of international publishers common to both information systems, but most publishers are nationally oriented. This type of publisher is still highly relevant for scholars. Consequently, a merger of national lists would provide useful information for all stakeholders involved, in terms of grounding information for the rating of foreign, non-international publishers. Nevertheless, several issues should be considered in an eventual merging process, such as the decisions related to the use of field-specific rankings or general rankings.
Practical implications: If merged, ratings ought to be kept separate. Ratings of national publishers can be imputed into other systems' evaluation processes, thus making the merging process potentially useful.
Originality/value: This research explores obstacles and opportunities for merging scholarly publishers' lists from an empirical perspective. It provides groundwork for future efforts toward supra-national combinations of publishers' lists.
Keywords: Books, Research evaluation, SPI, Lists merge, Scholarly publishers, VIRTA
Paper type: Research paper
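The design statement above involves matching publisher records from two national lists. As a purely illustrative sketch, and not the matching procedure actually used in this study, the following Python fragment pairs records by a normalized publisher name; the field names, normalization rules and sample values are invented placeholders.

```python
# Illustrative sketch only: minimal name-based matching of two publisher lists.
# Field names ("publisher", "prestige_rank", "jufo_level", "submissions") and all
# values are hypothetical placeholders, not data from SPI or VIRTA.

import re
import unicodedata

def normalize(name: str) -> str:
    """Lower-case, strip accents and punctuation, drop common publisher suffixes."""
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    name = re.sub(r"[^a-z0-9 ]", " ", name.lower())
    name = re.sub(r"\b(press|publishers?|editorial|verlag)\b", " ", name)
    return re.sub(r"\s+", " ", name).strip()

def match_lists(spi: list[dict], virta: list[dict]) -> list[tuple[dict, dict]]:
    """Pair SPI and VIRTA records whose normalized publisher names coincide."""
    index = {normalize(rec["publisher"]): rec for rec in virta}
    return [(rec, index[normalize(rec["publisher"])])
            for rec in spi if normalize(rec["publisher"]) in index]

# Invented example records: one publisher common to both lists, two national ones.
spi_list = [{"publisher": "Routledge", "prestige_rank": 5},
            {"publisher": "Marcial Pons", "prestige_rank": 11}]
virta_list = [{"publisher": "Routledge", "jufo_level": 2, "submissions": 143},
              {"publisher": "Gaudeamus", "jufo_level": 1, "submissions": 57}]

for spi_rec, virta_rec in match_lists(spi_list, virta_list):
    print(spi_rec["publisher"], spi_rec["prestige_rank"], virta_rec["jufo_level"])
```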
Introduction
Scholarly books tend to be an important product of research carried out in the humanities, but also often in the social sciences (Giménez-Toledo et al., 2016). This implies that it is necessary to take scholarly books into account in the evaluation process. This is done in different forms in several European countries: since the evaluation of scholarly books is embedded in a wider evaluation system with specific objectives, usually set at the national level, the models vary considerably among countries in consonance with those objectives and other factors (Kulczycki et al., 2018). The evaluation of individual books through extensive, detailed reading of the contents of each title under evaluation by a panel of experts is costly and time consuming (e.g. in the case of the UK's REF 2014; Rosenberg, 2015). Therefore, in several evaluation systems and funding schemes, whether based on quantitative or qualitative methods, the publisher is used as a proxy for the quality of the book.
The procedures for the evaluation of a publishing house differ considerably. While citation analysis has limited application in the case of book publishers (Zuccala et al., 2015), their evaluation in national-level systems is usually based on surveys or expert opinion. In Spain, the product Scholarly Publishers Indicators (SPI) (SPI, 2018) contains, among other developments, rankings of publishers according to their prestige as perceived by scholars
in sixteen social sciences and humanities (SSH) fields. These rankings summarize the
results of a survey in which around 2,700 scholars provided, as a response, a list of (up to)
the ten most prestigious publishers in their respective fields. In the countries (Norway,
Finland, Denmark and Flanders) following the so-called Norwegian model (Sivertsen,
2016; Sivertsen and Larsen, 2012), publication channels are classified by expert panels specialized in different fields into a number of levels, reflecting the distinction between peer-reviewed and non-peer-reviewed outlets as well as different degrees of quality (both for journals and for scholarly book publishers). In Finland, the JUFO
classification of publication channels (Auranen and Pölönen, 2012) is produced by 23
evaluation panels whose main task is the evaluation of the publication channels in order to
provide them with a level rating.
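As a deliberately simplified illustration of how top-10 survey responses of the SPI kind could be aggregated into field-specific rankings, the sketch below counts how often each publisher is mentioned by respondents in each field. This is not the actual SPI computation, and the response data are invented.

```python
# Simplified sketch, not the SPI methodology: rank publishers per field by how
# many respondents listed them among their (up to) ten most prestigious
# publishers. The responses below are invented for illustration.

from collections import Counter, defaultdict

# Each response: (field, publishers named by one scholar).
responses = [
    ("History", ["Cambridge University Press", "Routledge", "Marcial Pons"]),
    ("History", ["Cambridge University Press", "Brill"]),
    ("Philosophy", ["Oxford University Press", "Routledge"]),
]

mentions: dict[str, Counter] = defaultdict(Counter)
for field, publishers in responses:
    mentions[field].update(set(publishers))   # one vote per respondent per publisher

for field, counts in mentions.items():
    print(field, counts.most_common())        # field-specific ranking by mentions
```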
A relevant difference between the two systems is the fact that in Spain the information concerning scholarly publishers is used for evaluations at the level of individual researchers, while in the case of Finland and, in general, the countries following the Norwegian model, it is used in a performance-based research funding system (PRFS) for higher education institutions (Hicks, 2012). In Spain (ANECA, 2008), the information on publishers is supplemented with further review of the individual titles by expert panels in the context of the applicant's CV, while in the Norwegian model the funding scheme is based solely on a bibliometric indicator supported by the publication channel rating.
The existence of a national-level information system in Finland, and in the other countries using
the Norwegian model, is another important difference, implying that there is comprehensive
publication data in their case but not in the case of Spain. This also leads to different designs
for the composition of the book publisher lists in the two national systems. SPI covers only those Spanish and international publishers that researchers participating in the survey have included in their top-10 lists. Publication Forum covers all national and international
publishers actually used as outlets by Finnish researchers, as well as other international
publishers considered relevant from the Finnish research perspective.
Another relevant difference between the two databases is the use of field-specific rankings in the case of SPI. In the case of Finland, the rating is understood to reflect the average quality and prestige of publishers from the perspective of all fields, while in Spain the publisher prestige is field specific. Since the evaluation in the countries following the Norwegian model is carried out for PRFS at the institutional level, there is no differentiation of the quality of publication channels in different fields: each publication channel is taken into account with a single quality level, from which a fixed number of points is derived in the evaluation process (combining quality level and publication type) regardless of the field of the publication. In the case of SPI, however, where the evaluation takes place at the individual level, there are field-specific classifications for the same publishers and a publisher can have different values
of perceived prestige in the different fields. The differences in the prestige of a publisher
across the different fields come from the variability of the responses from specialists in
different fields so that, for example, the same publisher can be considered highly prestigious
by specialists in history but not prestigious by specialists in philosophy.
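The contrast between the two designs can be made concrete with a small sketch: in a Norwegian-model PRFS a single (publication type, quality level) pair determines the points a publication earns in every field, whereas an SPI-style rating is additionally keyed by field. The numeric weights and prestige labels below are hypothetical placeholders, not the official coefficients or ratings of either system.

```python
# Hypothetical sketch of the two rating designs; all weights and ratings are
# invented placeholders, not official values from Finland or Spain.

# Norwegian-model style: one level per channel, points from (type, level) only.
channel_level = {"Routledge": 2, "Gaudeamus": 1}            # single level per publisher
points_table = {("monograph", 1): 4.0, ("monograph", 2): 8.0,
                ("article", 1): 1.0, ("article", 2): 3.0}    # placeholder weights

def prfs_points(publisher: str, pub_type: str) -> float:
    """Fixed points for a publication, regardless of its field."""
    return points_table[(pub_type, channel_level[publisher])]

# SPI-style: the same publisher can carry different prestige in different fields.
spi_prestige = {("Routledge", "History"): "high",
                ("Routledge", "Philosophy"): "medium"}       # field-specific ratings

print(prfs_points("Routledge", "monograph"))                 # same points in every field
print(spi_prestige[("Routledge", "History")],
      spi_prestige[("Routledge", "Philosophy")])             # differs by field
```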
State of the art in merged lists and rationale for the study
In recent years, several initiatives have been developed in the direction of a future integration of European research databases. As pointed out in Puuska et al. (2018, p. 1):

An Expert Group on Assessment of University-Based Research recommended in a report to the European Commission that it should "Invest in developing a shared information infrastructure for relevant data to be collected, maintained, analyzed, and disseminated across the European Union" (European Commission, 2010). A report to the European Parliamentary Research Service (2014), Comparing Research in So, recommends development of a European integrated research information system inter-connecting the existing national research information systems.