DOI: 10.1108/00483481011017390
Published: 9 February 2010
Pages: 162-177
Are we doing the right thing?
Food for thought on training evaluation and its context
Antonio Giangreco
IESEG School of Management, Lille, France
Andrea Carugati
Aarhus School of Business, Aarhus University and IESEG School of Management,
Aarhus, Denmark, and
Antonio Sebastiano
Università Carlo Cattaneo LIUC and IESEG School of Management,
Castellanza, Italy
Abstract
Purpose – This paper aims to advance the debate regarding the use of training evaluation tools,
chiefly the Kirkpatrick model, in reaction to minimal use of the tools reported in the literature and the
economic changes that have characterised the industrialised world in the past 20 years.
Design/methodology/approach – The main argument – the need to design new evaluation tools –
emerges from an extensive literature review of criticism of the Kirkpatrick model. The approach is
deductive; the argument emerges from the extant literature.
Findings – The main findings of the literature review show that the major criticisms of the
Kirkpatrick model, though rigorous, are not relevant in today’s post-industrial economy. Issues of
complexity, accuracy and refinement, which are relevant in stable industrial organisations, must be
revised in the new economic world.
Research limitations/implications – This paper is based on a literature review and presents a call
for new research. As such, it is not grounded in original empirical evidence, beyond that presented in
the cited articles.
Practical implications – The paper calls for training evaluation tools that align better with modern
organisational reality. If the research community responds to this call, the results will benefit
practitioners directly. This paper also presents practical advice about the use of existing evaluation
techniques.
Originality/value – The paper takes a new angle on criticisms of existing training evaluation
systems: rather than reiterating classic criticisms based on logic and mathematics, it adopts a
pragmatic and economic approach. In doing so, it presents evidence of theoretically grounded
paradoxes that arise from existing criticisms of training evaluation.
Keywords Training evaluation, Human resource management, Organizational behaviour
Paper type Viewpoint
1. Introduction
When organisational behaviour and human resource management scholars approach
the topic of training evaluations, they agree on a starting point: Kirkpatrick’s
Hierarchical Model of Training Outcomes (1959a, b, 1960a, b, 1967, 1996a). According
to Kirkpatrick’s goal-oriented model, reactions, learning, behaviour and results
represent the four levels that explain the efficiency, effectiveness and quality of
training. Level 1, or reactions, refers to the emotional responses of trainees to the
training programme and therefore does not take into account any measure of learning.
Level 2, learning, pertains to the logics, methodologies and techniques acquired by
trainees, without considering their job-related application. Level 3, or behaviour,
relates to the real usage of new principles and practices learned by trainees to modify
and improve their behaviour and performance at work. Finally, level 4, results, entails
the impact of training on, for example, costs, productivity, quality or morale,
depending on the desired results of the training. The model is hierarchical, because the
perceived satisfaction of trainees (reactions) has an impact on the inclination to study
and learn, which in turn can drive behaviour to the extent that it generates
organisational results. During the almost half a century since its first version, the
Kirkpatrick model has significantly influenced the development of other models (e.g.
Warr et al., 1970; Hamblin, 1974; Cannon-Bowers et al., 1995; Kaufman et al., 1995;
Molenda et al., 1996; Phillips, 1997, 2003; Cascio, 1999) and dominated debate on the
topic (e.g. Noe, 1986; Plant and Ryan, 1992; Olsen, 1998; Warr et al., 1999; Blanchard
et al., 2000; Tracy et al., 2001), as well as stimulating contributions from many
practitioners (e.g. Swanson and Sleezer, 1987; Talbot, 1992; Mann and Robertson, 1996;
Abernathy, 1999; Toplis, 2001; Tyler, 2002). In a similar way, it has prompted a certain
amount of criticism (e.g. Clement, 1982; Alliger and Janak, 1989; Tannenbaum and
Woods, 1992; Brown, 2005; Sitzmann et al., 2008), including Holton’s (1996) attack and
Kirkpatrick’s (1996b) response.
For almost half a century, the Kirkpatrick model also has been the focal point of
discussions about how and why training should be evaluated, and research has tried to
extend it in different directions. Nevertheless, modern society and the economic
infrastructure in which we move today are very different from those in which
Kirkpatrick developed his model. Today, the principal agent of the economy is not the
corporation, as it was 50 years ago. New technologies and social emancipation give the
individual a much more dominant role in the economic playing field (Friedman, 2005).
Tightly coupled organisational systems, which characterised the time when the first
training evaluation models were developed, have been replaced by loosely coupled
networks of individuals, in which mobility is both a skill and a requirement. New terms
also characterise the economy, such as the e-lance economy (Malone and Laubacher,
1998), the knowledge economy, or the post-industrial economy (Sculley and Byrne,
1987). In this emerging world, the knowledge worker is the centre of productivity and
economic life and must be nurtured, challenged and constantly pleased to remain with
an organisation (Alvesson, 2000). In this world, the individual also must constantly
learn, just as organisations must become learning organisations (Senge, 1990).
Learning (and training) is the key to survival, but are the traditional ways to measure it
still valid and relevant?
The effects of these ongoing societal changes on training, its meaning and its evaluation
have largely been ignored by the research community. Yet practitioners have likely
incorporated these changes into their modus operandi, without reconsidering the reasons
for, and the impacts of, their decisions.
This article presents a discussion of training evaluation that takes into account the
consequences of socio-economic changes, with the aim of opening a debate about the
future challenges, for researchers and practitioners alike, when dealing with training