Evaluation of Crime Prevention: Escaping the Tunnel Vision on Effectiveness

Published: 1 May 2016
The Howard Journal Vol. 55 No. 1–2, May 2016, pp. 238–254
ISSN 2059-1098. DOI: 10.1111/hojo.12155
DIETER BURSSENS
Researcher, National Institute of Criminalistics and Criminology,
Federal Public Service Justice, Belgium
Abstract: Within the field of crime prevention we have, for decades already, been provided with numerous research projects that study the effectiveness of crime prevention measures. Effectiveness is, without a doubt, a crucial element when evaluating these measures. Unfortunately, other important aspects of crime prevention evaluation are often overlooked, or are hardly ever subjected to criminological scientific research. In this contribution we highlight key components that are needed to develop a fully-fledged cost-benefit analysis, which is a vital tool for decision makers to avoid choosing prevention measures whose benefits are cancelled out by unexpected, or unknown, side effects.
Keywords: crime prevention; evaluation; evidence-based practice; proportional prevention
Recent Developments in the Evaluation of Crime Prevention
It is entirely normal that initiatives in the area of crime prevention are often the subject of thorough evaluation. The implementation of preventive measures is not without obligation: it goes without saying that the benefits must be sufficient to outweigh the time and resources that have to be invested. Evaluations are performed for various reasons and can take place at several points in time (De Peuter, De Smedt and Bouckaert 2007). Very often, an evaluation is carried out only after a measure has been introduced, mainly for the purposes of justification and adjustment. An evaluation may also be carried out earlier in the process, during the implementation stage, with a view to improving that introduction or implementation. However, an evaluation may also be carried out to prepare
for a policy or practical decision. In that case we need to verify whether
a preventive measure is useful and feasible, or whether we have to make
a choice between several possible initiatives or strategies. In general, the
evaluation of a programme can assess different domains: the need for the
programme; the programme’s design and its implementation; its impact
or outcomes; and its efficiency (Rossi, Lipsey and Freeman 2004).
© 2015 The Howard League and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd, 9600 Garsington Road, Oxford OX4 2DQ, UK
Scientific research undoubtedly makes an important contribution to
the evaluation of measures with a view to the prevention of crime. This is
mainly a matter of investigating the effectiveness of such measures, or, in
other words, the degree to which an initiative reaches its objectives. Such
research has received significant stimuli in recent decades to improve the research methods used, thanks among other things to the work of the Campbell Collaboration, which seeks to achieve policy decisions in which systematic, explicit and judicious use is made of the best available evidence, by introducing an evidence-based policy and practice framework (Davies and Nutley 1999; Davies, Nutley and Smith 2000). To this end, meta-analyses are performed and systematic reviews set up, in which research programmes are assessed and scored according to how well they fulfil the criteria of an experimental research set-up (Farrington et al. 2006).
Yet such a research set-up for the study of effectiveness is not without its
faults, according to Pawson and Tilley (1994). They note that the results
of such research are often self-contradictory and that it usually turns out
that they cannot be replicated outside the specific context in which the
research was carried out. Interventions appear to create different effects
depending on the context in which they are applied and the same inter-
vention can elicit different effects with different individuals (Tilley 2002).
With the introduction of the ‘scientific realist strategy’, Pawson and Tilley
(1994) strive towards further improvement of research methodology for
effectiveness studies. To this end, they aim at research set-ups that are far more theory-driven, attempting to map out empirically the underlying explanatory mechanisms of a theory and the influence of environmental features on the effectiveness of prevention programmes.
In themselves, these are very positive developments that help us to obtain a more accurate view of which prevention initiatives actually have an effect, or at least which are promising and which are not. This has encouraged supporters of evidence-based prevention to speculate openly that scientific effectiveness research should have a greater impact on the policy or practice decisions that have to be taken (see, among others, Leicester 1999; Nutley and Davies 1999; Pawson 2006). But therein lies a major problem. The evaluation of measures today suffers from too much tunnel vision, with evaluation research focusing solely on the effectiveness of initiatives (English, Cummings and Straton 2002), whether or not weighed against their financial cost. The literature often uses terms such as ‘evaluation’ or
‘evaluation research’ for what is actually only an evaluation of, or research
into, the effects (see, among others, Pawson and Tilley 1994; Rossi, Lipsey
and Freeman 2004; Tilley 2002). On the one hand, this can be attributed to the developments referred to above: the stimuli for reinforcing research methodology have ensured renewed attention and an increase in effectiveness research. On the other hand, Ellefsen (2011) argues that the rise
of New Public Management in most western countries plays an important
role in the way measures or programmes are evaluated:
[Ideas regarding management are evolving from] state-centred thinking about the exercise of political and institutional power to a new process of steering based on …