Evaluation Research and Criminal Justice: Beyond a Political Critique

Author: Max Travers
Published: 01 April 2005
DOI: http://doi.org/10.1375/acri.38.1.39
The Australian and New Zealand Journal of Criminology, Volume 38, Number 1, 2005, pp. 39–58
Evaluation Research and Criminal Justice:
Beyond a Political Critique
Max Travers
University of Tasmania, Australia
This article is intended to stimulate reflection and debate about the relationship between pure and applied research in criminology. The central argument is that evaluation research, which has almost become a dominant paradigm in researching criminal justice, has lower methodological standards than peer-reviewed social science. The article considers this case in relation to quantitative and qualitative methods, and examines examples of a ‘flagship’ and a ‘small-scale’ evaluation. It concludes by discussing the implications for evaluators (who are encouraged to employ a wider range of methods), funding agencies and criminology as an academic discipline.
There has been considerable disquiet among critical criminologists in both
Australia and the United Kingdom about the rise of evaluation as a research
paradigm (Hillyard, 2001; Israel, 2000; O’Malley, 1996; White, 2002). It has been
suggested that this has a managerial bias, and serves the needs of the powerful; and
that there are tremendous institutional and financial pressures to conduct this kind
of research. It is not, however, often recognised that academics not known for their
political radicalism (Hood, 2001; Pawson & Tilley, 1997) and many professional
evaluators are also concerned about the type of research being done in this field.
This article seeks to unpack this issue, in a provisional way, by focusing on a
complaint that is not directly political. This is the charge that most evaluation
research is methodologically poor, and intellectually uninteresting, when assessed by
the standards employed in the academic peer-reviewed disciplines of criminology and
sociology. The paper will consider this criticism in relation to quantitative and qualitative methods. It will also examine two examples of evaluation research: a well-funded ‘flagship’ project and a ‘small-scale’ evaluation conducted for a local agency.
This review of the methodological deficiencies of evaluation (which are acknowledged in the evaluation literature) raises disturbing issues for both academic criminologists and evaluators. In the first place, it suggests that applied research does not have
to be rigorous, in academic terms, to be useful; so claims that evaluation is a robust,
scientific discipline that produces ‘objective’ findings cannot be sustained. However, it
also raises difficult questions about method for criminologists, as many academic
studies use similar methods, but with a different political slant.
Address for correspondence: Max Travers, School of Sociology and Social Work, University of
Tasmania, Private Bag 17, Hobart, Tasmania 7001, Australia. E-mail: max.travers@utas.edu.au
The Political Critique of Evaluation Research
The main argument advanced by critical criminologists is that evaluation research
serves the needs of the powerful and has a managerial bias. Those teaching criminal
justice courses may have some sympathy with this critique as they often rely heavily
on evaluation reports.1 These always present an upbeat picture of organisations
struggling with and overcoming problems in a process of ‘continuous improvement’
that must reflect the views of those who commissioned the research, rather than
cynical and disaffected practitioners on the ground. Academics sometimes complain
that their reports are shelved or censored because they produce unpalatable findings
or recommendations (see, e.g., Morgan, 2000; White, 2001). One can also easily
imagine how researchers practise a form of self-censorship by steering clear of
anything that might be controversial or damaging to the sponsoring agency. Reports
do not, for example, contain lengthy interviews with staff about the grievances that
inevitably arise from successive cut-backs or organisational changes, or reveal
personality conflicts within management teams, or document abuses of power or
entrenched racist or sexist attitudes inside institutions.2 They also never criticise
government policy, although one would expect practitioners and managers to have
a range of political views. Academics working as consultants have sometimes criticised the implementation or success of evaluations (e.g., Hope, 2003), but they
never question the assumptions that inform government policy.
The second argument made by the critics, that there are considerable institutional pressures on sociologists and criminologists to do evaluation research, is also persuasive.3 Here one might add, from a British perspective, that many excellent theoretically informed studies of public sector organisations, none of them evaluations, were published, especially during the late 1960s and 1970s. Many of these are
highly critical and have generated healthy political debates about public policy
(e.g., Baldwin & McConville, 1977; Cain, 1973; Carlen, 1976; Cohen & Taylor,
1970). It is interesting to consider whether it was easier to pursue independent
research about official agencies, given that academics were not competing with
evaluators for the limited time of practitioners and managers, and perhaps also
because institutions were less sensitive about politically motivated criticism. One
should not, however, discount the difficulties faced by sociologists and criminologists, then and now, in publishing even mildly critical findings, or the fact that they
might have to conceal their objectives to obtain access.4
Although there is some basis for these arguments, they do not fully explain the
hostile or dismissive reaction that many academics have towards evaluation
research. For one thing, many academics with conservative views, or who are
uninterested in politics, are equally troubled by the pressures placed on universities
to become entrepreneurial (Marginson & Considine, 2000) and to do ‘useful’
research. In fact, the principal objection of critical researchers is, arguably, not the
managerial or political bias of evaluation research, but the fact that it has lower
methodological standards than the academic peer-reviewed disciplines of criminology and sociology, and does not offer the same intellectual interest or fulfilment as
traditional academic work.5 The next part of this article will explore this criticism
in relation to quantitative and qualitative methods.
The Charge of Low Methodological Standards
Before making some critical comments on standards, it is important to make a
distinction between two types of evaluations. ‘Flagship’ evaluations are well
resourced, involve the collection and analysis of data over many years, and generate
academic publications alongside reports for government agencies. ‘Small-scale’
evaluations, on the other hand, are conducted for local agencies or programs, and
researchers are usually required to submit what in another field is termed ‘quick and
dirty’ findings in a matter of weeks.6 Robson (2000) describes some of these as
‘pseudo-evaluations’, which agencies are required to conduct but which no one takes seriously or expects to result in changes in service delivery. One can make the strongest case for the charge of low methodological standards by focusing on ‘small-scale’ evaluations, although researchers working on ‘flagship’ projects also often have to make compromises in conducting applied research.
The view that evaluation research does not need to meet the same methodological standards as peer-reviewed academic research is not something anyone would
dispute, and is not intended in this paper as a snobbish criticism of evaluators for
not meeting these standards. It is clear from reading the numerous practical guides
on how to conduct evaluations that most evaluators aspire to conduct rigorous social scientific
research.7 However, it is acknowledged that the need to provide organisations with
recommendations that they can understand, and in a timely fashion within a
budget, makes this difficult in practice. As Carol Weiss notes, the skill lies in
making ‘research simultaneously rigorous and useful’ while ‘dealing with the
complexities of real people in real programs run by real organisations’ (Weiss, 1998,
p. 18). One should not expect to pursue cutting-edge intellectual questions, or
employ state-of-the-art methods in an evaluation; and one does not find this in
published reports.
To make the same point differently, one can become an evaluator without
having taken academic courses in social science research methodology. There are,
for example, civil servants in British government inspectorates who produce
thematic reports about the criminal justice system. These are sophisticated
examples of ‘evidence-based’ research which are presented to senior politicians. The
authors have often picked up these skills on the job, and the reports make no reference to the methodological issues that would interest academic social scientists.8
Similarly, the research methods texts in this field are often aimed at practitioners or
managers, and convey the basics of how to design a survey or conduct research
interviews, without going into the technical issues and debates one would find in an
undergraduate course. There is usually no discussion about different models of
explanation in social science, and it is assumed that one can describe outcomes and
organisational processes, and in many respects let the facts speak for themselves.
