The criminal justice system as
a problem in binary classification
William Cullerne Bown
London, UK
Abstract
Attempts to establish a quantitative framework for thinking about the criminal justice system
have been made at least since Kaplan’s influential 1968 article. Here I avoid the probabilistic
approaches that Kaplan inspired and instead characterise the law’s underlying problem as one
of measurement. I then solve it by exploiting statistical techniques developed in recent years in other disciplines to evaluate systems that face the same challenge of 'binary classification'.
This approach entails the mathematisation of the criminal justice system’s core epistemic
concern of distinguishing the guilty from the innocent with Van Rijsbergen’s F-measure and
empirical measurements of effectiveness. Once one adopts the perspective of a sovereign, it
yields a meta-meta-epistemology that allows traditional arguments like those that refer to
Blackstone’s ratio to be made rigorous. This provides a clearer relationship between values
and policies and, in a narrowly epistemic sense, a complete answer to questions of evidence
and procedure.
Keywords
Blackstone’s ratio, standard of proof, meta-epistemology, F-measure, empiricism, rape
If we define n as the ratio of false acquittals to false convictions and allow X to be a policy in criminal law, then we can see that arguments in the form 'because n, X' have been important in Anglo-Saxon jurisdictions since Blackstone. Consider for example Blackstone's dictum, that '... the law holds that it is better that ten guilty persons escape than that one innocent suffer'. This is part of a longer statement in the Commentaries, insisting that presumptive evidence should be admitted only cautiously and, in particular, that two rules of thumb should be adhered to, one of which is to never convict of murder or manslaughter unless the body can be produced. Thus the dictum is evidently part of a longer statement that is in the form 'because n, X'.
Although his preference for n is vaguely defined, Justice Harlan's concurring opinion concerning the standard of proof in In re Winship in 1970 took the same form:
I view the requirement of proof beyond a reasonable doubt in a criminal case as bottomed on a fundamental value determination of our society that it is far worse to convict an innocent man than to let a guilty man go free.
Today, Epps (2015) reports that the same kind of reasoning is widely relied on in the United States to
justify many of the most fundamental policies of the criminal justice system. He articulates the reasoning
in the form of what he calls ‘the Blackstone principle’:
Blackstone’s ten-to-one ratio and its variations can’t be taken literally. There’s no way to measure the exact
ratio between the false convictions and false acquittals our system creates, and no one seriously advocates that
it is critical to strive for exactly ten false acquittals for every false conviction. Instead, the ratio serves as
shorthand for a less precise—but still important—moral principle about the distribution of errors: we are
obliged to design the rules of the criminal justice system to reduce the risk of false convictions—even at the
expense of creating more false acquittals and thus more errors overall.
The starting point is again Blackstone's ratio; the conclusion, although it lies outside the quoted passage itself, is, we understand, a range of policies, or Xs.
The importance of this kind of reasoning is suggested by Shapiro’s account of the crisis in English law
in the early modern period when it lost access to divine insight through the medium of Christian
conscience. The sticking point then was proof, and a vital part of the response was the development
of policies that allowed the law to convince society at large that the jury retained the divine spark
(Shapiro, 1991: 241). ‘Because n, X’ arguments assist in this task by providing a kind of justification for
policies that has one foot in a modern, quantitative sensibility that was emerging then and, outside the
law, has become ever more central.
The question is, by what steps do we get from n to X? For example, from Harlan, all we get is
‘bottomed on’. Since Kaplan, attempts have been made to provide a formal answer, at least with regards
to the standard of proof, through probabilistic methods. A useful summary is provided in Walen’s recent
account of what he calls the ‘consequentialist’ approach (Walen, 2015).
There are two important points about all this work. First, consideration of all four possible outcomes in Walen's equation 5—a true positive (rightful conviction), false positive (false conviction), true negative (rightful acquittal) or false negative (false acquittal)—is narrowed to just the false negatives and false positives (equation 6).¹ This narrowing move can be achieved by assuming that the true positives and true negatives have either no value or the exact opposite value of their false counterparts (DeKay, 1996: 116). The justifications given for this simplification vary from author to author. Kaplan's original paper, still cited without comment by for example Stein, considered the move of no substantive consequence (Stein, 2005: 172).² Walen, who in his turn makes the same move, both describes principled reasons and alludes to the difficulty of making the problem tractable otherwise (2015: 407). Second, the end result is that the standard of proof is to be set in order to achieve a predetermined ratio of the risk of false negatives to false positives, an objective that often now goes under the heading of 'the distribution of errors'.
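
To see the arithmetic this tradition ends in, footnote 1 below gives the Kaplan/Walen form of the standard of proof. Under the conventional reading of Blackstone's ratio as making a false conviction ten times as costly as a false acquittal (an interpretive assumption for this illustration, i.e. V_CI = 10 V_AG), the formula yields:

\[
\mathrm{SOP} \;=\; \frac{1}{1 + V_{AG}/V_{CI}} \;=\; \frac{1}{1 + \tfrac{1}{10}} \;=\; \frac{10}{11} \;\approx\; 0.91,
\]

that is, a threshold probability of guilt of roughly ten-elevenths.
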
However, DeKay already concluded in 1996 that, 'To the extent that jurors', judges' and legal scholars' notions of correct standards of proof are based on desires to bring about particular error ratios, such notions are founded on presumptions that are fundamentally invalid' (1996: 132). DeKay's argument remains unrefuted and, for separate reasons that this article is too short to contain, I consider his
1. Walen (2015: 359). This yields an equation in a form that will be familiar to anyone who has read Kaplan or the work of his followers: SOP = 1/[1 + V_AG/V_CI], where V_AG is the value of acquitting the guilty, and V_CI the value of convicting the innocent.
2. What Kaplan says is: ‘For convenience we will deal not directly with utilities but disutilities, since the problem is more easily
phrased in terms of avoiding certain consequences than in terms of achieving others’ (Kaplan, 1968: 1071).