Making values explicit in evaluation practice

Academic Article

Evaluation Journal of Australasia
2019, Vol. 19(4) 162–178
© The Author(s) 2019
Published: 1 December 2019
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/1035719X19893892
journals.sagepub.com/home/evj
Amy M Gullickson
The University of Melbourne, Australia

Kelly M Hannum
Aligned Impact LLC, USA

Corresponding author: Kelly M Hannum, Aligned Impact LLC, P.O. Box 41233, Greensboro, NC 27404, USA. Email: kelly@alignedimpact.com
Abstract
Values play a fundamental role in the evaluation process; however, evaluators and
evaluation training have tended to focus on research methodology. Much less emphasis
has been placed on explicit attention to values and valuing, and the steps necessary
to justify those aspects of evaluation conclusions. In this article, we argue that to
improve evaluation practice, we need to make values an explicit part of the evaluation
process. Research done in other disciplines can offer assistance towards this end. We
first provide a general discussion of basic practical applications of value theories. Then,
we offer an example of how one theory of individual values, Schwartz's Theory
of Basic Human Values, can inform our work, and we encourage further exploration of
approaches for incorporating value theories into evaluation practice.
Keywords
evaluation, logic of evaluation, theory of basic human values, values, valuing
Introduction
A common definition of evaluation comes from Scriven (2007): 'Evaluation is the process of determining the merit, worth, or significance; an evaluation is the product of that process' (p. 1). Another definition, from Stake (1977), suggests 'Both description and judgement are essential – in fact, they are the two basic acts of evaluation . . .
To be fully understood, the educational program must be fully described and fully
judged’ (p. 374). No matter how you define evaluation, evaluative judgement is, or
should be, part of evaluation. There can be no judgement without an axiological stance,
that is, a theory about what is valuable and why.
For decades, values have been discussed in the literature as important to the task of evaluation (see Greene, 2005a, 2011; House, 1980, 2014; House & Howe, 1999; Julnes, 2012a, 2012b; Macdonald, 1979; Schwandt, 1997; Scriven, 1991, 2007, 2012; Shadish, 1998). However, values theories and approaches remain only loosely connected to evaluation education and practice. Research on evaluation education demonstrates that teaching regarding values has not been explicit (King & Ayoo, in press). A scan of evaluator competencies shows that only the Aotearoa New Zealand Evaluation Association's (2011) and the Australian Evaluation Society Professional Learning Committee's (2013) lists include explicit reference to values in the evaluation process and competencies related to it. If valuing is indeed at the heart of our work, then evaluation as an emerging profession needs help connecting theory to practice with regard to this topic. Other disciplines have conducted extensive research on mapping and working with different types of values; that research can assist. In this position paper, we offer a first step in this journey by connecting evaluation practice to values theory work from another discipline, culminating in a brief checklist that relates values to the various tasks in conducting an evaluation. We offer one perspective on an approach herein; further exploration of approaches for incorporating value theories into evaluation practice will be essential to continue strengthening the connection to practice.
The importance of values in evaluation
Have you ever read a 'Best of' anything list and been suspicious about who determined 'best' and how? So have we. It is impossible to determine the best or even the good without understanding and accounting for what is valued and by whom. Decisions about what gets evaluated, the focus and nature of evaluation, what evidence is (or is not) deemed credible, and who makes the narrative and judgement about merit, worth and significance are all rooted in an axiological stance which, by definition, is not and cannot be objective. In Figure 1, we have used dots to represent how values permeate the evaluation process; this happens whether or not the values are explicitly acknowledged. Values inform (1) which programming and evaluation efforts are deemed worth pursuing; (2) what kinds of programme and evaluative approaches are seen as credible and appropriate; (3) what kinds of criteria are deemed to best capture 'value'; (4) what kinds of data sources and types of information will be perceived as credible to support evaluative claims, for example, what will be measured and how; (5) the most appropriate and accurate methods for combining information to reach an evaluative conclusion; and (6) whose perspectives matter most in the valuing process and narrative when it comes to reporting and decision making.
Figure 1. Values infusing the evaluation process.

Messick (1975) illustrated this in his description of how values are present in just one aspect of evaluation – measurement (see #4 above):

Value considerations impinge upon measurement in a variety of ways, but especially when a choice is made in a particular instance to measure some things and not others. The selection of a subset of variables from the range of possibilities implies priorities, that for a particular purpose some things are more important to assess than others (Messick, 1963, 1965). (p. 962)
What 'good' is depends on the values employed by those making the determination, yet there is limited evidence that evaluators employ values theories in practice. Evaluators and evaluation training programmes have tended to focus on methods (Gullickson et al., 2019) that describe the context, content and outcomes of programmes. Hearkening back to Stake (1977), the majority of coursework tends to focus on specific approaches and methodologies that describe, while largely ignoring the knowledge and skills necessary to judge what is good. Although several authors have discussed the critical role that values play in evaluation (e.g., Greene, 2005b, 2011; House & Howe, 1999; Julnes, 2012a, 2012b; Kirkhart, 2010; Schwandt, 1997; Smith, 1981), there has been limited uptake in applying values in evaluation practice beyond evaluators describing their own values. Rarely do evaluations actually lay out an evaluative judgement and provide an evidential basis for it (Nunns et al., 2015; Roorda, 2018). In most cases, evaluators provide descriptive evidence and fall short of putting that evidence in the context of the values driving what is deemed 'good'. Julnes (2012a) suggested that 'Evaluators have often been unreflective, and even sloppy, in their approaches to valuing' (p. 4).
Values as part of evaluation practice
Given the central role of values in evaluation, it is curious as well as concerning that very little attention is paid in practice to the role and explicit application of values in the evaluation process. Notions of objectivity and validity imported or adapted from research may have contributed to the belief, erroneous in our opinion, that values are too subjective or amorphous to take on in an evaluation. Scriven (2012) attributes this to the fact–value dichotomy in science, which seeks facts and treats them as objective or impartial in order to reduce the contaminants and confounds of context. However, values not only permeate the entire evaluation process, they are also essential for establishing what good looks like for this evaluand, in this context. That is why the exact same evidence in a different context, or from different stakeholders' perspectives, can lead to a different evaluative judgement: 'good' in one context may not be 'good' in another. Different evaluative judgements based on similar information in different contexts are not a failing of evaluation's objectivity; they are instead essential to the nature of evaluation. Those different evaluative judgements can be established with warranted reasoning. Thus, the transparency and quality of the evidence used to support arguments, and the processes by which that evidence is synthesized into a judgement about goodness, are critical to share. After all, although systematic and defensible processes for reaching evaluative judgement are not exclusive to evaluation, they are largely what separate evaluation from other disciplines (Crane, 1988).
Consider the following example of separating facts and values. Douglas Adams' (1980) book The Hitchhiker's Guide to the Galaxy conveyed the story of Deep Thought, a computer programmed to calculate the meaning of life. The answer turned out to be 42. The number 42 may provide the 'fact' of the matter, and it may very well be a high-quality bit of information, but other information, including values, is needed to make sense of it. The Deep Thought example illustrates that, with some exceptions, facts without context are rarely useful.
House (2001) noted that most evaluative claims are, and should be, blends of facts
and values, which implies that both aspects and their relationship to one another should
be made explicit. The...
