A Study of Outcomes Determination Strategy in Non-profit Health and Welfare Programs Evaluation

Authors: Susan Evans, John Bellamy
DOI: 10.1177/1035719X1701700304
Published: 1 September 2017
Subject matter: Academic Article
ACADEMIC ARTICLE | Evaluation Journal of Australasia | Vol 17 | No 3 | 2017 | pp. 23–31
SUSAN EVANS | JOHN BELLAMY
A study of outcomes determination strategy in non-profit health and welfare programs evaluation
This article considers the ndings of a
survey of evaluators practising in Australian
health and welfare organisations to learn
about strategy in outcomes evaluation
design. Outcomes evaluation is identied
as an opportunity for health and welfare
programs to learn if and how dierent
practice interventions are contributing to
the achievement of desired outcomes, for
the purpose of building an evidence base
for advancing the wellbeing of people.
Results indicate that current strategy to
determine client outcomes in these sectors
is typied by reliance on program logic
models and funding specications, with
limited translation of best practice evidence
or client wellbeing preferences. We suggest
that clarity of purpose is an important
consideration for non-prot programs
embarking on evaluation design, bringing
attention to the need for skill development
and better resourcing of non-prot
organisations to support the production of
evaluations capable of contributing to best
practice evidence.
Introduction
Outcomes evaluation is one of the most significant
innovations for the management of non-profit health and
welfare services in recent times. There are two key areas of
information that outcomes evaluation studies are expected
to address: first, an accurate description of a program,
including an explicit articulation of its underlying logic
or change theory, and identification of desired program
outcomes. Secondly, measurement of what has changed
for people using a program, and the degree to which a
program was responsible for the reported or observed
changes (Goodrick, 2013). Increasingly, evaluation of
health and welfare programs involves measurement of
outcomes pre-determined by funders (Salter & Kothari,
2014). Managing outcomes evaluations to service current
funding and reporting requirements is a significant process
in most government funded organisations, with increasing
literature on how organisations are taking action to develop
evaluation capacity (see Herbert, 2015; McCoy, Rose &
Connolly, 2013).
It is worth highlighting that the increasing requirement
for non-profit organisations to demonstrate program
effectiveness through evaluation is occurring in an
unevenly resourced sector (Alston, 2015; Fitzgerald,
Rainnie, Goods & Morris, 2014). In larger or better
resourced organisations, skilled evaluators who are often
working in teams have a better chance of developing an
organisation’s ‘evaluation literacy’, and are better placed
to demonstrate the value of programs to government
through evaluation reporting. Smaller organisations
may struggle with the rudiments of evaluation design
and data collection, have less capacity to design robust
evaluation, and are often worse off in their capacity to
demonstrate the value of their programs. Recognition
of the discrepancy in organisational capacity to conduct
program evaluation is driving small-scale collaborative
eorts across some smaller and medium sized
organisations sharing evaluation know-how, as evidenced
in networks we are involved with, including the NSW
Non-Government Organisation Research Forum and the
