Using Root Cause Analysis for Evaluating Program Improvement

Published: 1 September 2012
Authors: Ralph Renger, Abigail Akande, Rebekah Coşkun
DOI: 10.1177/1035719X1201200202
Subject matter: Article
Abstract
A common evaluation purpose is to determine whether a
policy or program was implemented as intended; this is referred
to as formative evaluation, process evaluation, or evaluating
program improvement. A well-designed formative evaluation is
important in: detecting program drift; providing timely feedback
to program sta to make cost-saving mid-course corrections;
reassuring the sponsor that quality assurance measures are
implemented to protect investments; and interpreting impact/
outcome evaluation. A formative evaluation should not just
gather data on deviations from an anticipated course of action,
but provide recommendations for improvement. Current
methods for program improvement vary in their ability to
solicit targeted recommendations. Root cause analysis (RCA)
is a well-established, robust methodology used in a variety of
disciplines. RCA has been primarily used by evaluators operating
from a theory-driven orientation to evaluate the merit and
worth of a program or policy. Surprisingly, a review of the
literature suggests that RCA’s utility as a program improvement
tool has remained largely unrecognised in evaluation. This
article illustrates the application of RCA in evaluating program
improvement. The conditions under which RCA might be
preferred over other formative evaluation methods are
discussed.
Introduction
There are many purposes of program evaluation, including evaluating oversight and
compliance, merit and worth, and program improvement (Mark, Henry & Julnes
2000). The focus of this article is on the methods used for evaluating program
improvement. Formative evaluation, process evaluation, quality assurance, and
program improvement are all synonymous terms1 in the evaluation literature for
determining the extent to which a program or policy was delivered with fidelity.
Assessing program fidelity is important for two main reasons. First, results of
an outcome evaluation can only be clearly interpreted if it is first established that
the program was delivered with fidelity. If the program was not delivered as it was
originally intended, then it is impossible to determine whether the failure to observe
changes in outcomes was due to the design of the intervention or simply because
the program was not executed correctly (Chen 1990; Mills & Ragan 2000). Second,
assessing fidelity helps to detect program drift (Bond 1991). Detecting drift early
on can result in significant cost savings and/or the identification of alternative
implementation strategies (Reijers & Mansar 2005).
Program improvements can be: minor changes, such as the editing of an intake
form or other paperwork; moderate changes, such as training and staff development …
Rebekah Coşkun is a DrPH student at the Mel and Enid Zuckerman College of Public Health at the University of Arizona, Tucson, Arizona. Email: <bekahc@email.arizona.edu>

Abigail Akande is a PhD candidate in Rehabilitation in the College of Education at the University of Arizona, Tucson, Arizona. Email: <aakande@email.arizona.edu>

Ralph Renger is a Professor at the Mel and Enid Zuckerman College of Public Health at the University of Arizona, Tucson, Arizona. Email: <renger@u.arizona.edu>
Evaluation Journal of Australasia, Vol. 12, No. 2, 2012, pp. 4–14
REFEREED ARTICLE