Implementation Science. Understanding and finding solutions to variation in program implementation

Published date: 06 February 2017
Authors: Lee E. Nordstrum, Paul G. LeMahieu, Elaine Berrena
Subject matter: Education, Curriculum, instruction & assessment, Educational evaluation/assessment
Lee E. Nordstrum, RTI International, Edina, Minnesota, USA
Paul G. LeMahieu, Carnegie Foundation for the Advancement of Teaching, Stanford, California, USA
Elaine Berrena, Bennett Pierce Prevention Research Center, Pennsylvania State University, University Park, Pennsylvania, USA
Purpose – This paper is one of seven in this volume elaborating upon different approaches to quality improvement in education. This paper aims to delineate a methodology called Implementation Science, focusing on methods to enhance the reach, adoption, use and maintenance of innovations and discoveries in diverse education contexts.
Design/methodology/approach – The paper presents the origins, theoretical foundations, core principles and a case study showing an application of Implementation Science in education, namely, in promoting school–community–university partnerships to enhance resilience (PROSPER).
Findings – Implementation Science is concerned with understanding and finding solutions to the causes of variation in a program’s outcomes relating to its implementation. The core phases are: initial considerations about the host context; creating an implementation structure; sustaining the structure during implementation; and improving future applications.
Originality/value – Few theoretical treatments and demonstration cases are currently available on commonly used models of quality improvement in other fields that might have potential value in improving education systems internationally. This paper fills this gap by elucidating one promising approach. The paper also derives value, as it permits a comparison of the Implementation Science approach with other quality improvement approaches treated in this volume.
Keywords: Quality improvement, Implementation Science
Paper type: Research paper
Brief history of Implementation Science
Many experts suggest that Implementation Science arose in the eld of healthcare in
response to a persistent and documented form of service failure (Durlak and Dupre, 2008;
Meyers et al., 2012;Kelly, 2013). Promising and empirically tested interventions and
programs were not delivering expected results or showing a demonstrable impact on desired
outcomes. Even when they did, failures of transferability (i.e. failure to get interventions to
work in different contexts) brought an increasing concern about the complex nature of the
links between existing scientic evidence on programs and their actual application (Kelly,
In the eld of healthcare, concerns arose as early as the mid-1940s, when evidence began
to accumulate that interventions rolled out in clinical settings did not produce the outcomes
The current issue and full text archive of this journal is available on Emerald Insight at:
QualityAssurance in Education
Vol.25 No. 1, 2017
©Emerald Publishing Limited
DOI 10.1108/QAE-12-2016-0080
promised through empirical rounds of testing in controlled settings (Kelly, 2013). Initially,
inquiries focused on why these interventions and programs were not implemented
effectively and with delity. In the 1960s and 1970s, researchers also found that the design
and focus of policy had little to do with the successful implementation of programs, even
when the policy in question prescribed “empirically tested” programs (Pressman and
Wildawsky, 1984). Glasgow et al. (2012, p. 1,274) assert: “Despite demonstrable benets of
many new medical discoveries, we have done a surprisingly poor job of putting research
ndings into practice”. The authors make the point that the discovery of new and improved
interventions is important; but to realize the benets of these interventions, greater attention
needs to be paid to dissemination and implementation to enhance the reach, adoption, use
and maintenance of these new discoveries.
There is a growing body of literature asserting that the nature of implementation
processes actually inuences desired outcomes (Meyers et al., 2012;Kelly, 2012). Indeed,
researchers have found a powerful link among the behaviors, beliefs and values of
practitioners involved in implemented programs and the outcomes of that implementation
(Aarons et al., 2012). Practitioners should not carry sole responsibility for the act of
implementing tested interventions; rather, accountability for the quality of program
implementation should also extend to developers and researchers (Meyers et al., 2012).
Moreover, the role of intermediaries is emerging as a major requirement to ensure
high-quality and sustainable implementation.
Despite these ndings, disciplined attention to program implementation remains, for the
most part, an optional consideration of the scientic enterprise. Practitioners’ behaviors and
beliefs, contextual variables and implementation delity, among others, are not routine
considerations when striving for program effectiveness. Implementation Science, then, is a
product both of the increasing realization that the characteristics and dynamics of
implementation matter greatly for program effectiveness, and of the sobering realization that
most efforts overlook these aspects of programs. In 2006, the Implementation Science Journal
was launched to provide a scientically rigorous platform for the discussion of these very
issues. Today, platforms and networks are emerging with the objectives of encouraging
understanding and application of the principles of Implementation Science.
Brief history of Implementation Science in education
While Implementation Science has a relatively short history in the eld of education (the rst
Handbook of Implementation Science for Psychology in Education, edited by Kelly and
Perkins, was published in 2012), researchers across various contexts have highlighted many
of the same program implementation issues as being of key importance. For example, as in
healthcare, a body of literature has accumulated over the course of decades, suggesting that
the implementation characteristics of educational interventions and programs (e.g. the
individuals who implement, their beliefs about the program and themselves, the context in
which it is implemented) hold a great deal of influence over program outcomes.
The well-known Change Agent Study, conducted by the RAND Corporation (Berman and
McLaughlin, 1974, 1976), was a major turning point in educational research, orienting the
field’s attention toward understanding the process of implementation. The report series
considered a number of federally funded programs (“change agents”) to determine which of
these supported educational change, especially within the instructional core of classrooms
(the work in which schools and teachers engage), and which environmental factors in turn
affected change agent programs. The focus of the reports was not evaluative, but rather
constituted an attempt to describe the changes that occur during a program, how and why
they occur and what impact this has on the operations of educational organizations.