Design-based implementation research

Quality Assurance in Education, Vol. 25 No. 1, 2017, pp. 26-42
DOI: https://doi.org/10.1108/QAE-11-2016-0077
Published: 06 February 2017
Subject: Education; Curriculum, instruction & assessment; Educational evaluation/assessment
Paul G. LeMahieu
Carnegie Foundation for the Advancement of Teaching, Stanford,
California, USA
Lee E. Nordstrum
RTI International, Edina, Minnesota, USA, and
Ashley Seidel Potvin
School of Education, University of Colorado Boulder, Boulder, Colorado, USA
Abstract
Purpose: This paper is the second of seven in this volume elaborating different approaches to quality improvement in education. It delineates a methodology called design-based implementation research (DBIR). The approach is aimed at iteratively improving the quality of classroom teaching and learning practices in defined problem areas through collaborations among researchers, practitioners and other education stakeholders.
Design/methodology/approach: The paper describes the origins of the approach in US education, along with its foundations, core principles and a case application of DBIR in practice. The case focuses on the specific problem of teaching science and genetics in primary and secondary schools in a district.
Findings: The guiding principles of DBIR are: a focus on persistent problems of classroom educational practice; iterative and collaborative design and testing of innovations through partnerships between researchers and practitioners, involving multiple stakeholders’ perspectives; a concern with developing theory related to both implementation processes and classroom learning outcomes, using systematic inquiry; and development of the capacity of both researchers and practitioners to sustain changes in educational systems.
Originality/value: Few theoretical treatments and demonstration cases are currently available in US education that examine common models of quality improvement, particularly DBIR. By engaging practitioners with researchers in designing, testing and implementing reforms meaningfully, DBIR shows promise in offering significant on-the-ground benefits. This paper adds value by allowing readers to compare the DBIR method with the other improvement approaches explicated in this volume.
Keywords: Quality improvement, DBIR
Paper type: Research paper
Introduction
For decades, educational reformers and researchers have sought to identify educational programs and classroom interventions that are effective across a wide variety of contexts and student populations. The continual search for effective educational programs validated by research-based evidence, set against the widely differing impacts observed even when evidence-based “effective” programs are implemented in real school settings, presents an apparent paradox in education (Bryk et al., 2015; Datnow, 2002; Datnow et al., 2002; Lynch et al., 2012; Spillane et al., 2002). The design-based implementation research (DBIR) approach to quality improvement arose as a response to this perennial dilemma, showing through its application that educational research can be carried out in ways that positively affect intended practices and outcomes in classrooms and schools, through systematically forged partnerships
between researchers and practitioners that are egalitarian and thoughtful (Fishman et al., 2013; Penuel et al., 2011).
Compared to the other quality improvement methods described in this volume, DBIR is young in education. It does not have literature-supported guidance on specific steps, stages or processes for adherents to follow. However, the method does specify four guiding principles that define it:
(1) A focus on persistent problems of practice in education systems from multiple
stakeholders’ perspectives (e.g. students, teachers, parents, leaders or instructional
aides).
(2) A commitment to iterative and collaborative design of programs or change
interventions, to achieve desired outcomes.
(3) A concern with developing theory, knowledge and practice-based expertise related to
both program implementation (processes) and classroom learning (outcomes)
through systematic inquiry.
(4) A concern with developing organizational capacity for sustaining change
improvements in systems.
Purpose
This article traces the history of DBIR and its use in the field of education and describes the four guiding principles within the context of a case example (STEMGenetics). Following the case example, it describes the ways in which researchers and practitioners collaborate to identify problems to address with DBIR and negotiate research agendas, develop solutions through iterative design cycles and enact change through a commitment to the four design principles. Finally, the article explores the way in which a DBIR approach seeks to spread knowledge within and across organizational contexts where the method might be implemented.
Historical roots of design-based implementation research in education
DBIR in education was developed as an approach for fostering organizational change
and quality improvement and is a relatively recent addition to the quality improvement
landscape. The rst formal instantiations of this approach in US education are the
Strategic Education Research Partnership (SERP; detailed in the following section) and
the Learning Technologies in Urban Schools Center (LeTUS), which arose in the early
2000s.
However, DBIR shares common characteristics with a number of theoretical approaches to educational research and evaluation oriented toward program development. Five of these, namely traditions in program evaluation and evaluation research (Rossi et al., 2003; Fitzpatrick et al., 2016), community-based participatory research (Whyte, 1991; Reason and Bradbury, 2005; Chevalier and Buckles, 2013), implementation research (Fixsen et al., 2005), design-based experimentation (Brown, 1992; Cobb et al., 2003) and social design experimentation (Gutiérrez and Vossoughi, 2010), are discussed next:
(1) Program evaluation and evaluation research, as the names suggest, deal with a family of theoretical models, methods, principles and tools of inquiry for evaluating interventions or innovations. Evaluations can be performed either formatively (i.e. in the interest of making programmatic improvements) or summatively (i.e. to inform judgments about efficacy, effectiveness, efficiency or worth), so as to ameliorate given social or educational problems. There are three