Finding common ground: Centralizing responsiveness within a multisite initiative

Robyn Thomas Pitts
University of Denver, USA

Evaluation Journal of Australasia, 2020, Vol. 20(2), 95–102
Praxis
Published: 01 June 2020
DOI: 10.1177/1035719X20921562
© The Author(s) 2020

Corresponding author: Robyn Thomas Pitts, Research Methods and Statistics, Morgridge College of Education, University of Denver, 1999 East Evans Avenue, Denver, CO 80208, USA. Email: robyn.thomaspitts@du.edu
Abstract
In this reflective analysis, I describe the conditions that led the leaders of a multisite initiative to adapt their program model towards a framing that centralized responsiveness as an organizing value. After reflecting on how this shift affected the initial evaluation plan, I provide a synopsis of how we went about revising our evaluation strategy. The reimagined approach centred on eliciting and characterizing various site-specific aspects of culture and context that may influence outcomes attainment across the multisite initiative. This approach enabled comparative analysis of the various innovative, place-based expressions of the initiative across its sites. Reframing the evaluation strategy also enabled future comparative mixed methods study of maximal variation cases and qualitative comparative analysis of specific conditions related to outcomes attainment. Findings suggest that evaluators would benefit from embracing adaptive change in programs and expecting heterogeneity in multisite initiatives.
Keywords
context, culture, evaluator education, multisite initiatives, responsiveness, values
In this praxis note, I reflect on redesigning an evaluation strategy that centralized responsiveness as an organizing value within a multisite initiative. As a form of adaptive change (Heifetz et al., 2009), leaders elected to modify the general logic of the initiative during the first year of implementation. After describing these shifts and how
we revised our evaluation strategy, I present how the framework of values we developed facilitated our evaluation. Finally, I reflect on our lessons learned and provide practical implications regarding adaptive change and expecting heterogeneity (Conner, 1985) in evaluation.
Background
We began working with leaders of a multisite educational initiative in late 2017. Its overarching goal was to provide resources (e.g., staff members, curricula and training) to various partner sites in order to increase learners’ knowledge of and engagement in civic learning. Sites consisted of primary/secondary schools and enrichment programs housed within community organizations (e.g., museums) that served children living in low-income and/or rural communities. The number of sites varied across years, ranging from 8 to 10. Evidence of the need for the initiative had been developed as part of a multiyear effort to explore the localized challenges and opportunities of civic education. A logic model had been developed by initiative leaders as part of the grant-writing process. It depicted how newly hired staff members would be trained to teach one of three evidence-based curricula from similarly focused national programs (i.e., staff members would facilitate one of three treatment conditions). The initiative was funded by a federal agency.
Our work to generate an evaluation strategy began after the initiative was developed, funded and underway. We conducted this evaluation as a team consisting of an evaluator educator (i.e., an evaluator working in academia) and four graduate students studying research methods and statistics. Through collaboration with initiative leaders across a handful of introductory meetings, we crafted an outcomes-based evaluation. The primary goal of the evaluation was to determine the extent to which the initiative achieved its target outcomes of building capacity for and increasing civic engagement and learning, using pre-established measures. Secondary goals included refining the existing program theory (Funnell & Rogers, 2011) and analysing curriculum implementation at each site through document review, interviews and observations. Findings related to the secondary goals would also help to ensure that all participants had been provided with an adequate opportunity to learn (Moss et al., 2008).
As a group of evaluation teams that exists within a university-based collaboratory (i.e., collaborative learning lab), our work is undertaken through graduate assistantships and coursework. Teams are required to design and implement evaluations that identify and/or build evidence sequentially along the five domains of evaluation questions and methods (Rossi et al., 2018). With structured oversight, graduate students justify their use of various heuristics that frame evaluative practice including Scriven’s (2019) key...
