Practice Article
Evaluation Journal of Australasia
2022, Vol. 22(2) 108–125
© The Author(s) 2022
Article reuse guidelines:
sagepub.com/journals-permissions
DOI: 10.1177/1035719X211066961
journals.sagepub.com/home/evj
Lessons learned in evaluating system interdependencies using qualitative methods
Eric Souvannasacd, Heather Engblom and
Richard N. Van Eck
The University of North Dakota, School of Medicine and Health Sciences, Grand Forks, ND, USA
Ralph Renger
Just Evaluation Services, LLC, AZ, USA
Marc Basson
The University of North Dakota, School of Medicine and Health Sciences, Grand Forks, ND, USA
Abstract
This article shares lessons learned while evaluating the system interdependencies for a
clinical and translational research centre (CTR). It explores the methodological
challenge of discussing system concepts (e.g., cascading failures, feedback loops, and
reflex arcs) with layperson participants during evaluation interviews. The article discusses the iterative process of moving away from structured interview approaches in favour of an open narrative approach to data collection, and the lessons learned.
Keywords
evaluation, interview methods, qualitative methods, system interdependencies,
systems concepts, systems evaluation
Corresponding author:
Eric Souvannasacd, School of Medicine and Health Sciences, The University of North Dakota, 1301 N. Columbia Road, Grand Forks, ND 58202-9037, United States.
Email: eric.souvannasacd@und.edu
Introduction and overview
Evaluating systems is recognised as a high-priority area for evaluation (Patton, 2013).
There are many definitions of systems and system interdependencies (Monat &
Gannon, 2015). While there is no universal definition of system interdependencies,
we use the American Evaluation Association's definition: "a set of interrelated
elements that interact to achieve an inherent or ascribed purpose" (AEA, 2018, p. 6).
This definition works best for our evaluation of system interdependencies and is
consistent with other literature that attempts to define systems and the evaluation of
systems (Arnold & Wade, 2015; Cabrera & Trochim, 2006; Capra & Luisi, 2014;
Checkland, 1999; Hummelbrunner, 2011; Meadows, 2008; Puranam et al., 2012; Renger, 2015).
In evaluation practice, interdependence looks like information exchange (Davis &
Stroink, 2016), relationships between system parts (Ison, 2008), and places where there
are seams (Jackson et al., 2012) or handoffs. One way to understand where these
handoffs occur is to map out the system (Renger, 2015). Some refer to this map as a
standard operating procedure (Checkland, 2000; Renger et al., 2020). Standard
operating procedures (SOPs) identify where expected handoffs should occur and the
process entailed for successful transitions. Pinpointing where the system
interdependencies occur is an essential first step to understanding what to evaluate
(Davidson, 2005; Renger, 2015).
The purpose of this article is to share our lessons learned in evaluating the system
interdependencies of the National Institutes of Health-funded Dakota Cancer
Collaborative on Translational Activity (DaCCoTA). The DaCCoTA focuses on
decreasing cancer rates and cancer effects within North and South Dakota. The
DaCCoTA system supports researchers with technical assistance, professional
development, and funding for cross-disciplinary teams to mitigate cancer and
eventually obtain external funding to continue their work. The DaCCoTA system
consists of seven cores: (a) Pilot Projects Program, (b) Administration, (c)
Community Engagement and Outreach, (d) Professional Development, (e)
Biostatistics, Epidemiology, and Research Design, (f) Clinical Research Resources
and Facilities, and (g) Tracking and Evaluation. For the system to function
effectively, these cores must synchronise their specific processes to support each
project through the sequence in a useful and effective manner.
Our challenge was to create an evaluation strategy that both captured the intricacies
of system interdependencies (the seams within and between cores) and enabled our
stakeholders (staff, applicants, and grantees) to share information in a meaningful way
(McCormack et al., 2016). We initially intended to collect information solely through
structured interviewing. However, during post-interview internal debriefs, our
evaluation team perceived dissonance within some interview interactions, which we
attributed primarily to an imperfect alignment between systems nomenclature and
stakeholder experience. We engaged in an iterative process of reflection (McKenney &
Reeves, 2014) and refinement of our interview approach, moving from a prescriptive,
closed questioning approach that emphasised systems concepts and terminology to a more
open narrative approach.