Political Studies
2021, Vol. 69(1) 66–88
© The Author(s) 2020
DOI: 10.1177/0032321719890805
journals.sagepub.com/home/psx
Demographics and (Equal?)
Voice: Assessing Participation
in Online Deliberative Sessions
Ryan Kennedy1, Anand E Sokhey2,
Claire Abernathy3, Kevin M Esterling4,
David MJ Lazer5, Amy Lee6,
William Minozzi7 and Michael A Neblo7
Abstract
Critics of deliberative democracy have worried that deliberation may mirror (or even exacerbate)
inequalities in participation across categories such as gender, race, and age. Accordingly, we
investigate the potential for technology and design to ameliorate these concerns, looking at the
extent to which online deliberative sessions facilitate inclusive participation. In a large study of
online deliberation (over 1600 participants nested in hundreds of online sessions), we examine
differences in the amount and nature of participation across demographic categories, as well as
the effect of forum characteristics on such differences. Though our results are mixed, we read
them with cautious optimism: the online format is not immune to inequalities in participation and
satisfaction, but we do not observe differences across some demographics, and most observed
differences are substantively minor. Moreover, features of online deliberation environments show
promise for addressing some of the problems plaguing in-person designs.
Keywords
communication technology, deliberation, participation, gender, race, age
Accepted: 18 October 2019
1Department of Political Science, University of Houston, Houston, TX, USA
2Department of Political Science, University of Colorado, Boulder, CO, USA
3Department of Political Science, Stockton University, Galloway, NJ, USA
4Department of Political Science, University of California, Riverside, CA, USA
5Department of Political Science, Northeastern University, Boston, MA, USA
6Institute for Democratic Engagement and Accountability, The Ohio State University, Columbus, OH, USA
7Department of Political Science, The Ohio State University, Columbus, OH, USA
Corresponding author:
Ryan Kennedy, Department of Political Science, University of Houston, 3551 Cullen Blvd., Room 447 PGH,
Houston, TX 77204-3011, USA.
Email: rkennedy@uh.edu

Introduction
Theorists have long argued that deliberation has the potential to improve democratic
practice (e.g. Chambers, 1996; Habermas, 1984), and in recent years empirical research on
deliberative practice has proliferated (for reviews, see Neblo, 2005, 2015; Thompson,
2008). Deliberation has been credited with producing a number of democratic goods,
including increased political knowledge/learning (e.g. Barabas, 2004; Esterling et al.,
2011), increased tolerance for opposing views (e.g. Gutmann and Thompson, 1996), and
higher levels of faith in the democratic process (e.g. Fishkin, 1995; Neblo et al., 2018).
However, critics of the practice of deliberation have charged that traditionally politically
underrepresented groups may not be heard or treated equally in group deliberative
processes (e.g. Mansbridge, 1983; Sanders, 1997; Sunstein, 2006; Young, 2001). Most of
these authors have focused squarely on how participation in deliberation sessions—in
terms of attendance, quality of engagement, and satisfaction with the experience—suffers
from similar biases seen in other types of political participation (Verba et al., 1997).
Indeed, some recent empirical work has underscored these concerns: Karpowitz,
Mendelberg, and Shaker find significant gender differences in deliberative participation
in a series of small-group experiments (e.g. Karpowitz et al., 2012; Karpowitz and
Mendelberg, 2014); gender composition and group decision rules interact to facilitate
(or diminish) female participation.
While these results are focused on one characteristic of participants (gender), they
motivate a broader discussion of whether underrepresented groups differ in their in-deliberation
participation. We deeply share these concerns, but argue that such dysfunctions need
not be an inherent part of deliberative practice. Rather, we stress that interactions
(within small groups) are heavily conditioned by the design features that create the context
for the interaction. Online platforms can be designed in ways that filter out cues regarding
race, age, or gender, and that prevent interruptions while others are speaking. Evidence to
this effect can be found in the Neblo et al. (2018) findings of broad and equal participation
in and satisfaction with carefully designed online town halls.
Leveraging 3 years of data gathered via the Common Ground for Action (CGA) online
deliberation system (https://www.nifi.org/en/common-ground-action), we test whether
the amount and nature of participation in online deliberative sessions varies across
demographic groups, and whether these outcomes themselves depend on session-level
characteristics such as the presence of a female group discussion moderator. Our study stands
apart from many small-group studies not only in its (online) mode but also in its size and
features: we analyze over 1600 participants nested in 275 unique deliberative sessions,
each covering one of 20 topics; we include “moderate” facilitators and employ a decision
rule that incorporates aspects of both unanimous rule and majority rule.
Although this study lacks explicit random assignment and experimental control, causal
interpretations of our estimates may still be justified based on appropriate assumptions.
We rely on naturally occurring variation in demographic composition of sessions, and
participants lacked the ability to self-select into sessions based on these characteristics.
That said, sessions also drew samples from different populations, and featured different
topics that were likely to be differentially attractive to participants based on gender, race/
ethnicity, or age (more generally, even things like the day of the week or the time of a
session may have influenced demographic composition). Therefore, our analytic approach
is to rely on random effects models with session-level intercepts, which are modeled with
covariates that include recruitment method. Because these data are observational, causal
inferences rely more heavily on analysis than they would in a randomized controlled trial.
Nevertheless, these data provide the opportunity to focus on behavior in online
deliberation at a scale that provides statistical power (and variation in substantive topic), all
conducted through a carefully designed, state-of-the-art platform.
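Schematically, this corresponds to a random-intercept specification along the following lines; the sketch is illustrative only (the linear, normal-errors form and the generic symbols are simplifying assumptions), with the exact outcomes and covariates detailed in the analyses that follow. For participant $i$ in session $j$,
\[
y_{ij} = \mathbf{x}_{ij}'\boldsymbol{\beta} + \alpha_j + \varepsilon_{ij},
\qquad
\alpha_j = \mathbf{z}_j'\boldsymbol{\gamma} + u_j,
\qquad
u_j \sim N(0, \sigma_u^2),\quad \varepsilon_{ij} \sim N(0, \sigma_\varepsilon^2),
\]
where $y_{ij}$ is a participation outcome, $\mathbf{x}_{ij}$ collects participant-level demographic characteristics, $\alpha_j$ is the session-level random intercept, and $\mathbf{z}_j$ collects session-level covariates such as recruitment method. Modeling the session intercepts with these session-level covariates is what allows the specification to absorb differences in how sessions were recruited and composed.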
