OXFORD BULLETIN OF ECONOMICS AND STATISTICS, 80, 5 (2018) 0305–9049
doi: 10.1111/obes.12228
Published: 01 October 2018
© 2018 The Department of Economics, University of Oxford and John Wiley & Sons Ltd.
Sensitive Survey Questions: Measuring Attitudes
Regarding Female Genital Cutting Through a List
Experiment*
Elisabetta De Cao† and Clemens Lutz
Centre for Health Service Economics & Organisation, Gibson Building, 1st floor, Radcliffe
Observatory Quarter, Woodstock Road, OX2 6GG, Oxford, UK
(e-mail: elisabetta.decao@phc.ox.ac.uk)
Department of Innovation Management & Strategy, University of Groningen, Nettelbosje 2,
9747 AE, Groningen, The Netherlands (e-mail: c.h.m.lutz@rug.nl)
Abstract
Potential bias in survey responses is higher when sensitive outcomes are measured. This
study analyses attitudes towards female genital cutting (FGC) in Ethiopia. A list experiment
is designed to elicit truthful answers about FGC support, and these outcomes are compared
with the answers given to a direct question. Our results confirm that the average bias is
substantial: answers to direct questions underestimate FGC support by about 10
percentage points. Moreover, our results provide suggestive but not statistically significant
evidence that this bias is more pronounced among uneducated women and women targeted
by an NGO intervention (not randomly assigned).
I. Introduction
Eliciting honest answers in surveys is challenging, especially when studying sensitive
issues. If asked directly, individuals may falsify or refuse to answer certain questions. The
dependent sensitive variable, therefore, might be affected by a non-random measurement
error that leads to biased results. Self-reported health status and outcomes have been
shown to be affected by underreporting when, for example, they concern sensitive
topics related to sexual and reproductive health (Schroder, Carey and Vanable, 2003; Glynn
et al., 2011). When asking questions about a sensitive issue, different survey methods exist
for coping with the problem of bias in self-reported answers.
JEL Classification numbers: I15; O10; C13; C83.
*For supervising the data collection, we thank IFPRI, Getaw Tadesse and Samson Jemaneh. For comments, we
thank Robert Lensink, Rob Alessie, Carol Propper, Franco Peracchi, Aljar Meester, Bryn Rosenfeld, Viola Angelini,
Andreas Rauch, Petros Milionis, Mariko Klasing as well as seminar participants at the 2015 RES Women’s Committee
Mentoring Meetings, the 2015 RES conference, the 2015 CSAE conference, the 2013 IFP conference in Addis Ababa,
the PEG seminar series at the University of Groningen, and the University of Wageningen. All errors
are our own. This research has been financed by The Netherlands Organisation for Scientific Research/Science for
Global Development programmes (NWO/WOTRO), grant number: W 07.72.2011.115.
New qualitative solutions have been proposed by Blattman et al. (2016) to study the
direction and magnitude of the survey measurement error in the dependent variable when
evaluating interventions implemented in Liberia to reduce violence and crime. Blattman
et al. (2016) use qualitative techniques to validate survey responses in relation to different
behaviours (theft, drug use, homelessness, gambling and expenditures) and find differing
degrees of underreporting depending on the sensitive behaviour being considered.1
Quantitative survey methods include the randomized response technique (RRT)2 and
the endorsement experiment.3 A third method, used in this paper, is the list experiment.
The concept of a list experiment, also referred to as an item count or unmatched count
technique, is that, if a sensitive question is asked indirectly, the respondent may reveal a
truthful response. The method presents respondents with a list of items and asks them to
indicate the total number of items with which they agree. The respondents are randomly
divided into either a control or a treatment group. The control group respondents receive a
list of non-sensitive items. The treatment group respondents receive the same list of non-
sensitive items plus one sensitive item. The proportion of the respondents who agree with
the sensitive item is estimated by computing the difference in mean responses between
the two groups. This technique has mainly been used in political science to understand
voters’ attitudes and racial attitudes (e.g. Kuklinski, Cobb and Gilens, 1997; Redlawsk,
Tolbert and Franko, 2010). It has also been utilized to study sexually risky behaviour
(LaBrie and Earleywine, 2000) and abortion (Moseson et al., 2015). More recently, it has
also been applied in economics to study sensitive issues. In micro-finance, for example,
Karlan and Zinman (2012) used a list experiment to understand how people spend their
loan proceeds, showing that direct elicitation underreports the non-enterprise uses of loan
proceeds. In reproductive health, list experiments have been developed to obtain truthful
answers on topics such as condom use, number of sexual partners, unfaithfulness, and
attitude changes with respect to the social acceptability of these behaviours (Chong et al.,
2013; Jamison, Karlan and Raffler, 2013). De Cao et al. (2017) employ a series of list
experiments to study if community conversations contribute to a change in social values,
beliefs, and attitudes regarding harmful traditional practices against women in Ethiopia.
A paper by Coffman, Coffman and Ericson (2013) estimates the magnitude of anti-gay
sentiment showing that it is generally underestimated when a list experiment is used to
elicit truthful answers.
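The difference-in-means estimator behind the technique can be illustrated with a short simulation. The sketch below uses purely hypothetical parameter values (four non-sensitive items, 30% true support for the sensitive item), not the paper's instrument or data:

```python
import random

random.seed(42)

# Hypothetical list-experiment parameters (illustrative only)
K_ITEMS = 4            # number of non-sensitive items on both lists
P_CONTROL_ITEM = 0.5   # endorsement probability of each non-sensitive item
P_SENSITIVE = 0.30     # true support for the sensitive item, to be recovered
N = 10_000             # total respondents

control_counts, treatment_counts = [], []
for i in range(N):
    # Count of endorsed non-sensitive items for this respondent
    base = sum(random.random() < P_CONTROL_ITEM for _ in range(K_ITEMS))
    if i % 2 == 0:  # alternating assignment stands in for randomization
        control_counts.append(base)
    else:
        # Treatment list: the same non-sensitive items plus the sensitive one
        treatment_counts.append(base + (random.random() < P_SENSITIVE))

# Difference-in-means estimator of support for the sensitive item
estimate = (sum(treatment_counts) / len(treatment_counts)
            - sum(control_counts) / len(control_counts))
print(f"estimated support: {estimate:.2f}")  # should be close to 0.30
```

Because only item counts are reported, no individual respondent's answer to the sensitive item is revealed, yet the group-level difference identifies aggregate support.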
Surprisingly, the aforementioned economic literature considers a difference-in-means
estimator to analyse the list experiment (see, e.g. Karlan and Zinman, 2012; Chong et al.,
1Blattman et al. (2016) randomly selected a subsample of the respondents to validate survey responses. The goal
was for the validators to determine whether the respondent had engaged in any of the measured behaviours by meeting
with the individual a few times to develop a rapport and gain trust. Then, by engaging in casual
conversation, the validators raised indirect questions (by telling stories or scenarios) about the behaviours.
2The RRT consists of asking the respondent to use a randomization device (dice, coin flip, etc.) whose outcome is
unknown to the interviewer. By introducing random noise, the RRT guarantees anonymity, and the respondent
may be more willing to reveal the truth. See Warner (1965) for further details.
3In an endorsement experiment, respondents are randomly assigned to a treatment group and asked to express their
opinion towards a policy endorsed by a specific actor whose support level needs to be measured. These responses
are then compared with those from a control group of respondents who answered an identical question without the
endorsement. See Bullock, Imai and Shapiro (2011) for further details.
