
Published date: 01 August 2015
©2014 The Department of Economics, University of Oxford and John Wiley & Sons Ltd.
doi: 10.1111/obes.12076
Using Compulsory Mobility to Identify School Quality
and Peer Effects*
Francis Kramarz, Stephen Machin‡ and Amine Ouazad§
CREST, CEPR, and IZA, Boulevard Gabriel Péri, 92240 Malakoff Cedex, France
‡University College London, Centre for Economic Performance, London School of Economics, and CEPR, Gower Street, London WC1E 6BT, UK
§INSEAD, Boulevard de Constance, 77300 Fontainebleau, France
Education production functions that feature school and student fixed effects are identified using students’ school mobility. However, student mobility is driven by factors like parents’ labour market shocks and divorce. Movers experience large achievement drops, are more often minority and free-meal students, and sort endogenously into peer groups and school types. We exploit an English institutional feature whereby some students must change schools between grades 2 and 3. We find no evidence of endogenous sorting of such compulsory movers across peer groups or school types. Non-compulsory movers bias school quality estimates downward by as much as 20% of a standard deviation.
I. Introduction
Policymakers, parents and researchers alike have shown considerable interest in the measurement of school quality – that is, the unbiased measurement of a school’s causal effect on student achievement.1 Causal school quality estimates can help in the design of accountability systems,2 funding mechanisms, and can also help parents choose schools.3
*We thank audiences of the COST meeting, the Paris School of Economics, CREST, the 2006 IZA prize in labor economics conference, Boston College, Cornell University, the US National Academy of Sciences and Sciences Po Paris. John Abowd, Sandra Black, Eric Maurin, Thomas Piketty and Jean-Marc Robin offered helpful suggestions for improving the manuscript. The authors acknowledge financial and computing support from INSEAD, the Centre for Economic Performance and CREST-INSEE.
JEL Classification numbers: I21, J00
1 The literature dates back at least to Coleman (1966), Summers and Wolfe (1977), and Card and Krueger (1992). A recent contribution is Dustmann, Puhani and Schönberg (2012).
2 Under the US No Child Left Behind Act, states are required to publish adequate yearly progress (AYP) data for schools by racial and gender group as well as for ‘special-needs’ groups. This is a rudimentary way of controlling for student observables. A failure to make AYP can lead to school closure, so it is implicitly assumed that AYP measures partly reflect school quality.
3 The English Department for Education publishes league tables with students’ average progress by school that take account of observable contextual factors, so-called contextual value added. This may not be free of potential time-varying confounders such as family events (Gibbons, 2007) and may not necessarily reflect school quality.
The main challenge faced in the study of this question is to disentangle what part of schools’ average test scores is due to the quality of the school from what is due to either time-varying or non-time-varying student characteristics. Controlling for observable non-time-varying student characteristics is straightforward in a least-squares regression, and when such characteristics are unobservable, control is usually via the inclusion of student fixed effects.
It is noteworthy that identifying education production functions that feature school and
student fixed effects relies on student mobility across schools. This is clearly illustrated
by the inability of a single cross section of data to jointly estimate student and school
fixed effects,4 and by the impossibility – absent the mobility of students across schools in
longitudinal data – of separately identifying student and school fixed effects.
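To make the identification problem concrete, consider a generic two-way fixed-effects specification (the notation here is ours, added for illustration; it is not the paper's exact model):

```latex
y_{it} = \theta_i + \psi_{s(i,t)} + x_{it}'\beta + \varepsilon_{it}
```

where $\theta_i$ is a student effect, $\psi_{s(i,t)}$ the effect of the school attended by student $i$ at time $t$, and $x_{it}$ observable controls. In a single cross section each student appears once, so $\theta_i$ and $\psi_{s(i,1)}$ enter only through the sum $\theta_i + \psi_{s(i,1)}$ and cannot be separated. With panel data, first-differencing a mover's two observations yields $\Delta y_i = \psi_{s'} - \psi_{s} + \Delta x_i'\beta + \Delta\varepsilon_i$, so school-effect contrasts are identified only between schools connected by movers.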
Hence student mobility across schools is the main source of identification in regressions
involving school quality and student fixed effects. Therefore, estimates of school effects
can be biased if identification relies on the selected subsample of pupils who move because
of family events that affect both their educational achievement and their mobility. In this
paper, we consider a specific education production function, but this point also applies to a range of different education production functions that control for student and school fixed effects.5
As it turns out, there is evidence that student mobility across schools may be correlated
with other events that can affect test scores. Research – including Gibbons (2007) in the
United Kingdom and Burkam, Lee and Dwyer (2009) in the United States – indicates that
mobility is systematically associated with family events such as unemployment, family
break-ups and labour market opportunities. There is also evidence that these family events
have a negative impact on test scores (James-Burdumy, 2005; Stevens and Schaller, 2011).
This is likely to cause bias in the estimation of school effects: if, for instance, students who
experience negative family events move to worse (resp., better) schools, then the difference
between good and bad schools is likely to be overestimated (resp., underestimated). Hence
there will be an endogenous mobility bias if one cannot control for these time-varying
shocks. Regrettably, large-scale administrative data with student test scores does not usually include such time-varying observables, which have the potential to confound the estimates
of school quality.6
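The mechanism can be illustrated with a small simulation (the numbers are entirely hypothetical and are not estimates from the paper): movers from a better to a worse school are also hit by a negative family shock at the time of the move, and a student-fixed-effects (first-difference) regression loads that shock onto the school-quality contrast.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper)
psi_A, psi_B = 0.5, 0.0   # true school quality effects
shock = -1.0              # family-event shock hitting movers at the move
noise = 0.1               # test-score noise

dy, dB = [], []           # per-student score change, change in school-B dummy

for _ in range(400):      # stayers (at A or B): no school change, no shock
    dy.append(noise * (rng.normal() - rng.normal()))
    dB.append(0.0)
for _ in range(50):       # movers A -> B, shocked at the move
    dy.append((psi_B - psi_A) + shock + noise * (rng.normal() - rng.normal()))
    dB.append(1.0)

# First-differencing sweeps out the student fixed effect theta_i, so the
# coefficient on dB is the estimate of psi_B - psi_A -- here contaminated
# by the family shock, which only movers experience.
X = np.column_stack([np.ones(len(dB)), np.array(dB)])
beta = np.linalg.lstsq(X, np.array(dy), rcond=None)[0]
print(f"true psi_B - psi_A = {psi_B - psi_A:+.2f}")
print(f"estimated          = {beta[1]:+.2f}")  # close to -1.5, not -0.5
```

With these numbers the estimator attributes the full gap of roughly −1.5 to the school contrast, even though the true quality difference is only −0.5. Compulsory movers, who change schools for reasons unrelated to such shocks, break exactly this confound.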
These data limitations pose a significant and difficult challenge to researchers, and a warning to both parents and policymakers. To address this challenge in the context of this paper (i.e. primary school students in England), we note that children who start school at age 5 in an infant-only school must change schools between grades 2 and 3.
4 Unless one assumes that the effects are uncorrelated, in which case student effects are random and the model can be estimated as a hierarchical random effects model. Unfortunately, regression estimates suggest that the correlation between student fixed effects and school effects is significantly non-zero. Also, a random effects model does not allow for correlation between observable covariates and the effects.
5 Education production functions either regress student test scores (in levels) on school quality measures, student effects, peer effects and past test scores (Todd and Wolpin, 2003) or regress student progress on school quality measures, peer effects, student effects and other controls (Rivkin, Hanushek and Kain, 2005). Random-effects estimation of education production functions does not rely on student mobility because such estimates can rely on a cross section. However, random-effects estimation depends on orthogonality conditions, which are rejected in English education data (and most likely in other settings as well).
6 This is the case not only in England but also in data from North Carolina (Rothstein, 2010), New York (Rockoff, 2004) and a number of other school districts in the United States. In Europe, Danish and Swedish data include a wealth of information on parents (unemployment, health, family status), but lack test scores before age 16.
