Using the right design to get the ‘wrong’ answer? Results of a random assignment evaluation of a volunteer tutoring programme

Published: 12 April 2008
DOI: https://doi.org/10.1108/17466660200800008
Pages: 4-16
Authors: Gary W Ritter¹, Rebecca A Maynard²
Subject matter: Education, Health & social care, Sociology
¹ Endowed Chair in Education Policy, University of Arkansas
² University Trustee Professor of Education and Social Policy, University of Pennsylvania
Journal of Children's Services
Volume 3, Issue 2, September 2008
© Pavilion Journals (Brighton) Ltd
Abstract
Academically focused tutoring programmes for young children have been promoted widely in the US in various forms as promising strategies for improving academic performance, particularly in reading and mathematics. A body of evidence shows the benefits of tutoring provided by certified, paid professionals; however, the evidence is less clear for tutoring programmes staffed by adult volunteers or college students. In this article, we describe a relatively large-scale university-based programme that creates tutoring partnerships between college-aged volunteers and students from surrounding elementary schools. We used a randomised trial to evaluate the effectiveness of this programme for 196 students from 11 elementary schools over one school year, focusing on academic grades and standardised test scores, confidence in academic ability, motivation and school attendance. We discuss the null findings in order to inform the conditions under which student support programmes can be successful.
Key words
RCTs; design; evaluation; volunteer tutoring programme
Introduction
This special edition of the Journal of Children's Services revolves around impact evaluation, specifically randomised controlled trials (RCTs). It is well documented here and elsewhere that children's services practitioners – such as educators and social workers – have historically been reluctant to support such studies of children's programmes. This divide between research and practice has certainly been present in the US, but it is even starker in Europe. For example, when Macdonald and Sheldon (1992) reviewed effectiveness studies in the field of social work, they found only 23 experimental studies from the prior 12 years. A total of 75% of those studies were conducted in the US, while only 10% were from the UK.
In recent years, organisations such as the EPPI-Centre (Evidence for Policy and Practice Information and Coordinating Centre) in the UK, the What Works Clearinghouse in the US and the international Campbell Collaboration have emerged to improve the evidence base for practice in various areas of social policy¹. As researchers in these organisations have worked to synthesise existing research in the relevant areas, we have been disappointed to find that, in areas related to children's services such as education and social work, there is very little good evidence to form the basis of research syntheses. The implication of this trend is that community organisations often must make decisions about interventions on the basis of intuition rather than evidence.
One reason for this lack of evidence in the field of education is the reluctance of school officials to embrace rigorous research designs, such as randomised field trials. There are numerous practical barriers to assigning subsets of students to one type