Editorial: A tale of one programme and two evaluations

Nick Axford¹ and Michael Little²
¹Dartington Social Research Unit, UK
²Dartington Social Research Unit, UK, and Chapin Hall Center for Children, University of Chicago, US

Journal of Children's Services, Volume 2, Issue 2, August 2007, pages 2-3
Published: 01 August 2007
DOI: https://doi.org/10.1108/17466660200700011
Subject matter: Education, Health & social care, Sociology
© Pavilion Journals (Brighton) Ltd
This is a tale of one programme and two evaluations.
One evaluation left the scientists struggling to say
anything concrete about impact. Certain parts of the
press seized on this and complained that public
money had been squandered. They wanted the
programme to be closed down. Government
politicians were left flummoxed and clutching at
straws. They had trumpeted the programme’s
success, but now they didn't know if it worked. So
they had to maintain either that it did and the
scientists were wrong, or that it would work in the
future even though it hadn't yet. The other (lesser-known)
evaluation produced clear findings about gains for
the children and families receiving the programme.
These results emboldened policy-makers and
prompted them to roll it out more widely, so that
more people would benefit. These stories are told in
two recent articles (Rutter, 2006; Hutchings et al,
2007). Why the disparity?
Sure Start was established in the UK in 1997 to
reduce child poverty and social exclusion. £300
million was invested in delivery in the first three
years. There was no prescription about what to
provide, with flexibility for individual local service
providers encouraged. Targets were specified but
methods of meeting them were discretionary. There
was no curriculum; indeed, Government advised that
the interventions should not be ‘manualised’.
Further, and against the advice of most of the
research advisors, it was decided not to allow
evaluation by random allocation. The resulting
evaluation design in the national evaluation was
widely regarded as being the best available given the
constraints and as having been executed with
scrupulous care. Data were collected from a range of
sources, including 16,502 families in Sure Start local
programme (SSLP) areas and 2,610 families in
comparison ‘Sure Start to be’ areas.
The study has now generated numerous
important and helpful findings on a range of subjects
concerning children’s services, some of which are
referred to in Teresa Smith’s article in this edition1.
However, the eminent child and adolescent psychiatrist
Professor Sir Michael Rutter describes the effects
identified after three years as ‘meagre’ and
‘disappointingly slight’. The evaluators commented
that, ‘both beneficial and adverse effects of SSLPs on
children were detected, though these were restricted
almost entirely to 36-month olds and varied across
subpopulations. Once again these effects were limited’
(NESS, 2005). Scientists are left scratching their heads.
Is this result because three years is not long enough for
benefits to show? Or is it because of the research
design? Have the methods employed missed Sure
Start’s true impact or, perhaps, are they incapable of
detecting impact? The evaluators sound a cautious
note, pointing out that these are early results based on
cross-sectional not longitudinal data, and that the
SSLPs studied were ‘perhaps not even entirely
“bedded down” and therefore not fully developed’
(NESS, 2005). The jury is out but Rutter’s view is that
‘not too much hope should be placed on the possibility
of definitive results later’ (Rutter, 2006: 138).
Moreover, the uniqueness of each individual project
means that it is arguably impossible to say whether Sure Start
‘works’. As Rutter puts it, ‘there is no such thing as
Sure Start’ (Rutter, 2006: 138). On the positive side,
the variation allows some valuable insight into the
features of services that foster success (see the article
by Anning and colleagues in the next edition).
Now to the second evaluation tale, from Wales. As
Sure Start programmes became established there,
managers sought the advice of a local researcher and
practitioner with a track record of work on parenting
and conduct disorder (see the article by Judy Hutchings
and her colleagues in this edition). She recommended
