Editorial

DOI: https://doi.org/10.1108/17465729200400007
Published: 1 June 2004
Pages: 2-5
Author: Lynne Friedli
Subject: Health & social care
Journal of Mental Health Promotion, Volume 3, Issue 2, June 2004 © Pavilion Publishing (Brighton) Ltd
Current debates about evidence of effectiveness – who
defines what works and what measures are they using?
– are growing both more complex and more passionate.
And there are encouraging signs that colleagues in
mental health promotion are increasingly willing to
enter the fray. This is good news, given that the
traditional position has been to stand shivering on the
sidelines, reluctant to swim in a pool full of RCTs,
systematic reviews and other methodological sharks.
In practice, mental health promotion is far from
alone in finding traditional approaches to assessing
and synthesising evidence of limited value (Kelly et al,
2004). A plethora of studies demonstrates that
systematic reviews of effectiveness strip interventions of
meaningful context; public health interventions in
particular tend to be complex, programmatic and
context dependent. As Rychetnik et al (2002) have
argued, the evidence for their effectiveness must be
sufficiently comprehensive to encompass that
complexity. We need to be able to distinguish between
failure of the intervention concept or theory and bad
delivery (failure of implementation). Study design alone
is an inadequate marker of evidence quality in public
health (Rychetnik et al, 2002), and quality is not
confined to experimental methods. Reeves et al (2001),
for example, found that observational studies of high
quality yielded similar evidence to that produced by
RCTs. Across the board, what constitutes quality
evidence is ever more contested, while the calls for
evidence-based practice (EBP) grow more strident.
In any event, merely disseminating the evidence
base appears to have little or no impact on practice
(NHS Centre for Reviews, 1999; Nutley et al, 2002).
Non-adoption of evidence-based practice is prevalent
among clinicians, health promoters and, of course,
policy makers, despite their expert knowledge; they
are much more likely to be implementing what David
Marks has called OBP – opinion-based practice (Marks,
2002; Majumdar et al, 2002). Evidence-based policy is
hard to find because evidence is only one small element
among the range of factors influencing the political
process, both locally and nationally (McIntyre et al,
2001). Current exhortations to the public to take more
exercise, in an attempt to reduce obesity levels, are a
case in point. They are based on two assumptions well
known to be false: that informing people about risks
and remedies will produce behaviour change, and that
individual decision-making is a key determinant of
public health.
Essentially there are two problems: the nature of the
evidence base itself and getting evidence into practice.
In a period of unprecedented activity and opportunity
for mental health promotion across the UK (Friedli,
2004), it is of crucial importance that investment in
mental health promotion programmes is matched by
new thinking about both evidence and practice. The
English Health Development Agency has been at the
forefront of attempts to develop methodologies for
assembling and assessing different types of evidence in a
systematic way. In particular, they are aiming to address
the problem of combining evidence from systematic
reviews with evidence from narrative and other kinds of
review, from different research traditions and from
practitioner knowledge and expertise (Dixon-Woods et
al, 2004; Kelly et al, 2004). In practice, achieving this
goal has proved elusive. Systematic reviews continue to
constitute the mainstay of HDA evidence briefings –
one on mental health promotion is expected to be
published shortly – notwithstanding the well-rehearsed
limitations of using systematic reviews to draw
conclusions about the effectiveness of mental health
promotion interventions (Health Development
Agency/mentality, in press). As Kelly and colleagues
observe, reviews provide scientifically plausible
frameworks for intervention, rather than guides to
detailed action at a local level (Kelly et al, 2004).
More broadly, the HDA is attempting to establish a
systematic approach to the collection and review of
effective practice (www.hda.nhs.uk). The aim is to
develop a national standard for reviewing practice that
is seldom reported in scientific journals and unlikely to
make its way into current ‘evidence of effectiveness’
guidelines. This is a welcome and significant
development. Current collections of ‘good practice’ that
attempt to draw lessons from practitioners are generally
deeply flawed because they are rarely collated in any
Lynne Friedli
Editor
