What did it all mean?

DOI: 10.1177/1035719X0200200209
Authors: Nan Wehipeihana, Elizabeth Barber, Robert Lake, Libby Kalucy, Susan Goff, Richard Elvins
Published: 01 December 2002
Subject matter: Final Plenary Session
The final conference plenary was designed as a review session to focus on what had been learnt at the conference. Chaired by conference discussant Patricia Rogers, a panel drawn from six AES regions contributed their impressions of the conference and what they saw as the key issues over the three days. This panel session was followed by Patricia's closing address, which appears on pages 30–34.

Richard Elvins (Victoria)
There doesn’t seem to have been a need at this conference to define evaluation. It seems to
be assumed that we all know what it is and that we subscribe to the systems-based view.
We also don’t seem to have spent much time on defining the evaluator, which has been a
prominent issue in the past. It seems to have been assumed that the evaluator is a trained
specialist. Other modes of delivery have not received much attention.
But this has not detracted from the conference; it has simply given it focus and enabled a
series of lively debates on the role of the evaluator, which for me has been the real highlight
of the conference (together with the coverage of participatory evaluation).
The conflicting roles of the evaluator have been variously described, on the one hand,
as data gatherer, provider of independent information and bystander, and on the other, as
agent of change and influencer. I think these typologies are useful illustrations of the
possible roles of the evaluator, located at the poles of a continuum. But this is all – there is
no fundamental truth. The discussion highlighted to me that the role of the evaluator needs
to be matched to the different environments and cultures within which evaluation is
practised.
This all came down to the issue of the political nature of evaluation, and the notions of
evaluator neutrality and independence (the theme for next year’s conference), and their
effect on objectivity. This led into an extensive debate on values in evaluation. My interest
here was in the contention that objectivity was not necessarily desirable, as I had always
assumed it was, because setting objectivity aside allowed the evaluator to bring her own
values into an evaluation – something I must admit I had never considered before.
I remain sceptical about democratic evaluation, as there would be very few cases where
an evaluator could realistically pursue the interests of citizens (and the ‘public good’) as
the primary audience. Despite this, I would have thought that, in the interests of good
evaluation, stakeholders should always be encouraged to talk to one another or, at least,
to have their ideas brokered by the evaluator.
There were three other themes that attracted my attention:
There didn’t appear to be as much as usual on methodology; perhaps the idea was to
cover the ‘how to’ in the workshops. There was, however, a strong emphasis on
multiple methods, which recurred throughout the conference.