Evaluation Journal of Australasia

Publisher:
Sage Publications, Inc.
Publication date:
2021-09-06
ISSN:
1035-719X

Latest documents

  • Evaluator competencies and self-assessment: Lessons from the Learn Evaluation Assessment Portal
  • Framing the future: Exploring the values that underpin the Australian Evaluation Society’s Evaluators’ Professional Learning Competencies

    In the last two decades, evaluator competency frameworks have become ubiquitous. Many have been developed by reviewing the literature and engaging in research processes that ask evaluation practitioners what they do, in order to determine what evaluators should know and be able to do. Because of this practice-focused approach, the underlying philosophical and theoretical assumptions of competencies are rarely questioned. In this article, the authors explore this territory by categorising competencies using two taxonomies: Mertens and Wilson’s 2018 Program evaluation theory and practice provides a philosophical framework, and Schwandt and Gates’ 2021 Evaluating and valuing in social research provides a theoretical framework. The authors apply these frameworks to the three evaluation-specific domains of the Australian Evaluation Society (AES) Evaluators’ Professional Learning Competency Framework as refined for the Learn Evaluation Assessment Portal in 2020. Findings of this exploratory study suggest that the AES’ conception of evaluator competencies aligns primarily with Mertens and Wilson’s pragmatist/neopragmatist paradigm and Schwandt and Gates’ expanded frame. The authors discuss the implications of these results and propose ideas for further refinement and testing of this research approach.

  • Making the Learn Evaluation Assessment Portal: Developing a reciprocal tool for learning and research

    Evaluation is a practice with increasing global demand. However, who wants to learn evaluation, and what their learning needs are in relation to evaluator competencies, remains relatively unexplored. This article describes a research effort designed to address those needs using an evaluator competency self-assessment. Despite their validity challenges, self-assessments can yield valuable information for a variety of audiences. Respondents and their organisations can use this method to understand evaluation capacity strengths and needs and to create professional development plans. For those conducting evaluation capacity building in or for organisations, and those developing formal evaluation education programs, self-assessment results can provide a road map for planning, development and delivery. This article describes the process of developing and piloting the Learn Evaluation Assessment Portal, an evaluator competency self-assessment tool, with the Australian Evaluation Society. The article reflects on the lessons learnt from the development and testing of the tool and will be valuable to a range of stakeholders, from practitioners to commissioners, and in particular to Voluntary Organisations of Professional Evaluators that are committed to developing evaluation capacity and are working towards professionalisation of the field.

  • Moving from guideline to measure to findings: The Australian Evaluation Society and the Learn Evaluation Assessment Portal

    In 2013, the Australasian Evaluation Society (AES) launched the Evaluators’ Professional Learning Competency Framework. In 2020, the AES (now the Australian Evaluation Society) partnered with learnevaluation.org to develop an online self-assessment for the AES community. The AES Competencies were explicitly designed to support professional learning, not to serve as an assessment instrument, so work was needed to recast the competencies in a measurable format. The authors report the history of the competencies, the developments made before and after pilot testing them in the online Learn Evaluation Assessment Portal (LEAP), and the findings from data collected over two and a half years of using the LEAP. The AES Competency Framework remains one of the few competency sets with explicit knowledge and skill statements related to the logic of evaluation.

  • Evaluative attitude in action: Using the Learn Evaluation Assessment Portal in graduate education

    Evaluation students need a sense of how the expertise they bring and the knowledge and skills they are learning relate to the tasks of evaluation and to their own ongoing professional development. Within the Master of Evaluation and the Graduate Certificate in Evaluation at The University of Melbourne’s Assessment and Evaluation Research Centre (formerly the Centre for Program Evaluation), students use an evaluator competency self-assessment to inform their coursework and practice across three subjects. In this paper, the authors – the coordinator of the subjects and several students – share practices and examples of how the self-assessment added value to their learning. Discussion of the logistics, ethical requirements and appropriate use, given the limitations of self-assessments, provides ideas for integrating self-assessment into the design and delivery of evaluation capacity building or formal courses to increase ownership of learning for adults who want to expand their evaluation expertise.

  • Using formal theory in evaluation – What is it and how is it used?

    Little information is available about how formal social science theory has been used in outcome evaluations, and this gap persists amid debates about the role theory should play in evaluation. To help address it, we need to understand how formal theory varies in real-world practice. This research applied systematic qualitative coding to identify and classify patterns of theory use, followed by qualitative comparative analysis. The terms describing theory are defined, and formal theory is differentiated from other kinds of theory, such as program theory. The study found instances of borrowing and repurposing theoretical material in a cohesive sample of 17 outcome evaluation reports covering programs that address complex social problems, drawn from cross-cultural contexts. Theory was mostly used for post hoc explanation and less often used upfront in the framing, design and conduct of evaluation. The concrete approach described in the literature – applying formal theory to measure and explain causal pathways to behavioural outcomes – was found, as was layered use of a range of theoretical material. This article offers insights that may help evaluators undertake more sophisticated and reflective approaches to using theory. Thinking about the ways we use theory could form part of the toolkit of techniques available to evaluation practice.

  • The institutionalisation of evaluation in Asia-Pacific
  • Evaluator Perspective: A conversation with the Managing Director of the Australian Centre for Evaluation (ACE) – Eleanor Williams
  • How many interviews or focus groups are enough?

    When it comes to qualitative evaluation data, is more always better, and what determines value for money? This article proposes two steps for evaluators, and those responsible for procuring evaluations, to answer the question ‘how many interviews or focus groups are enough?’ Step 1 is to consider the nature of the evaluation question to determine the sampling goal, the importance of thematic saturation, and an appropriate sampling strategy. The article provides guidance on how many interviews and focus groups are needed to achieve different levels of thematic saturation, based on empirical tests in the published literature. Step 2 is to check the skills of the evaluator, including whether they integrate behavioural science into their discussion guide and analysis to mitigate bias. This will determine – regardless of the number of interviews and focus groups – whether they can generate useful insights for decision-making from the data. The article concludes that it is not sufficient to assess an evaluation plan’s value for money by sample size alone; consideration must also be given to the characteristics of the evaluation design and the skills of the evaluators undertaking the project.

  • Engaging evaluation as a catalyst: Artists and arts managers voices in reshaping the Australian Performing Arts Market

    The Australian Performing Arts Market (APAM) is the Australia Council for the Arts’ key strategic engagement for developing national and international touring opportunities for Australian contemporary performing arts. The Market has been delivered since 1994 in four Australian cities, hosted by differing cultural venues. Over a sequence of three iterations from 2014 to 2018, the APAM presenting venue, Brisbane Powerhouse, made a commitment to reshape the delivery of the Market. Positioning the Market as a cultural intermediary, this discussion outlines how the Market engaged with evaluative data to transform from a transactional model to a relational focus by foregrounding the experience of artists, companies and arts managers. The findings show how planned and ongoing stakeholder engagement influenced the focus of the Market, and how significant artist and arts manager voices were in transforming the Market experience away from a focus on buying and selling towards collaboration and exchange.
