Student feedback: a learning and teaching performance indicator

Published: 7 September 2015
Received: 9 October 2013; Revised: 3 June 2014; Accepted: 7 April 2015
Pages: 410-428
DOI: https://doi.org/10.1108/QAE-10-2013-0042
Authors: Shelley Kinash, Vishen Naidu, Diana Knight, Madelaine-Marie Judd, Chenicheri Sid Nair, Sara Booth, Julie Fleming, Elizabeth Santhanam, Beatrice Tucker, Marian Tulloch
Subject matter: Education; Curriculum, instruction & assessment; Educational evaluation/assessment
(Author affiliations can be found at the end of the article)
Abstract
Purpose – The paper aims to disseminate solutions to common problems in student evaluation
processes. It proposes that student evaluation can be applied to quality assurance and improving
learning and teaching. The paper presents solutions in the areas of: presenting outcomes as
performance indicators, constructing appropriate surveys, improving response rates, reporting student
feedback to students and student engagement as a feature of university quality assurance.
Design/methodology/approach – The research approach of this paper is comparative case study,
allowing in-depth exploration of multiple perspectives and practices at seven Australian universities.
Process and outcome data were rigorously collected, analysed, compared and contrasted.
Findings – The paper provides empirical evidence for student evaluation as an instrument of learning
and teaching data analysis for quality improvement. It suggests that collecting data about student
engagement and the student experience will yield more useful data about student learning.
Furthermore, ndings indicate that students benet from more authentic inclusion in the evaluation
process and outcomes.
Research limitations/implications – Because of the chosen research approach, the research results may lack generalisability. Therefore, researchers are encouraged to test the proposed propositions further and to apply them in their own university contexts.
Practical implications – The paper includes recommendations at the institution- and sector-wide
levels to effectively use student evaluation as a university performance indicator and as a tool of change.
Originality/value – This paper fulfils an identified need to examine student evaluation processes
across institutions and focuses on the role of student evaluation in quality assurance.
Keywords Surveys, Quality assurance, Performance indicators, Learning and teaching,
Student evaluation of teaching, Student feedback
Paper type Research paper
Introduction
Quantiable performance indicators are important at university. They are explicit
descriptions of evidence against which quality is measured. In the contemporary
university, performance indicators are a key factor and have far-reaching impact
through global rankings of higher education institutions (Breakwell and Tytherleigh, 2010; Guthrie and Neumann, 2007; Kettunen, 2010). Universities increasingly operate as
businesses with a focus on marketing and commodities. The emphasis is on bottom
lines, counting graduates and measuring input and output. As university performance
indicators were being articulated, research output was readily amenable to
quantication (Hansson, 2010). For example, academics report number of publications,
impact ratings and grant income (Low Hui et al., 2013). Beyond research output, higher
education institutions use performance indicators to determine, measure, report,
evaluate and compare contribution to student learning, engagement and the overall
student experience (Crosling et al., 2009; Kerr and Kulski, 2009; Nair et al., 2010; Robinson, 2004; Sultan and Wong, 2010). Key performance indicators of such factors as student entry, exit and transition, social equity, efficiency and graduate employability measure and report on the academic functions of universities (Breakwell and Tytherleigh, 2010; Guthrie and Neumann, 2007).
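To make the quantification concrete, the following minimal Python sketch (with hypothetical field names and figures, not drawn from any institution in this study) shows how raw institutional counts are typically condensed into percentage indicators of the kind described above:

# Hypothetical sketch: condensing raw institutional counts into
# quantifiable performance indicators. All figures are illustrative.
records = {
    "commencing_students": 5200,   # student entry
    "retained_students": 4680,     # continued beyond first year
    "completing_students": 4100,   # student exit (graduates)
    "graduates_surveyed": 3900,
    "graduates_employed": 3276,    # in work within months of graduating
}

def rate(numerator: int, denominator: int) -> float:
    """Express a count pair as a percentage, rounded for reporting."""
    return round(100.0 * numerator / denominator, 1)

indicators = {
    "retention_rate_pct": rate(records["retained_students"], records["commencing_students"]),
    "completion_rate_pct": rate(records["completing_students"], records["commencing_students"]),
    "graduate_employability_pct": rate(records["graduates_employed"], records["graduates_surveyed"]),
}

for name, value in indicators.items():
    print(f"{name}: {value}")

The sketch's point is simply that each indicator reduces a complex academic function to a single comparable number, which is precisely what makes such measures attractive for rankings and problematic as the sole account of quality.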
One of the systems that universities use internally to gather data about learning and
teaching is student evaluation of courses and teaching (SECT). SECT is a process
whereby universities administer surveys to students for feedback on the learning and teaching environment, covering such matters as whether their expectations were met and whether assessment was perceived as fair and timely (Kinash et al., 2013; Shah and Nair, 2012). Universities gather the evaluation feedback and data for quality assurance and improvement of learning and teaching (Chung Sea Law, 2010; Nair et al., 2010; Oliver et al., 2008). Whereas the justification of SECT data is improvement of higher education, universities largely use SECT to evaluate teacher performance (Nair et al., 2010).
not align with its justication. If the purpose of SECT data collection is improvement of
higher education, the SECT design, approach and process of collecting and using SECT
data should provide institutions with “good-quality, actionable data” (Kuh et al., 2011,
p. 15). In spite of the alignment gap, education scholars and university administrators
recognise SECT as a key system with strong, but largely unfulfilled, potential to achieve its purpose of improving higher education (Barber et al., 2009; Shah and Nair, 2012).
Chung Sea Law (2010) articulated a shift from institutional aspects to student aspects of
the quality issues and from accountability-led views to improvement-led views. These
shifts and innovative approaches to SECT, however, remain positioned as in-house
university processes and have not been developed as a rigorous learning and teaching
performance indicator sector-wide (Zakka, 2009). The processes of administering SECT (that is, the survey questions, administration, timing, reporting and action plans) vary widely from institution to institution (e.g. Bond University – Knight et al., 2012; Deakin University – Palmer and Holt, 2012; RMIT – Barber et al., 2009; Curtin University – Tucker, 2013; UWA/Adelaide – Santhanam et al., 2001).
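Before turning to the profiles, a brief illustration may help. The following Python sketch (an assumed data model with invented item names and a 5-point agreement scale, not any university's actual instrument) shows the basic arithmetic by which raw SECT responses become reportable figures such as a response rate and per-item means:

from statistics import mean

# Hypothetical SECT data: each response rates survey items on a 1-5
# agreement scale. Item names and values are invented for illustration.
enrolled = 120
responses = [
    {"expectations_met": 4, "assessment_fair_timely": 5},
    {"expectations_met": 3, "assessment_fair_timely": 4},
    {"expectations_met": 5, "assessment_fair_timely": 4},
]

# Response rate: completed surveys as a share of enrolled students.
response_rate = 100.0 * len(responses) / enrolled

# Per-item mean agreement across all responses.
item_means = {item: mean(r[item] for r in responses) for item in responses[0]}

print(f"response rate: {response_rate:.1f}%")
for item, score in item_means.items():
    print(f"{item}: mean {score:.2f} / 5")

Even this toy example surfaces the design decisions the case studies grapple with: which items to ask, how low a response rate can fall before the means stop being actionable, and how the resulting figures are reported and to whom.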
This paper presents profiles of the SECT processes at seven Australian universities that are successfully using student evaluation to measure student course engagement and learning development. In other words, the inquiry asks the
question: What are the SECT processes of improvement-led approaches for
performance indicators at these universities? The authors comprise a team of
multi-institutional partners on an Australian national project, “Measuring and
improving student course engagement and learning success through online student
evaluation systems”. The project aims to describe and disseminate Australian case
studies of effective systems, approaches and strategies used to measure and
improve student course engagement and learning success through the use of online
student evaluation systems. It asks the questions:
Q1. How can we measure student engagement and learning success using student
evaluation processes?