Making sense of student feedback using text analysis – adapting and expanding a common lexicon

Pages: 60-69
Published: 5 February 2018
DOI: https://doi.org/10.1108/QAE-11-2016-0062
Authors: Elizabeth Santhanam, Bernardine Lynch, Jeffrey Jones
Journal: Quality Assurance in Education, Vol. 26 No. 1, 2018, pp. 60-69 (© Emerald Publishing Limited, ISSN 0968-4883)
Received: 8 November 2017; Revised: 5 December 2017; Accepted: 6 December 2017
Subject matter: Education; Curriculum, instruction & assessment; Educational evaluation/assessment
Elizabeth Santhanam, Bernardine Lynch and Jeffrey Jones
Learning and Teaching Centre,
Australian Catholic University, Melbourne, Australia
Abstract
Purpose – This paper aims to report the findings of a study into the automated text analysis of student feedback comments to assist in investigating a high volume of qualitative information at various levels in an Australian university. It includes the drawbacks and advantages of using selected applications and established lexicons. There has been an emphasis on the analysis of the statistical data collected using student surveys of learning and teaching, while the qualitative comments provided by students are often not systematically scrutinised. Student comments are important, as they provide a level of detail and insight that is imperative to quality assurance practices.
Design/methodology/approach – The paper outlines the process by which the institution researched, developed and implemented the automated analysis of student qualitative comments in surveys of units and teaching.
Findings – The findings indicated that there are great benefits in implementing this automated process, particularly in the analysis of evaluation data for units with large enrolments. The analysis improved efficiency in the interpretation of student comments. However, a degree of human intervention is still required in creating reports that are meaningful and relevant to the context.
Originality/value – This paper is unique in its examination of one institution's journey in developing a process to support academic staff in interpreting and understanding student comments provided in surveys of units and teaching.
Keywords Evaluation, Quality assurance, Surveys, Student feedback, Text analysis, Qualitative data
Paper type Case study
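
To make concrete what lexicon-based analysis of student comments involves, the sketch below tags each comment with broad categories by matching its words against a small keyword lexicon. This is a minimal illustration only: the category names, lexicon entries and sample comments are all hypothetical, as this excerpt does not name the specific applications or lexicons the institution used.

# Hypothetical sketch of lexicon-based tagging of student comments.
# All names here (LEXICON entries, categories, sample comments) are
# invented for illustration; the paper's actual tools and lexicon
# are not given in this excerpt.
import re
from collections import Counter

LEXICON = {
    "lecture": "teaching delivery",
    "explain": "teaching delivery",
    "tutorial": "teaching delivery",
    "assignment": "assessment",
    "exam": "assessment",
    "rubric": "assessment",
    "reading": "resources",
    "lms": "resources",
}

def categorise(comment):
    """Count how often each lexicon category is triggered by a comment."""
    words = re.findall(r"[a-z]+", comment.lower())
    return Counter(LEXICON[w] for w in words if w in LEXICON)

comments = [
    "The lecture slides helped, but the assignment rubric was unclear.",
    "Please explain the exam format earlier in the tutorial.",
]

totals = Counter()
for c in comments:
    totals += categorise(c)

print(totals)
# Counter({'teaching delivery': 3, 'assessment': 3})

In practice, counts like these would be aggregated per unit and paired with representative comment excerpts for reporting, which is where the human intervention noted in the Findings remains necessary.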
Introduction
The collection of student feedback undertaken by universities is now a ubiquitous quality assurance practice. It is generally accepted that despite limitations, surveys of the student experience provide valuable insights relating to learning and teaching practices (Alderman et al., 2012; Harvey, 2011; Kinash et al., 2015; Palmer, 2012). There has been an emphasis on the collation of quantitative data as a performance indicator, impelled by the requirements of government agencies and the increasing marketisation and globalisation of the tertiary education sector, which has generated increased competition for students (Jones, 2003; Palmer, 2012). While it is acknowledged that student feedback forms just one aspect of the information needed to assess learning and teaching quality, it is also recognised that the student evaluation process can enable an active form of institutional accountability (Crews and Curtis, 2010).
Concomitantly, there has been an increasing move towards using the student voice as a means of marketing universities, geared at attracting future students and retaining current students. This has prompted increasing interest in closing the evaluation feedback loop,
