Combining experimental and inquiry methods in software usability evaluation: the paradigm of LvS educational software
Authors: Nektarios Kostaras, Dimitris Stavrinoudis, Stavroula Sokoli, Michalis Xenos
Journal of Systems and Information Technology, Vol. 12 No. 2, 2010, pp. 120-139
Published: 04 May 2010
DOI: https://doi.org/10.1108/13287261011042921
Publisher: Emerald Group Publishing Limited (ISSN 1328-7265)
Subject: Information & knowledge management
Combining experimental and inquiry methods in software usability evaluation
The paradigm of LvS educational software
Nektarios Kostaras, Dimitris Stavrinoudis, Stavroula Sokoli and Michalis Xenos
Hellenic Open University, Patra, Greece
Abstract
Purpose – The purpose of this paper is to present a methodology combining experimental and inquiry methods used for software usability evaluation. The software product of the LeViS project, funded by the European Commission (Socrates/Lingua II), is used as an evaluation paradigm. The aim of the paper is twofold: to present the results of the usability evaluation using this software as an example and to suggest a number of improvements for the next version of the software tool; and to portray the advantages of combining methods from different evaluation approaches and the experiences from their application.
Design/methodology/approach – The evaluation for this experiment combined different usability
methods, both experimental and inquiry ones. More specifically, the methods employed were the
Thinking Aloud Protocol and the User Logging, which were performed in a usability evaluation
laboratory, as well as the inquiry methods of Interview and Focus Group.
Findings – In this study, usability problems regarding the Learning via Subtitling (LvS) educational software were revealed, as well as issues regarding the use of the Thinking Aloud Protocol method and involving users with a specific profile. The research findings presented in this paper constitute an innovative and effective methodology for software usability evaluation and are useful for laboratories aiming to conduct similar evaluations.
Research limitations/implications – Although this methodology has been successfully applied to over 20 software products, due to practical constraints related to this paper's length, only one software product is used as an example.
Originality/value – Through the evaluation process, apart from discovering certain usability
problems related to the software, there are a number of important conclusions drawn, regarding the
methods used and the methodology followed in software usability evaluation.
Keywords Computer software, User studies, Learning processes, Video
Paper type Research paper
1. Introduction
The development of effective interactive software involves substantial use of
evaluation experiments throughout the process (Sharp et al., 2007; Shneiderman,
1998). Usability evaluation is an increasingly important part of the user interface
design process. However, usability evaluation can be expensive in terms of time and
human resources, and automation is therefore a promising way to augment existing
approaches. According to the ISO 9241-11 standard (ISO9241-11, 1997), usability is the
extent to which a computer system enables users, in a given context of use, to achieve
specified goals effectively and efficiently while promoting feelings of satisfaction.
Usability evaluation consists of methodologies for measuring the usability aspects of a
system’s user interface and identifying specific problems. In other words, it is an important
part of the overall user interface design process, which consists of iterative cycles of
designing, prototyping, and evaluating (Matera et al., 2006; Dix et al., 2004; Nielsen, 1993).
It is a process that entails many activities depending on the methods employed.
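The ISO 9241-11 definition above names three measurable dimensions: effectiveness, efficiency, and satisfaction. As a minimal illustration (not part of the paper's methodology), these are commonly operationalized as task completion rate, time on task, and questionnaire score; the session data below is invented for the sketch.

```python
# Illustrative computation of the three ISO 9241-11 usability dimensions
# from hypothetical session records (all data below is invented).

from statistics import mean

# Each record: (task completed?, seconds on task, satisfaction rating 1-5)
sessions = [
    (True, 95.0, 4),
    (True, 120.5, 3),
    (False, 300.0, 2),
    (True, 88.2, 5),
]

effectiveness = sum(done for done, _, _ in sessions) / len(sessions)  # completion rate
efficiency = mean(secs for _, secs, _ in sessions)                    # mean time on task
satisfaction = mean(score for _, _, score in sessions)                # mean rating

print(f"effectiveness={effectiveness:.2f}, "
      f"efficiency={efficiency:.1f}s, satisfaction={satisfaction:.2f}")
```

In practice, each dimension would be derived per task and per user group rather than pooled, but the decomposition into the three measures is the same.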
A wide range of usability evaluation methods and techniques have been proposed,
and a subset of these is currently in common use. Some of them, such as formal user
testing, can only be applied after the interface design or prototype has been
implemented. Others, such as heuristic evaluation, can be applied in the early stages of
design. Furthermore, usability findings can vary widely when different evaluators
study the same user interface, even if they use the same evaluation technique (Molich
et al., 1999; Jeffries et al., 1991).
This paper presents the usability evaluation of the Learning via Subtitling (LvS)
tool, a subtitling simulator developed within the framework of the LeViS project
funded by the European Union (Socrates/Lingua II). More specifically, it presents
the experimental plan and the results of the evaluation. The main goals are to compare
the results derived from the different evaluation methods, to investigate the possible
correlation and overlapping between them and to propose cases where these different
methods may be combined. Another research goal is to investigate issues regarding the
utilization of the methods employed and the participation of users with a specific
profile. In this experiment, both experimental and inquiry evaluation methods were
used. The work presented in this paper was conducted by the Software Quality
Research Group (SQRG, 2009) of the Hellenic Open University (HOU, 2009).
In the next section, the most popular evaluation methods are reported and
categorized. Section 3 presents the LvS project and the LvS software. In section 4, the
experimental plan of the reported evaluation is analysed, whereas in section 5 the
results of this experiment are demonstrated. Section 6 presents the corrected version
of the LvS software after its usability evaluation. Finally, section 7 summarizes the
conclusions reached as a result of this research.
2. Evaluation methods
Evaluation methods can be generally divided into analytic and empirical ones (Nielsen, 1993). The analytic methods are theoretical models, rules, or standards that simulate users' behaviour. They are mainly used during the requirements analysis phase and usually even before the development of the prototypes of a product. As a result, users' participation is not required in these methods. These methods can be generally characterized as cost effective, since they require a limited number of participants (experts in the fields of usability evaluation and HCI), who can perform the assessment process in a limited amount of time, without the employment of specialized equipment.
On the contrary, the empirical methods depend on the implementation, the evaluation, and the rating of a software prototype or product. This rating requires the participation of a representative sample of the end-users and/or a number of experienced evaluators of the quality of a software product. The empirical methods can be further divided into experimental and inquiry ones.
Experimental methods require the participation of selected end-users in a specially set up controlled environment (e.g. a usability laboratory) and the employment of specialized equipment. The best known of these methods are the following (Tselios et al., 2008; Avouris, 2001):
.Performance measurement: it is a classical method of software evaluation that
provides quantitative measurements of a software product performance when
users execute predefined actions or even complete operations.
.Thinking aloud protocol: this method focuses on the measurement of the effectiveness of a system and the user's satisfaction. According to this method, users are asked to verbalize their thoughts while interacting with the system.
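The performance-measurement idea above (quantitative measures collected while users execute predefined actions) can be sketched with a simple event log; the event names, timestamps, and logging hook below are hypothetical, not part of the LvS tool.

```python
# Minimal sketch of performance measurement via user logging: timestamp
# predefined user actions, then derive per-task durations. Event names
# and data are invented for illustration.

import time

log = []  # (timestamp, event) pairs collected during a session

def record(event, now=None):
    """Append a timestamped event, as a logging hook in the UI might."""
    log.append((now if now is not None else time.monotonic(), event))

# Simulated session: replaying timestamps captured earlier.
record("task1:start", 0.0)
record("task1:done", 42.5)
record("task2:start", 50.0)
record("task2:done", 171.0)

# Pair each task's start/done events into a duration (seconds).
starts = {e.split(":")[0]: t for t, e in log if e.endswith(":start")}
durations = {e.split(":")[0]: t - starts[e.split(":")[0]]
             for t, e in log if e.endswith(":done")}
print(durations)  # {'task1': 42.5, 'task2': 121.0}
```

A real usability laboratory setup would also log finer-grained events (clicks, errors, help requests) and correlate them with the screen and audio recordings.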