A socio-technical framework for quality assessment of computer information systems

DOI: https://doi.org/10.1108/02635570110394635
Journal: Industrial Management & Data Systems, Vol. 101 No. 5, 2001, pp. 237-251 (ISSN 0263-5577)
Published: 1 July 2001
Authors: Shailendra C. Palvia, Ravi S. Sharma, David W. Conrath
Subject matter: Economics, Information & knowledge management, Management science & operations
Shailendra C. Palvia
College of Management, Long Island University, Brookville, USA
Ravi S. Sharma
Berkom R&D, Deutsche Telekom Asia, Singapore
David W. Conrath
School of Business, McMaster University, Hamilton, Ontario, Canada
Introduction
While the literature has stressed the
importance of conducting formal evaluations
of information systems to improve
information system performance and the
process of system development, actual
practice appears to be far removed from these
norms. In a seminal article, Kumar (1990)
concluded from his empirical evidence that
the major reason for conducting evaluations
was for project closure. Furthermore, over
60 percent of the responding organizations
indicated they conducted evaluations on less
than half of the computer-based systems they
developed. It would seem that information
system evaluation is not a common practice,
and when it is undertaken it is executed for
the wrong reasons.
Part of the problem may be attributed to
widespread misconceptions about the
reasons for conducting systems evaluations
in practice. The post-implementation
evaluation of an information system is often
viewed as an exercise in determining its
implementation success. The concept of
``implementation success'' is itself laden with
confusion. An indication of this confusion is
revealed in the review of measures of
``information systems success'' by DeLone
and McLean (1992). Based on DeLone and
McLean's review of 180 citations from
articles in ``the seven leading
publications in the I/S field'' from 1981 to
1987, there is substantial disagreement on
how ``success'' should be measured. If we do
not have a standard measure of
implementation success, it is difficult to
expect practitioners to undertake such
evaluations.
Undertaking a post-implementation review
of an information system involves the
consideration of several issues. First, there
are compelling reasons to believe that
implementation success is a
multidimensional concept (Bailey and
Pearson, 1983; DeLone and McLean, 1992).
Second, existing methods are ad hoc; there
is no underlying reference theory
(Srinivasan, 1985; Straub, 1989). Practices in
the field, therefore, lack the foundation
of a conceptual framework. Third, the concept
of implementation success varies according
to one's perspective. Thus, the variations
among stakeholder groups (users,
developers, and managers) ought to be
taken into account (McLeod and Bender,
1988; Rai and Mendelow, 1989; O'Keefe,
1989). Fourth, the means used to establish
the evaluation instruments and procedures
should be based on sound methodological
practices (Bailey and Pearson, 1983; Saraph
et al., 1989). Fifth, the approach one should
take to information system evaluation is
likely to depend on the type of system to be
evaluated (Conrath and Dumas, 1989), as it is
highly unlikely that a single method could
be applied effectively to transaction
processing systems, decision support
systems, expert systems, and information
retrieval systems.
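Purely by way of illustration (this sketch is not part of the original study's instrument or analysis), the following Python fragment shows how ratings gathered on an 11-point scale could be aggregated per quality dimension and per stakeholder group; the sample data, the assumed 0-10 rating range, and the function name mean_ratings are hypothetical, while the dimension and group labels follow the framework described later in the article.

# Illustrative sketch only: aggregating hypothetical 11-point (0-10) quality
# ratings by socio-technical dimension and by stakeholder group.
from collections import defaultdict
from statistics import mean

# Hypothetical survey responses; the dimensions (task, technology, people,
# organization) and stakeholder groups (managers, developers, users) mirror
# the framework discussed in this article.
responses = [
    {"group": "user",      "dimension": "task",         "rating": 7},
    {"group": "user",      "dimension": "technology",   "rating": 5},
    {"group": "developer", "dimension": "technology",   "rating": 9},
    {"group": "developer", "dimension": "organization", "rating": 6},
    {"group": "manager",   "dimension": "people",       "rating": 8},
    {"group": "manager",   "dimension": "task",         "rating": 6},
]

def mean_ratings(rows):
    """Return the mean rating keyed by (stakeholder group, dimension)."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[(row["group"], row["dimension"])].append(row["rating"])
    return {key: mean(values) for key, values in buckets.items()}

for (group, dimension), score in sorted(mean_ratings(responses).items()):
    print(f"{group:10s} {dimension:12s} {score:.1f}")

Such a per-group, per-dimension view is one simple way of making the multidimensional, multi-stakeholder character of implementation success visible in evaluation data.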
The remainder of the article is organized
as follows. In the next section, we examine
the notion of quality to provide definitional
clarity to the system evaluation objectives
and the system evaluation process. This is
followed by an articulation of a socio-
technical conceptual framework as the basis
for post-implementation evaluation of
information systems. In the subsequent
section, we describe the methodology utilized
for identifying a list of mutually exclusive
and exhaustive factors (dimensions), the
rationale for choosing 11-point scales to
evaluate information systems, and the
organization of the field instrument itself.
Then, we cite evidence of validity and
reliability for our approach from a field test
that we carried out. We close by articulating
the implications of our findings for
Keywords: Quality assurance, Information systems, ISO 9000
Abstract
The emergence of total quality
management and the ISO 9000
suite of standards has allowed a
re-think of how (and why) the post-
implementation evaluation of
computer systems is to be carried
out. Traditional performance
measurement, modeling and
analysis techniques, while not
discredited, have been tempered
with a more holistic ideology. This
article recommends a socio-
technical approach to determining
the quality of a computer
information system. In this
context, two postulates have been
proposed and tested by a field
survey of expert systems in the
insurance industry in North
America. Postulate one focuses on
a multidimensional concept of IS
quality comprising the
characteristics of task,
technology, people and
organization. Postulate two deals
with differences in assessments of
these characteristics according to
stakeholder groups: managers,
developers, and users.
The article summarizes the key
findings for these postulates in the
context of the TQM and ISO 9000
philosophies.
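Purely as an illustration of postulate two (and not the statistical procedure reported in the study), the short sketch below shows one plausible way to check whether the three stakeholder groups assess a given dimension differently, using a one-way ANOVA over hypothetical 11-point (0-10) ratings.

# Illustrative sketch only: testing whether stakeholder groups differ in
# their ratings of a single quality dimension (hypothetical data).
from scipy.stats import f_oneway

# Hypothetical 0-10 ratings of the "technology" dimension by each group.
managers = [6, 7, 5, 8, 6]
developers = [8, 9, 7, 8, 9]
users = [5, 6, 4, 6, 5]

f_stat, p_value = f_oneway(managers, developers, users)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A small p-value would suggest that the groups assess this dimension
# differently, in line with the stakeholder-difference postulate.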
