User Evaluation of Expert Systems

Author: Patricia L. Rees
Published: 01 June 1992
Pages: 17-23
DOI: https://doi.org/10.1108/02635579210015392
Subject Matter: Economics, Information & knowledge management, Management science & operations
Ensuring that the aims of expert systems designers and users are congruent helps to promote successful computer use.
Industrial Management & Data Systems, Vol. 92 No. 6, 1992, pp. 17-23. © MCB University Press Limited, 0263-5577
Introduction
This article is an attempt to link evaluation research and
user involvement in the design and implementation of
Expert Systems (ES), in the hope of adding to the
development of each.
The nature of evaluation research is well documented by Legge[1], who identifies three crises in evaluation research: utilization, verification and accreditation. The problem is that, unless the results of evaluation research are verifiable, credible and used, there is little future for the subject. To lessen these crises, she advocates a better match between what the instigator of the evaluation research actually wants and the evaluation research itself. She also suggests that a more incremental approach to the utilization of evaluation research findings be recognized: such findings cannot be expected to have an immediate and dramatic impact.
A similar situation characterizes the state of user involvement in the design and implementation of expert systems. Many of the excuses for not utilizing evaluation research (findings are addressed to the wrong people; the individual who commissioned the evaluation has moved on; people have their own political axes to grind[1]) have their parallels in the ES area (involving the users will be too costly and take up too much time; computer technicians feel that users are a nuisance; involving the users is too novel an idea, etc.). Consequently, user involvement as advocated by Mumford[2] can also benefit from an incremental approach. However, this article is not addressing user involvement in the design and implementation of expert systems; rather, it seeks, in its incremental way, to combine the areas of evaluation and user involvement into a framework for user evaluations of ES.
Evaluation practices in the area of new technology are poorly developed[3]. Hirschheim and Smithson[4], in their review of evaluation practices, place them on a continuum with highly rational/objective criteria at one end and subjective/political criteria at the other. This corresponds to Legge's formal-informal continuum[1]. An example of the formal end would be a Local Area Network evaluation based on performance measures[5]. Moving along the continuum slightly would be cost-benefit analysis[6]. Towards the more informal end of the continuum would be the evaluation research carried out in quality circles[7], and the work of Blackler and Brown, who advocate a powerful psychological approach[8]. However, they are aware that "psychological and social aspects are far less tangible and may well appear irritating, if not threatening, to the technically minded"[9].
Hirschheim and Smithson[4] reject the positivistic approach to evaluation as too simplistic, because they see the organizations within which information technology is being evaluated as too complex for purely objective analysis.
The problems associated with user involvement have been
touched upon. In the main, there have to be certain
conditions present in an organization for such involvement
to be possible. These include: a champion for user
involvement; a suitable company culture; systems designers
who are interested in the users' needs; and interested
users.
It is difficult to get all the conditions present to
nurture user involvement. Nevertheless, in the author's
experience, some involvement is better than none. E.
Burton Swanson[10] neatly sums up all one can sometimes
hope for in the fields of evaluation and user involvement:
Whether "asking the user" is necessary or sufficient for information system assessment is no doubt an issue subject to continued debate. Choosing to do so is, however, at least in spirit, user friendly.
User Evaluations and Expert Systems
Expert systems are still a relatively new technology, surrounded by a certain amount of "hype" and mystery, and are therefore a fashionable area for research. Consequently, one of the major benefits of ES
