EVALUATION OF DOCUMENT SEARCHING SYSTEMS AND PROCEDURES

Date: 01 April 1965
DOI: https://doi.org/10.1108/eb026375
Pages: 261-266
Author: HELEN L. BROWNSON
Subject matter: Information & knowledge management, Library & information science
HELEN L. BROWNSON
Program Director for Research and Studies, National Science Foundation, Washington
INTEREST in the objective testing and evaluation of document searching systems and procedures has grown steadily during the past decade. The reason for such interest is perhaps obvious: a great deal of attention has been, and is being, given to the development of new methods, including mechanized methods, for storing and searching characterizations of scientific and technical documents. To determine the effectiveness and utility of these new methods, particularly in comparison with the more conventional methods still in use, we need objective means of assessing their performance.
Although some progress has been made, much remains to be done on
the development of evaluation methods and criteria, a high priority area of
study in the view of many individuals and organizations.
One of the first published statements about the importance of evaluation appeared in April 1955 in an editorial by Jesse H. Shera, editor of American Documentation: 'Cautious and searching evaluation of all experimental results is essential in rating the efficiency of documentation systems.' He urged that we 'regard documentation systems as useful devices the benefits of which must be determined, not by polemics but by the intelligent measurement of such benefits in relation to needs and costs.'
On the other side of the Atlantic, the Aslib Aeronautical Group Committee issued a brief mimeographed report on a 'Programme for Research into Information Retrieval' dated 25 August 1955, which quoted a passage from the above editorial and commented that the changed attitude called for must prevail. The interest of the Aeronautical Group in testing and evaluation led eventually to the awarding of a grant to Aslib by the National Science Foundation (NSF) for a comparative study of four indexing and classification systems at the Cranfield College of Aeronautics in the period 1957-61, under the direction of Cyril W. Cleverdon.1,2 It was a pioneering project in the sense that it was the first attempt to control the application of several different indexing systems to the same body of documents, to conduct test searches of the indexed material, and to analyse the search failures. The principal objective was to measure one aspect of the performance of the