Objectivistic knowledge artifacts

Pages: 105-129
Date: 05 February 2018
DOI: https://doi.org/10.1108/DTA-03-2017-0012
Author: Rosina O. Weber
Subject matter: Library & information science, Librarianship/library management, Library technology, Information behaviour & retrieval, Metadata, Information & knowledge management, Information & communications technology, Internet
Rosina O. Weber
Department of Information Science, Drexel University, Philadelphia,
Pennsylvania, USA
Abstract
Purpose – By establishing a conceptual path through the field of artificial intelligence for objectivistic
knowledge artifacts (KAs), the purpose of this paper is to propose an extension to their design principles. The
author uses these principles to deploy KAs for knowledge acquired in scientific processes, to determine whether
these principles steer the design of KAs that are amenable to both human and computational manipulation.
Design/methodology/approach – Adopting the design principles mentioned above, the author describes
the deployment of KAs in collaboration with a group of scientists to represent knowledge gained in scientific
processes. The author then analyzes the resulting usage data.
Findings – Usage data reveal that human scientists could enter scientific KAs within the proposed
structure. The scientists were able to create associations among them, search and retrieve KAs, and reuse
them in drafts of reports to funding agencies. These results were observed when scientists were motivated
by imminent incentives.
Research limitations/implications – Previous work has shown that objectivistic KAs are suitable for
representing knowledge in computational processes. The data analyzed in this work show that they are
suitable for representing knowledge in processes conducted by humans. The need for imminent incentives to
motivate humans to contribute KAs suggests a limitation, which may be attributed to the exclusively
objectivistic perspective in their design. The author hence discusses the adoption of situativity principles for a
more beneficial implementation of KAs.
Originality/value – The suitability for interaction with both human and computational processes makes
objectivistic KAs candidates for use as metadata to intersect humans and computers, particularly for scientific
processes. The author found no previous work implementing objectivistic KAs for scientific knowledge.
Keywords Knowledge management, Scientific knowledge, Artificial intelligence, Lessons-learned systems,
Objectivistic knowledge artifacts, Scientific knowledge artifacts
Paper type Research paper
1. Introduction
Knowledge and its artifacts are of interest to both social and computer sciences. Cabitza and
Locoro (2014) propose a conceptual framework for both perspectives, which they refer to as
situativity and objectivity. Cabitza and Locoro (2014), along with many others (e.g. Simone,
2015; Cabitza et al., 2013), described the socially situated stance in detail. Further description of
the objectivity stance is still needed, particularly given the increasing ubiquity of
computational representations of socially motivated knowledge cycles. In the field of science
alone, where human activities are considered a bottleneck to scientific progress (Gil et al., 2014),
entire scientific steps including hypotheses generation are being automated (Bohannon, 2017).
This reality makes it urgent that both situativity and objectivity stances coexist to design
systems that guarantee that humans are kept in the loop and can understand what the
automated methods implement and the results they obtain.
This paper's intended contribution is to extend previous work describing the
objectivistic stance through concepts conceived in the fields of artificial intelligence (AI) and
knowledge engineering (KE), and to propose its design principles, which are covered in
Section 2. In order to provide guidance to others attempting to utilize and develop such
Data Technologies and Applications
Vol. 52 No. 1, 2018
pp. 105-129
© Emerald Publishing Limited
2514-9288
DOI 10.1108/DTA-03-2017-0012
Received 1 March 2017
Revised 4 September 2017
Accepted 5 September 2017
This work was supported by the US EPA-Science to Achieve Results (STAR) Program and the US
Department of Homeland Security Programs, Grant No. R83236201. The author thanks the members of
the CAMRA community, the invited editors, and the reviewers who helped improve this article. Special
thanks also to Adam J. Johs for his comments. The work described in the retrieval experiment was
conducted in the period from 2005 to 2011 under IRB Protocol No. 16449.
artifacts, in Section 3, we summarize previous work introducing what we call generalized
objectivistic knowledge artifacts (KAs). These generalized KAs can be specialized as they
are fielded in specific domains. In Section 4, we describe our experience fielding objectivistic
KAs in collaboration with a group of scientists to create and reuse knowledge gained
in scientific processes, leading to what we call scientific knowledge artifacts (SKAs).
This description attempts to equip readers with enough information to field the generalized
KAs explained here in other domains and user communities.
We report usage statistics of SKAs and analyze them in Section 5. Previous work has shown
that objectivistic KAs are amenable to computational manipulation (Weber and Aha, 2003).
The analysis in this work shows that objectivistic KAs are also amenable to being created and searched by
humans. We observe, however, that SKAs have limitations due to
the purely objectivistic approach, which does not take into account the situativity stance.
To conclude, we discuss how objectivity and situativity may be perceived as contrasting, and why
these two perspectives may therefore not coexist well when adopting KAs.
2. The objectivity stance
This section proposes substantiation for the objectivity dimension by tracing the path taken
by the field of AI to computationally represent concepts and algorithms for the purpose of
performing knowledge tasks. Given that KAs per se are data structures consisting of
information fields, understanding the underlying principles of reasoning and
representations for reasoning will help us design artifacts that can be understood by
humans and easily manipulated by AI methodologies. It is necessary, however, that prior to
this, we make explicit the working definition of knowledge that has motivated such a path.
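To make the notion of a KA as a data structure of information fields concrete, consider the following minimal sketch. The class name and field names here are hypothetical illustrations, not the paper's actual schema; it shows only that such an artifact can be a plain record that both a human can read and a program can manipulate.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class KnowledgeArtifact:
    """Illustrative objectivistic KA: a plain data structure of information fields."""
    title: str                     # human-readable label for the artifact
    content: str                   # the knowledge statement itself
    context: str                   # conditions under which the knowledge applies
    associations: List[str] = field(default_factory=list)  # links to related KAs

# A scientist-entered artifact, with an association to another (hypothetical) artifact
ka = KnowledgeArtifact(
    title="Sample filtration",
    content="Pre-filter turbid samples before analysis to avoid inhibition.",
    context="Water-quality laboratory workflows",
)
ka.associations.append("KA-042")
```

Because every field is explicit and typed, such a record can be indexed and retrieved computationally while remaining legible to a human contributor.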
The conceptualization of knowledge adopted is crucial because it drives the underlying
strategy for managing it, which can in turn influence the perception of the role of
computational systems designed for it (e.g. Alavi and Leidner, 2001; Cabitza and Locoro,
2014). The role of computer systems determines the goals of KAs and consequently how to
evaluate them. We complement the underlying foundations of SKAs and include guidance
from the KM literature for repository-based systems. We conclude this section by presenting
the objectivistic design principles and quality metrics for objectivistic KAs.
2.1 Objectivistic definition of knowledge
The definition of knowledge proposed by Alavi and Leidner (2001) as "a justified belief that
increases an entity's capacity for effective action" (p. 109) is ideal to characterize the
objectivity dimension. First, it defines knowledge with respect to the final goal of promoting
effective actions, which is the same as AI's goal of rational behavior. This behavior-oriented
conceptualization of knowledge enables treating knowledge as a "black box" process that
receives some input information and produces the expected behavior as output. To truly
comprehend the objectivity dimension, and the role of computer systems and KAs within it,
one should understand how AI utilizes knowledge representations and so-called knowledge
types to attain rational behavior.
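The "black box" view above can be caricatured as a function from input information to action: what matters is the behavior produced, not the internal representation. The following toy sketch illustrates this; the rule table and observation strings are invented for illustration and stand in for whatever representation actually produces the behavior.

```python
def knowledge(observation: str) -> str:
    """Toy 'black box': maps input information to an effective action.
    The rule table stands in for any internal knowledge representation."""
    rules = {
        "contamination detected": "rerun assay with fresh reagents",
        "results replicated": "report findings",
    }
    # Default action when no rule matches the observation
    return rules.get(observation, "collect more data")

print(knowledge("contamination detected"))  # -> "rerun assay with fresh reagents"
```

From the objectivity stance, two systems with entirely different internals are equivalent if they map the same inputs to the same effective actions.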
2.2 Knowledge and the field of AI
The perspective of knowledge adopted by the field of AI is intrinsically related to the
definition of AI as a field. Russell and Norvig (2009) present various definitions of
knowledge from the AI literature that suggest multiple schools of thought within the field.
Of interest are the definitions that make reference to the final goal of AI, which vary with
respect to using humans as references of quality or not. The goal of AI involving mimicking
human intelligence is not relevant when designing systems to solve problems, but it is of
interest to studies in cognitive science. When the focus is on solving problems, the field of
AI has progressed toward the goal of producing rational behavior.