The reader as subjective entropy: a novel analysis of multimodal readability

Published: 12 July 2022
Pages: 415-430
DOI: https://doi.org/10.1108/JD-04-2022-0094
Subject matter: Library & information science; Records management & preservation; Document management; Classification & cataloguing; Information behaviour & retrieval; Collection building & management; Scholarly communications/publishing; Information & knowledge management; Information management & governance; Information management; Information & communications technology; Internet
Authors: Amanda S. Hovious, Brian C. O'Connor
Amanda S. Hovious
University of North Texas, Denton, Texas, USA, and
Brian C. O'Connor
Department of Information Science,
University of North Texas, Denton, Texas, USA
Abstract
Purpose: The purpose of this study was to explore the viability of transinformation analysis as a multimodal
readability metric. A novel approach was called for, considering that existing and established readability
metrics are strictly used to measure linguistic complexity. Yet, the corpus of multimodal literature continues to
grow, along with the need to understand how non-linguistic modalities contribute to the complexity of the
reading experience.
Design/methodology/approach: In this exploratory study, think-aloud screen recordings of eighth-grade
readers of the born-digital novel Inanimate Alice were analyzed for complexity, along with transcripts of
post-oral retellings. Pixel-level entropy analysis served as both an objective measure of the document and a
subjective measure of the amount of information the readers attended to. Post-oral retelling entropy was calculated at
the unit level of the word, serving as an indication of complexity in recall.
Findings: Findings confirmed that transinformation analysis is a viable multimodal readability metric.
Inanimate Alice is an objectively complex document, creating a subjectively complex reading experience for the
participants. Readers largely attended to the linguistic mode of the story, effectively reducing the amount of
information they processed. This was also evident in the brevity and below-average complexity of their
post-oral retellings, which relied on recall of the linguistic mode. There were no significant group differences among
the readers.
Originality/value: This is the first study that uses entropy to analyze multimodal readability.
Keywords: Transinformation analysis, Information theory, Entropy, Multimodal readability
Paper type: Research paper
Introduction
The digital age has produced a proliferation of born-digital documents that are by nature
multimodal. This includes digital multimodal literature, which conveys narrative through
any combination of visual, audio, spatial, linguistic, and/or gestural modes of communication
(Jewitt, 2017; Kress and van Leeuwen, 2006). Multimodal literature is increasingly
championed by literacy educators to support the development of new ways of reading
(Jimenez et al., 2017; Lenters, 2018; Serafini et al., 2020). However, the widely adopted Common
Core State Standards recommend the use of readability metrics as a way to tailor reading
task complexity to the intrinsic cognitive load of the student (Achieve, 2019; Lupo et al., 2019).
Because classroom literature is traditionally language-based, commonly used readability
metrics are solely focused on the structural complexity of language (Flesch, 1948; Smith et al.,
1989). There are no standard metrics for analyzing multimodal readability (Serafini et al.,
2018). As such, new methods are needed to quantify the complexity of multimodal documents
beyond the word. One potential solution lies in Weltner's (1973) transinformation analysis,
which offers a way to measure readability based on the Shannon (1948) entropy metric of
information.
Entropy is defined as the degree of uncertainty in a system (Bossomaier et al., 2016).
Shannon (1948) referred to the measure of uncertainty in a message as information entropy.
Received 29 April 2022
Revised 21 June 2022
Accepted 23 June 2022
Journal of Documentation
Vol. 79 No. 2, 2023
pp. 415-430
© Emerald Publishing Limited
0022-0418
DOI 10.1108/JD-04-2022-0094
He theorized that the probability of choice in message selection is dependent upon prior
choices, so that information entropy could be defined as the binary logarithm of probability in
information choice. Entropy, as a probability of choices, could then measure the degree of
uncertainty in a message. Expressed as a normalized measure from zero to one, zero indicates
no information entropy, while one indicates complete randomness or high entropy (O'Connor
et al., 2008). Weltner (1973) utilized information entropy to measure learning from lectures in
the classroom and further proposed its use to measure the readability of educational texts.
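The normalized measure described above can be illustrated with a short sketch. The function below is a hypothetical helper, not Weltner's or the authors' actual procedure: it computes the Shannon entropy of a symbol sequence and divides by the maximum possible entropy, the binary logarithm of the number of distinct symbols.

```python
import math
from collections import Counter

def normalized_entropy(symbols):
    """Shannon entropy of a symbol sequence, scaled to [0, 1].

    0 means a single repeated symbol (no uncertainty);
    1 means all symbols are equally likely (maximum randomness).
    """
    counts = Counter(symbols)
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

normalized_entropy("aaaa")  # one symbol only -> 0.0
normalized_entropy("abcd")  # all symbols equally likely -> 1.0
```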
Weltner (1973) defined readability as "the interaction between a reader and a given text" (p.
120). Defining it this way is particularly relevant to multimodal reading because digital
multimodal texts are interactive (Kress and van Leeuwen, 2006), as are the cognitive processes of
reading (Wolf, 2008). However, the extent to which multimodality complicates the cognitive
processes of reading is still unknown, though scholars agree that reading in digital
environments is different from traditional print-text reading (Walsh, 2010;Wylie et al., 2018;
Wolf, 2008,2018). Furthermore, emerging neuroscientific evidence suggests that the multimodal
integration of linguistic and nonlinguistic information is necessary for meaning making to occur
during reading, even at the word and sentence level (Anderson et al., 2019). As such, both the text
and the reader become essential components in the analysis of multimodal readability.
In the context of information theory, Weltner's (1973) definition of readability reflects
Shannon and Weaver's (1949) vision of communication as an interrelated set of technical,
semantic, and effectiveness problems. Thus, readability is an interactional process that
positions the reader as the message recipient, or, as Drucker (2011) might describe it, the
subject of the interface. The entropy or noise in the message becomes a factor that influences
the amount of information that the reader can process. Shannon's (1951) predictability
experiments with the entropy of printed English helped to establish this connection.
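Shannon's printed-English estimates were built from guessing games and letter statistics. As a rough first-order illustration (not Shannon's full method), letter-frequency entropy can be computed like this:

```python
import math
from collections import Counter

def letter_entropy(text):
    """First-order entropy (bits per letter) from letter frequencies."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    n = len(letters)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform alphabet would give log2(26), about 4.7 bits per letter;
# real English scores lower because some letters dominate, and Shannon
# estimated roughly 1 bit per letter once full context is exploited.
```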
Based on Shannons work, Hale (2006) developed a complexity metric for calculating the
predictability of a sentence. Calling it the entropy reduction hypothesis, he demonstrated a
relationship between the understandability of a sentence and the uncertainty within its
grammatical continuations as measured by repetition accuracy scores. Frank (2013) tested
the entropy reduction hypothesis and determined it to be a valid measure of cognitive load in
sentence comprehension. He concluded that the degree of surprise in a sentence is a
significant predictor of the amount of cognitive effort required to process the sentence.
Because reading sentences uses the same neural structures as interpreting non-language
symbols (Peelen and Downing, 2017), these findings have implications for the relationship
between entropy and cognition as an indicator of multimodal readability.
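Hale's entropy reduction idea can be sketched numerically: compare the entropy of a next-word distribution before and after context narrows the plausible continuations. The distributions below are invented purely for illustration, not drawn from Hale's or Frank's data.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical next-word distributions. With little context, many
# continuations are equally plausible...
before = {"dog": 0.25, "cat": 0.25, "idea": 0.25, "storm": 0.25}
# ...after more context (say, "the barking ..."), few remain.
after = {"dog": 0.9, "cat": 0.1}

# Bits of uncertainty resolved by the added context; larger reductions
# correspond to greater processing effort in Hale's account.
reduction = entropy(before) - entropy(after)
```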
OConnor and Anderson (2019) assert that a relationship exists between entropy, meaning,
and cognition. Research on the structural characteristics of filmic documents supports this
understanding. For example, Watt and Krull (1974) mapped the entropy measures of filmic
documents to meaning through the emotional response of the audience. Six entropy
indicators in film were operationalized: (1) set time entropy, defined as the degree of
randomness of visual duration time of specific physical sites in a film; (2) set incidence
entropy, defined as the degree of randomness of the appearance of specific physical film sites;
(3) set constraint entropy, defined as the degree of randomness of its constraints; (4) verbal
time entropy, defined as the degree of randomness in time of the verbal behavior of characters
in a film; (5) verbal incidence entropy, defined as the degree of randomness of performance of the
characters' verbal behavior; and (6) nonverbal dependence entropy, defined as the degree of
randomness of time of the non-verbalization of the characters.
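As a loose sketch (not Watt and Krull's exact operationalization), an indicator like set time entropy can be read as the Shannon entropy of the share of screen time each physical site receives:

```python
import math

def set_time_entropy(seconds_per_site):
    """Entropy (bits) of the distribution of screen time across sites.

    High values mean screen time is spread evenly, and thus
    unpredictably, across many sets; low values mean a few
    sets dominate the program.
    """
    total = sum(seconds_per_site.values())
    props = [t / total for t in seconds_per_site.values() if t > 0]
    return -sum(p * math.log2(p) for p in props)

# One set dominating the program -> low entropy
set_time_entropy({"kitchen": 55, "street": 5})
# Screen time split evenly across four sets -> 2 bits
set_time_entropy({"kitchen": 15, "street": 15, "office": 15, "car": 15})
```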
Watt and Krull (1974) examined a sampling of prime time shows from three commercial
television networks. A factor analysis of the entropy indicators showed that two factors could
explain 76% of variation among five of the indicators. Verbal time entropy, verbal incidence
entropy, and set time entropy, coded as the DYNAMICS factor, were related to the amount
of visual and aural action taking place in the television program. A higher entropy measure
