3MO-AHP: an inconsistency reduction approach through mono-, multi- or many-objective quality measures

DOI: https://doi.org/10.1108/DTA-11-2021-0315
Published: 18 February 2022
Pages: 645-670
Subject matter: Library & information science, Librarianship/library management, Library technology, Information behaviour & retrieval, Metadata, Information & knowledge management, Information & communications technology, Internet
Authors: Carla Martins Floriano, Valdecy Pereira, Brunno e Souza Rodrigues
Department of Production Engineering, Universidade Federal Fluminense, Niterói, Brazil
Abstract
Purpose: Although the multi-criteria technique analytic hierarchy process (AHP) has been applied successfully in many areas, whether to select or rank alternatives or to derive a priority vector (weights) for a set of criteria, the technique has a significant drawback: if the pairwise comparison matrix (PCM) contains inconsistent comparisons, in other words, a consistency ratio (CR) above the value of 0.1, the final solution cannot be validated. Many studies have addressed the inconsistency problem, but few of them tried to satisfy different quality measures, namely minimum inconsistency (fMI), the total number of adjusted pairwise comparisons (fNC), original rank preservation (fKT), minimum average weights adjustment (fWA) and, finally, minimum L1 matrix norm between the original PCM and the adjusted PCM (fLM).
Design/methodology/approach: The approach is defined in four steps: first, the decision-maker chooses which quality measures she/he wishes to use, ranging from one to all of them. In the second step, the authors encode the PCM to be used in a many-objective optimization algorithm (MOOA), so that each pairwise comparison can be adjusted individually. In the third step, the authors generate, from the obtained Pareto optimal front, consistent solutions that carry the desired quality measures. Lastly, the decision-maker selects the most suitable solution for her/his problem. Remarkably, as the decision-maker can choose one (mono-objective), two (multi-objective), or three or more (many-objective) quality measures, and not all MOOAs can handle or perform well on mono- or multi-objective problems, the unified non-dominated sorting genetic algorithm III (U-NSGA III) is the most appropriate MOOA for this type of scenario, because it was specially designed to handle mono-, multi- and many-objective problems.
Findings: The use of two quality measures does not guarantee that the adjusted PCM is similar to the original PCM; hence, the decision-maker should consider using more quality measures if the objective is to preserve the original PCM's characteristics.
Originality/value: For the first time, a many-objective approach reduces the CR to consistent levels with the ability to consider one or more quality measures, allowing the decision-maker to adjust each pairwise comparison individually.
Keywords: Analytic hierarchy process, Inconsistency reduction, Many-objective optimization algorithm, U-NSGA III
Paper type: Research paper
1. Introduction
The analytic hierarchy process (AHP) proposed by Saaty (1977, 1980) is recognized as the most popular multi-criteria decision-making technique. The technique uses a reciprocal decision matrix obtained by pairwise comparisons between each criterion (Alonso and Lamata, 2006; Siraj et al., 2012b). In addition, Ishizaka and Labib (2011) demonstrate that it has been applied successfully in many different areas, such as firms' competence evaluation, weapon selection, drug selection, site selection, software evaluation, strategy selection, supplier selection, staff recruitment and many others.
While constructing the pairwise comparison matrix (PCM), an expert's opinion is most commonly taken as the central pillar of the process. Yet, it is known that opinions, experience and intuition are human traits that can be misleading and, in general, a source of error.
Received 10 November 2021; revised 17 January 2022; accepted 29 January 2022. Data Technologies and Applications, Vol. 56 No. 5, 2022, pp. 645-670. © Emerald Publishing Limited, 2514-9288. DOI 10.1108/DTA-11-2021-0315.
Saaty (1977, 1980) indicated that these errors do not invalidate the AHP results if an indicator, called the consistency ratio (CR), is equal to or less than the value of 0.1 (Kazibudzki, 2019). The literature offers many other indicators of inconsistency; for instance, as pointed out by Brunelli (2017), Grzybowski (2016) and Kazibudzki (2016), Koczkodaj's Inconsistency Index (Koczkodaj, 1993), 3-way cycles (Kwiesielewicz and Uden, 2004) or the Geometric Consistency Index (Aguarón and Moreno-Jiménez, 2003). Nonetheless, the CR prevails as the most used indicator.
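For readers unfamiliar with the indicator, a minimal sketch of Saaty's CR computation follows (it assumes Saaty's published random-index values and the principal-eigenvalue consistency index; the function name is ours, not the paper's):

```python
import numpy as np

# Saaty's random index (RI) for matrix orders 1..10
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(pcm):
    """Saaty's CR = CI / RI, where CI = (lambda_max - n) / (n - 1)."""
    a = np.asarray(pcm, dtype=float)
    n = a.shape[0]
    lam_max = max(np.linalg.eigvals(a).real)  # principal eigenvalue
    ci = (lam_max - n) / (n - 1)              # consistency index
    return ci / RI[n]

# A perfectly consistent 3x3 PCM: entries satisfy a_ij = w_i / w_j
consistent = [[1, 2, 4],
              [1/2, 1, 2],
              [1/4, 1/2, 1]]
print(consistency_ratio(consistent))        # effectively zero

# An inconsistent 3x3 PCM, with CR well above the 0.1 threshold
inconsistent = [[1, 3, 1/5],
                [1/3, 1, 3],
                [5, 1/3, 1]]
print(consistency_ratio(inconsistent) > 0.1)  # True
```

For a consistent matrix the principal eigenvalue equals n, so the CR is zero up to floating-point error.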
Hence, suppose that an expert generated an inconsistent PCM. In that case, two possibilities arise: (1) initiate a new interview with the expert and, through trial and error, attempt to reduce the CR to acceptable levels; or (2) mathematically find an adjusted PCM that is as close as possible to the original PCM.
It can be noted that the first option is inherently a very time-consuming task and, depending on the expert's availability or mood, not always a reliable one. On the other hand, the second option eliminates the need to access the expert one more time; however, it poses another challenge: creating an adjusted PCM close enough to the original PCM. And what would be "close enough" in this context? It means that the adjusted PCM satisfies one or more quality measures that assure the decision-maker that it is consistent and still captures the essence of the expert's knowledge about the problem.
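The second option can be illustrated with a small sketch (this is a generic one-pass heuristic in the spirit of suggestions found in the AHP literature, not the paper's 3MO-AHP procedure; the geometric-mean prioritization and the `repair_one_comparison` helper are illustrative assumptions):

```python
import numpy as np

def repair_one_comparison(pcm):
    """One pass of a simple repair heuristic: replace the comparison that
    deviates most (in log scale) from the ratio implied by the priority
    vector, keeping the matrix reciprocal."""
    a = np.array(pcm, dtype=float)            # copy; leave the input intact
    n = a.shape[0]
    g = a.prod(axis=1) ** (1.0 / n)           # row geometric means
    w = g / g.sum()                           # priority vector
    err = np.abs(np.log(a) - np.log(np.outer(w, 1 / w)))
    i, j = divmod(int(err.argmax()), n)       # worst comparison
    a[i, j] = w[i] / w[j]
    a[j, i] = w[j] / w[i]
    return a

before = np.array([[1, 3, 1/5],
                   [1/3, 1, 3],
                   [5, 1/3, 1]])
after = repair_one_comparison(before)
# One pass lowers the principal eigenvalue (and hence the CR); repeating
# until CR <= 0.1 yields a consistent, though not necessarily close, PCM.
```

A heuristic like this optimizes nothing explicitly, which is precisely why quality measures are needed to judge how much of the original PCM survives the repair.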
Mazurek et al. (2021), in their research, indicated the five most common quality measures,
which are (1) minimum inconsistency (fMI), (2) the total number of adjusted pairwise
comparisons (fNC), (3) original rank preservation (fKT), (4) minimum average weights
adjustment (fWA) and, finally, (5) minimum L1 matrix norm between the original PCM and the
adjusted PCM (fLM).
The quality measure fMI is straightforward to understand, since it represents the inconsistency of the adjusted PCM; ideally, the lower this value, the better. The quality measure fNC represents the number of original comparisons that are tweaked to achieve consistency. Again, to preserve as much as possible of the original PCM, the lower this value, the better. The quality measure fKT represents the correlation between the original PCM's rank order and the adjusted PCM's rank order. The Kendall tau rank correlation (τ) can evaluate the degree of similarity between two sets of ranks (Abdi, 2007). It assumes the value of −1 when the ranks have a reversed order, the value of 0 when the ranks are entirely different, and the value of 1 when the ranks are equal. To preserve as much as possible of the original PCM, the higher this value, the better. The quality measure fWA calculates the mean absolute difference between the derived weights of the original PCM and the derived weights of the adjusted PCM, with limits 0 ≤ fWA ≤ ∞; preserving as much as possible of the original PCM means that this value should be as close as possible to zero. Finally, the quality measure fLM measures how different the adjusted PCM is from the original PCM; this is achieved by taking the L1 matrix norm of the difference between the two matrices, with limits 0 ≤ fLM ≤ ∞, which implies that preserving the original PCM as much as possible means that this value should be as close as possible to zero.
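Three of these measures can be sketched in code (a sketch under stated assumptions: row geometric means stand in for the eigenvector-derived weights, and the helper names are ours, not the paper's):

```python
import numpy as np
from itertools import combinations

def weights(pcm):
    """Priority vector via row geometric means, a common surrogate
    for Saaty's principal-eigenvector weights."""
    a = np.asarray(pcm, dtype=float)
    g = a.prod(axis=1) ** (1.0 / a.shape[0])
    return g / g.sum()

def kendall_tau(x, y):
    """Kendall rank correlation: +1 identical order, -1 reversed order."""
    pairs = list(combinations(range(len(x)), 2))
    s = sum(np.sign((x[i] - x[j]) * (y[i] - y[j])) for i, j in pairs)
    return float(s / len(pairs))

def quality_measures(original, adjusted):
    """Return (fKT, fWA, fLM) for an original/adjusted PCM pair."""
    w_o, w_a = weights(original), weights(adjusted)
    f_kt = kendall_tau(w_o, w_a)                        # rank preservation
    f_wa = float(np.abs(w_o - w_a).mean())              # weight adjustment
    f_lm = float(np.abs(np.asarray(original, float)
                        - np.asarray(adjusted, float)).sum())  # L1 norm
    return f_kt, f_wa, f_lm

m = [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]]
print(quality_measures(m, m))  # (1.0, 0.0, 0.0) when nothing is adjusted
```

fMI would simply be the CR (or another inconsistency index) of the adjusted PCM, and fNC a count of the entries that differ between the two matrices.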
Other quality measures could be considered, and the decision-maker is free to define his/her own. However, for this study, the quality measures mentioned in the previous paragraph should suffice to evaluate an adjusted PCM robustly. Nonetheless, by reimagining the quality measures as functions that can be optimized, we can transform the inconsistency reduction problem into a mono-, multi- or many-objective optimization problem, depending only on the number of quality measures the decision-maker is willing to adopt. A mono-objective problem can find a unique solution that may be optimal when considering one of the quality measures and completely unsatisfactory when considering the others. Multi- or many-objective problems, by contrast, do not produce a unique solution but a set of solutions distributed along the Pareto optimal front.
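The Pareto optimal front consists of the non-dominated solutions; a minimal sketch of that filtering step, using hypothetical (fMI, fLM) pairs for four candidate adjusted PCMs (both objectives minimized):

```python
def dominates(u, v):
    """u dominates v if u is no worse in every objective (minimization)
    and strictly better in at least one."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (fMI, fLM) values for four candidate adjusted PCMs
candidates = [(0.05, 3.0), (0.02, 5.0), (0.08, 2.0), (0.06, 4.0)]
print(pareto_front(candidates))  # the last candidate is dominated by the first
```

Each surviving point represents a different trade-off, e.g. lower inconsistency at the cost of a larger departure from the original matrix, from which the decision-maker picks the most suitable solution.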

To continue reading

Request your trial

VLEX uses login cookies to provide you with a better browsing experience. If you click on 'Accept' or continue browsing this site we consider that you accept our cookie policy. ACCEPT