Parallelization and analysis of selected numerical algorithms using OpenMP and Pluto on symmetric multiprocessing machine

Published date: 4 February 2019
Pages: 20-32
DOI: https://doi.org/10.1108/DTA-05-2018-0040
Authors: Tanvir Habib Sardar, Ahmed Rimaz Faizabadi
Tanvir Habib Sardar
School of Engineering and Technology, Jain University, Bengaluru, India, and
Ahmed Rimaz Faizabadi
P.A. College of Engineering, Mangalore, India
Abstract
Purpose: In recent years there has been a gradual shift from sequential to parallel computing, and nearly all computers now ship with multicore processors. Exploiting the available cores requires parallel computing, which increases speed by processing large amounts of data in real time. The purpose of this paper is to parallelize a set of well-known programs using different techniques in order to determine the best way to parallelize each program studied.
Design/methodology/approach: A set of numerical algorithms is parallelized by hand using OpenMP and automatically using the Pluto tool.
Findings: The work finds that a few of the algorithms are well suited to automatic parallelization with the Pluto tool, but many of them execute more efficiently when hand parallelized with OpenMP.
Originality/value: The work provides an original study of parallelization using the OpenMP programming paradigm and the Pluto tool.
Keywords Algorithms, OpenMP, Auto parallelization, Code parallelization, Hand parallelization, Pluto
Paper type Research paper
1. Introduction
Parallelization is the act of designing a computer program to process data in parallel. Without parallelization, computer programs compute data serially: they solve one problem and then move to the next. A parallelized program breaks a problem down into smaller pieces that can each be solved independently and simultaneously by discrete computing resources (Thakkar et al., 2017). Parallel machines are being built to satisfy the increasing demand for higher performance from numerous applications, and multi- and many-core architectures have become a hot topic in the fields of computer architecture and high-performance computing (Culler et al., 1999). The availability of multicore processors has made programmers change their way of computing. While creating a new application, the programmer prefers parallel computing to obtain efficient performance in less time and to utilize the available cores; existing sequential applications need either to be re-written or converted to parallel form using some tools.
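For instance, a loop whose iterations are independent can be hand parallelized with a single OpenMP directive. The following minimal sketch is illustrative only (the array, its size and the reduction loop are not taken from the paper); it shows a serial summation loop whose iterations are distributed over the available cores:

#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N];
    double sum = 0.0;

    /* Fill the array serially. */
    for (int i = 0; i < N; i++)
        a[i] = i * 0.5;

    /* Hand parallelization: OpenMP divides the iterations among the
     * threads, and the reduction clause combines the per-thread sums. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
    return 0;
}

Compiling with gcc -fopenmp and setting the OMP_NUM_THREADS environment variable controls how many cores the loop uses.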
Different automatic parallelization tools have been built, depending on the hardware architecture, the memory architecture, and the data and control dependencies in the software. Each tool differs from the others in the kinds of applications it can parallelize. These tools reduce the manual analysis burden, time and effort (Athavale et al., 2011). Automatic parallelization of sequential programs consists of two components: extraction of parallelism and generation of parallel code for the target architecture. Techniques have been developed to transform sequential code and extract parallelism out of it automatically (Thouti and Sathe, 2013), and these can be used to generate parallel code for any parallel architecture. The parallel code
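As a rough illustration of the automatic route (a sketch based on Pluto's documented usage, not code from the paper), the programmer marks an affine loop nest with scop pragmas and lets the tool extract the parallelism and generate the parallel code. The stencil below is a Seidel-style example similar in spirit to those shipped with Pluto; the file name, array and bounds are hypothetical:

/* seidel.c -- input to the Pluto source-to-source translator.
 * The region between the scop pragmas must contain only affine
 * loop bounds and array accesses so that Pluto can analyse it. */
#define N 2000
double A[N][N];

void relax(int steps)
{
    int t, i, j;
#pragma scop
    for (t = 0; t < steps; t++)
        for (i = 1; i < N - 1; i++)
            for (j = 1; j < N - 1; j++)
                A[i][j] = (A[i-1][j] + A[i+1][j]
                         + A[i][j-1] + A[i][j+1]) / 4.0;
#pragma endscop
}

Running something like polycc seidel.c --tile --parallel (option names follow Pluto's documentation and may differ between versions) emits a transformed source file in which the outer parallel loops carry OpenMP pragmas; that file is then compiled with an OpenMP-capable compiler such as gcc -fopenmp.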
