from algorithms to programming on state-of-the-art platforms /
First Statement of Responsibility
Roman Trobec, Boštjan Slivnik, Patricio Bulić, Borut Robič.
PHYSICAL DESCRIPTION
Specific Material Designation and Extent of Item
1 online resource (xii, 256 pages) :
Other Physical Details
illustrations (some color).
SERIES
Series Title
Undergraduate topics in computer science,
ISSN of Series
1863-7310
INTERNAL BIBLIOGRAPHIES/INDEXES NOTE
Text of Note
Includes bibliographical references and index.
CONTENTS NOTE
Text of Note
Part I: Foundations -- Why Do We Need Parallel Programming -- Overview of Parallel Systems -- Part II: Programming -- Programming Multi-Core and Shared Memory Multiprocessors Using OpenMP -- MPI Processes and Messaging -- OpenCL for Massively Parallel Graphic Processors -- Part III: Engineering -- Engineering: Parallel Computation of the Number π -- Engineering: Parallel Solution of 1-D Heat Equation -- Engineering: Parallel Implementation of Seam Carving -- Final Remarks and Perspectives -- Appendix A: Hints for Making Your Computer a Parallel Machine.
SUMMARY OR ABSTRACT
Text of Note
Advancements in microprocessor architecture, interconnection technology, and software development have fueled rapid growth in parallel and distributed computing. However, this development is only of practical benefit if it is accompanied by progress in the design, analysis and programming of parallel algorithms. This concise textbook presents, in one place, three mainstream parallelization approaches, OpenMP, MPI and OpenCL, for multicore computers, interconnected computers and graphics processing units. An overview of practical parallel computing and its principles enables the reader to design efficient parallel programs for solving various computational problems on state-of-the-art personal computers and computing clusters. Topics covered include parallel algorithms and the programming tools OpenMP, MPI and OpenCL, followed by experimental measurements of parallel programs' run-times and an engineering analysis of the obtained results for improved parallel execution performance. Many examples and exercises support the exposition.