Edited by Ricardo Corrêa, Inês Dutra, Mario Fiallos, and Fernando Gomes.
Boston, MA: Springer, 2002.
Applied Optimization, vol. 67 (ISSN 1384-6485).
Contents:
1. Introduction to the Complexity of Parallel Algorithms
2. The Combinatorics of Resource Sharing
3. On Solving the Static Task Scheduling Problem for Real Machines
4. Predictable Parallel Performance: The BSP Model
5. Discrete Computing with CGM
6. Parallel Graph Algorithms for Coarse-Grained Multicomputers
7. Parallel Metaheuristics for Combinatorial Optimization
8. Parallelism in Logic Programming and Scheduling Issues
9. Parallel Asynchronous Team Algorithms
10. Parallel Numerical Methods for Differential Equations
Parallel and distributed computation has attracted a great deal of attention in recent decades. During this period, advances in computing and communication technologies, together with the falling cost of those technologies, played a central role in the rapid growth of interest in parallel and distributed computation across many areas of engineering and the sciences. Many real applications have been successfully implemented on a variety of platforms, ranging from purely shared-memory to fully distributed models, as well as hybrid approaches such as distributed shared-memory architectures.

Parallel and distributed computation differs from classical sequential computation in several major respects: the number of processing units, an independent local clock for each unit, the number of memory units, and the programming model. To capture this diversity, and depending on the level at which the problem is viewed, researchers have proposed several models that abstract the main characteristics or parameters (physical components or logical mechanisms) of parallel computers. The challenge in establishing a suitable model is to find a reasonable trade-off among simplicity, expressive power, and universality, so that the behavior of parallel applications can be studied and analyzed more precisely.