Parallel Programming with MPI

Peter S. Pacheco

Publisher: Morgan Kaufmann, 1997, 418 pages

ISBN: 1-55860-339-5

Keywords: Programming

Last modified: May 28, 2021, 2:47 p.m.

This is the first introductory parallel programming book based on the Message-Passing Interface (MPI), the de facto industry standard adopted by major commercial vendors for programming parallel systems. Designed for use either as a self-paced tutorial for professionals or as a text for parallel programming/parallel computing courses, the book offers many fully developed examples to give readers hands-on experience programming parallel systems for computationally intensive applications.

The portability and efficiency of MPI, combined with the thorough grounding in parallel programming principles presented here, enable readers to obtain high performance on any parallel system, from a network of workstations to a parallel supercomputer. The clear exposition and down-to-earth approach demystify the complex and sometimes intimidating task of parallel programming.
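
To give a taste of the style of program the book develops (its Chapter 3 "Greetings!" example), here is a minimal sketch of an MPI program in C. The file name, variable names, and exact code are illustrative rather than the book's listing; any MPI implementation's mpicc/mpiexec wrappers should build and run it.

    /* greetings.c -- illustrative sketch in the spirit of the book's
     * Chapter 3 "Greetings!" program (not the book's exact listing).
     *
     * Build and run, assuming an MPI toolchain such as MPICH or Open MPI:
     *     mpicc -o greetings greetings.c
     *     mpiexec -n 4 ./greetings
     */
    #include <stdio.h>
    #include <string.h>
    #include <mpi.h>

    int main(int argc, char* argv[]) {
        int         rank;           /* rank of this process       */
        int         size;           /* total number of processes  */
        char        message[100];   /* buffer for the greeting    */
        MPI_Status  status;

        MPI_Init(&argc, &argv);                 /* start up MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank != 0) {
            /* every process except 0 sends a greeting to process 0 */
            sprintf(message, "Greetings from process %d!", rank);
            MPI_Send(message, strlen(message) + 1, MPI_CHAR,
                     0, 0, MPI_COMM_WORLD);
        } else {
            /* process 0 receives and prints one greeting per sender */
            int source;
            for (source = 1; source < size; source++) {
                MPI_Recv(message, 100, MPI_CHAR,
                         source, 0, MPI_COMM_WORLD, &status);
                printf("%s\n", message);
            }
        }

        MPI_Finalize();                         /* shut down MPI */
        return 0;
    }

Run with four processes, process 0 prints three greetings, one from each of the other ranks.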

Parallel Programming with MPI features:

  • fully developed program examples to introduce each concept
  • extensive coverage of performance and debugging
  • presentation of a variety of approaches to the problem of basic I/O on parallel machines
  • a range of challenging exercises and programming assignments
  • portable source code online in both C and Fortran

Contents

  1. Introduction
    1. The Need for More Computational Power
    2. The Need for Parallel Computing
    3. The Bad News
    4. MPI
    5. The Rest of the Book
    6. Typographic Conventions
  2. An Overview of Parallel Computing
    1. Hardware
      1. Flynn's Taxonomy
      2. The Classical von Neumann Machine
      3. Pipeline and Vector Architectures
      4. SIMD Systems
      5. General MIMD Systems
      6. Shared-Memory MIMD
      7. Distributed-Memory MIMD
      8. Communication and Routing
    2. Software Issues
      1. Shared-Memory Programming
      2. Message Passing
      3. Data-Parallel Languages
      4. RPC and Active Messages
      5. Data Mapping
    3. Summary
    4. References
    5. Exercises
  3. Greetings!
    1. The Program
    2. Execution
    3. MPI
      1. General MPI Programs
      2. Finding Out about the Rest of the World
      3. Message: Data + Envelope
      4. Sending Messages
    4. Summary
    5. References
    6. Exercises
    7. Programming Assignment
  4. An Application: Numerical Integration
    1. The Trapezoidal Rule
    2. Parallelizing the Trapezoidal Rule
    3. I/O on Parallel Systems
    4. Summary
    5. References
    6. Exercises
    7. Programming Assignments
  5. Collective Communication
    1. Tree-Structured Communication
    2. Broadcast
    3. Tags, Safety, Buffering, and Synchronization
    4. Reduce
    5. Dot Product
    6. Allreduce
    7. Gather and Scatter
    8. Summary
    9. References
    10. Exercises
    11. Programming Assignments
  6. Grouping Data for Communication
    1. The Count Parameter
    2. Derived Types and MPI_Type_struct
    3. Other Derived Datatype Constructors
    4. Type Matching
    5. Pack/Unpack
    6. Deciding Which Method to Use
    7. Summary
    8. References
    9. Exercises
    10. Programming Assignments
  7. Communicators and Topologies
    1. Matrix Multiplication
    2. Fox's Algorithm
    3. Communicators
    4. Working with Groups, Contexts, and Communicators
    5. MPI_Comm_split
    6. Topologies
    7. MPI_Cart_sub
    8. Implementation of Fox's Algorithm
    9. Summary
    10. References
    11. Exercises
    12. Programming Assignments
  8. Dealing with I/O
    1. Dealing with stdin, stdout, and stderr
      1. Attribute Caching
      2. Callback Functions
      3. Identifying the I/O Process Rank
      4. Caching an I/O Process Rank
      5. Retrieving the I/O Process Rank
      6. Reading from stdin
      7. Writing to stdout
      8. Writing to stderr and Error Checking
    2. Limited Access to stdin
    3. File I/O
    4. Array I/O
      1. Data Distributions
      2. Model Problem
      3. Distribution of the Input
      4. Derived Datatypes
      5. The Extent of a Derived Datatype
      6. The Input Code
      7. Printing the Array
      8. An Example
    5. Summary
    6. References
    7. Exercises
    8. Programming Assignments
  9. Debugging Your Program
    1. Quick Review of Serial Debugging
      1. Examine the Source Code
      2. Add Debugging Output
      3. Use a Debugger
    2. More on Serial Debugging
    3. Parallel Debugging
    4. Nondeterminism
    5. An Example
      1. The Program?
      2. Debugging The Program
      3. A Brief Discussion of Parallel Debuggers
      4. The Old Standby: printf/fflush
      5. The Classical Bugs in Parallel Programs
      6. First Fix
      7. Many Parallel Programming Bugs Are Really Serial Programming Bugs
      8. Different Systems, Different Errors
      9. Moving to Multiple Processes
      10. Confusion about I/O
      11. Finishing Up
    6. Error Handling in MPI
    7. Summary
    8. References
    9. Exercises
    10. Programming Assignments
  10. Design and Coding of Parallel Programs
    1. Data-Parallel Programs
    2. Jacobi's Method
    3. Parallel Jacobi's Method
    4. Coding Parallel Programs
    5. An Example: Sorting
      1. Main Program
      2. The "Input" Functions
      3. All-to-all Scatter/Gather
      4. Redistributing the Keys
      5. Pause to Clean Up
      6. Find_alltoall_send_params
      7. Finishing Up
    6. Summary
    7. References
    8. Exercises
    9. Programming Assignments
  11. Performance
    1. Serial Program Performance
    2. An Example: The Serial Trapezoidal Rule
    3. What about the I/O?
    4. Parallel Program Performance Analysis
    5. The Cost of Communication
    6. An Example: The Parallel Trapezoidal Rule
    7. Taking Timings
    8. Summary
    9. References
    10. Exercises
    11. Programming Assignments
  12. More on Performance
    1. Amdahl's Law
    2. Work and Overhead
    3. Sources of Overhead
    4. Scalability
    5. Potential Problems in Estimating Performance
      1. Networks of Workstations and Resource Contention
      2. Load Balancing and Idle Time
      3. Overlapping Communication and Computation
      4. Collective Communication
    6. Performance Evaluation Tools
      1. MPI's Profiling Interface
      2. Upshot
    7. Summary
    8. References
    9. Exercises
    10. Programming Assignments
  13. Advanced Point-to-Point Communication
    1. An Example: Coding Allgather
      1. Function Parameters
      2. Ring Pass Allgather
    2. Hypercubes
      1. Additional Issues in the Hypercube Exchange
      2. Details of the Hypercube Algorithm
    3. Send-receive
    4. Null Processes
    5. Nonblocking Communication
      1. Ring Allgather with Nonblocking Communication
      2. Hypercube Allgather with Nonblocking Communication
    6. Persistent Communication Requests
    7. Communication Modes
      1. Synchronous Mode
      2. Ready Mode
      3. Buffered Mode
    8. The Last Word on Point-to-Point Communication
    9. Summary
    10. References
    11. Exercises
    12. Programming Assignments
  14. Parallel Algorithms
    1. Designing a Parallel Algorithm
    2. Sorting
    3. Serial Bitonic Sort
    4. Parallel Bitonic Sort
    5. Tree Searches and Combinatorial Optimization
    6. Serial Tree Search
    7. Parallel Tree Search
      1. Par_dfs
      2. Service_requests
      3. Work_remains
      4. Distributed Termination Detection
    8. Summary
    9. References
    10. Exercises
    11. Programming Assignments
  15. Parallel Libraries
    1. Using Libraries: Pro and Con
    2. Using More than One Language
    3. ScaLAPACK
    4. An Example of a ScaLAPACK Program
    5. PETSc
    6. A PETSc Example
    7. Summary
    8. References
    9. Exercises
    10. Programming Assignments
  16. Wrapping Up
    1. Where to Go from Here
    2. The Future of MPI
  A. Summary of MPI Commands
    1. Point-to-Point Communication Functions
      1. Blocking Sends and Receives
      2. Communication Modes
      3. Buffer Allocation
      4. Nonblocking Communication
      5. Probe and Cancel
      6. Persistent Communication Requests
      7. Send-receive
    2. Derived Datatypes and MPI_Pack/Unpack
      1. Derived Datatypes
      2. MPI_Pack and MPI_Unpack
    3. Collective Communication Functions
      1. Barrier and Broadcast
      2. Gather and Scatter
      3. Reduction Operations
    4. Groups, Contexts, and Communicators
      1. Group Management
      2. Communicator Management
      3. Inter-communicators
      4. Attribute Caching
    5. Process Topologies
      1. General Topology Functions
      2. Cartesian Topology Management
      3. Graph Topology Management
    6. Environmental Management
      1. Implementation Information
      2. Error Handling
      3. Timers
      4. Startup
    7. Profiling
    8. Constants
    9. Type Definitions
  B. MPI on the Internet
    1. Implementations of MPI
    2. The MPI FAQ
    3. MPI Web Pages
    4. MPI Newsgroup
    5. MPI-2 and MPI-IO
    6. Parallel Programming with MPI

Reviews

Parallel Programming with MPI

Reviewed by Roland Buresund

Good ******* (7 out of 10)

Last modified: May 21, 2007, 3:16 a.m.

A good book if you want to learn to program with MPI. Of course, you need to understand parallel programming in general, first.
