Distributed OMP - A Programming Model For SMP Clusters

Mark Leair, John Merlin, Steven Nakamoto, Vincent Schuster, and Michael Wolfe

The Fortran OpenMP API specifies a collection of compiler directives, library routines, and environment variables that can be used to express shared-memory parallelism in Fortran programs (http://www.openmp.org). Strengths of OpenMP include incremental parallelization, portability, reasonable ease of use, a global namespace, an SPMD execution model, and work-sharing constructs with implicit and explicit synchronization.
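As a reminder of the baseline model being extended, the following sketch shows a standard OpenMP Fortran work-sharing loop; the program and variable names are illustrative, not taken from the paper:

```fortran
      PROGRAM VECADD
! Illustrative example of an OpenMP work-sharing construct.
! The PARALLEL DO directive distributes loop iterations across
! threads; an implicit barrier synchronizes them at END PARALLEL DO.
      INTEGER, PARAMETER :: N = 1000
      REAL :: A(N), B(N)
      INTEGER :: I
      A = 1.0
      B = 2.0
!$OMP PARALLEL DO SHARED(A,B) PRIVATE(I)
      DO I = 1, N
         A(I) = A(I) + B(I)
      END DO
!$OMP END PARALLEL DO
      PRINT *, A(1)
      END PROGRAM VECADD
```

Note that OpenMP here specifies only how *work* is divided among threads; nothing in the directive says where the data in A and B reside, which is the gap the data-mapping extensions below address.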

Two major weaknesses are that OpenMP applies only to shared-memory multiprocessor systems, and that it offers very limited capability to control the locality of data objects. Both can limit effective scaling.

Distributed OMP is a new high-level programming model that extends the OpenMP API with additional data mapping directives, library routines, and environment variables to address these two weaknesses. Distributed OMP relies on a hierarchical, dual-level thread and process mechanism: lightweight threads run within each collection of SMP processors, under the control of higher-level node processes that communicate through one-sided communication. Data distribution is thus added to the work-sharing capability of OpenMP. General compatibility with OpenMP is maintained while the set of target platforms is effectively extended to include large SMPs, distributed-memory clusters, and SMP clusters.
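The abstract does not give concrete syntax for the data mapping directives, so the sketch below is an assumption: it shows how an HPF-style DISTRIBUTE annotation could plausibly combine with a standard OpenMP work-sharing loop under such a model. The `!$DOMP` sentinel and the `DISTRIBUTE` directive are hypothetical names for illustration only:

```fortran
      PROGRAM DIST_SKETCH
! Hypothetical sketch: the !$DOMP DISTRIBUTE directive below is an
! assumed, HPF-like data mapping extension, not confirmed syntax.
! It would map blocks of A and B onto the node processes, while the
! ordinary OpenMP directive divides the loop among threads within
! each node.
      INTEGER, PARAMETER :: N = 100000
      REAL :: A(N), B(N)
      INTEGER :: I
!$DOMP DISTRIBUTE A(BLOCK), B(BLOCK)
      A = 1.0
      B = 2.0
!$OMP PARALLEL DO SHARED(A,B) PRIVATE(I)
      DO I = 1, N
         A(I) = A(I) + B(I)
      END DO
!$OMP END PARALLEL DO
      END PROGRAM DIST_SKETCH
```

The intent of such a combination would be that each node process owns a contiguous block of A and B, iterations touching that block execute on the owning node's threads, and any remote data needed would move via the one-sided communication layer described above.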