Newsgroups: comp.parallel.mpi
From: sheffler@psd.RIACS.EDU (Thomas J Sheffler)
Subject: Re: C++ Binding of MPI
Organization: RIACS
Date: Fri, 5 May 1995 17:53:12 GMT
Message-ID: <SHEFFLER.95May5105312@psd.RIACS.EDU>


I was recently pointed to the Para++ project.  The essence of their
approach is to overload input and output operators (<<, >>) to do
communication.  Point-to-point and broadcast communication then looks
like standard C++ IO.  By overloading the operators, users can send
messages of heterogeneous data types without having to worry about the
details of MPI data-types.  Here's their URL.

	http://www.loria.fr/~ocoulaud/parapp.html

Personally, I advocate adopting a higher level approach.  I've written
a technical report about an experiment using C++ templates to provide
generic collections and algorithms in the spirit of the STL.  MPI is
used for the underlying communication, but a user need not think about
that.  The template library provides a single generic collection (a
parallel vector) and generic algorithms and combining functions for
parallel vectors of any type.  In short, the library provides a
data-parallel programming model for distributed address space parallel
computers.

The components of programs written using the library fall into four
main categories: 1) generic collection types, 2) generic combining
functions, 3) algorithms over collection types, and 4) user-defined
data types, which serve as the elements of a collection.

Algorithms on collections generalize to user-defined types and
operations.  Suppose a user defines a class of polar coordinates
called POLAR, with operator+ defined.  With the library, a user could
then define a parallel vector of POLARs, and the algorithm "add_scan"
(parallel prefix) will automatically be defined for this vector type.
The library handles wrapping the MPI scan functions and the
distribution of the vector over the available processors.  The TR,
called "A PORTABLE MPI-BASED PARALLEL VECTOR TEMPLATE LIBRARY", is
available at the following URL.

	ftp://riacs.edu/pub/Excalibur/excalibur.html

In general, I think that C++ based wrappers for MPI and other message
passing libraries should make use of the generic facilities of C++ so
that users need not describe their datatypes outside of the type
system of C++.  The data-type mechanism of MPI is very powerful, but
future efforts to embed MPI in C++ should attempt to hide the actual
construction of MPI data-types from the user, and should do so in a
manner consistent with C++.
