Newsgroups: comp.parallel.mpi,comp.parallel.pvm
From: gdburns@osc.edu (Greg Burns)
Subject: MPI or PVM?
Organization: Ohio Supercomputer Center
Date: 9 Dec 1994 16:44:46 -0500
Message-ID: <3caj4e$sa6@tbag.osc.edu>


An article that has since expired locally questioned the existence of
reasons to choose MPI over PVM when writing message-passing codes.
Since we are working on MPI and obviously prefer it, some
possible reasons came to mind.


Top 10 Reasons to Prefer MPI Over PVM

10. MPI has more than one freely available, quality implementation.
9.  MPI defines a 3rd party profiling mechanism.
8.  MPI has full asynchronous communication.
7.  MPI groups are solid and efficient.
6.  MPI efficiently manages message buffers.
5.  MPI synchronization protects the user from 3rd party software.
4.  MPI can efficiently program MPP and clusters.
3.  MPI is totally portable.
2.  MPI is formally specified.
1.  MPI is a standard.


Top 10 Reasons to Prefer MPI Over PVM (annotated)

10. MPI has more than one freely available, quality implementation.
    There are at least LAM (ftp://tbag.osc.edu/pub/lam),
    MPICH (ftp://info.mcs.anl.gov/pub/mpi) and
    CHIMP (ftp://ftp.epcc.ed.ac.uk/pub/chimp).  The choice of development
    tools is not coupled to the programming interface.

9.  MPI defines a 3rd party profiling mechanism.
    A tool builder can extract profile information from MPI applications
    by supplying the MPI standard profile interface in a separate
    library, without ever having access to the source code of the
    main implementation.
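
    As a sketch of the idea (assuming an MPI-1 implementation that
    provides the name-shifted PMPI_ entry points), a tool builder can
    interpose on any MPI routine just by linking a wrapper library
    ahead of the MPI library:

    ```c
    /* Profiling wrapper: counts MPI_Send calls without touching the
     * MPI implementation's source.  PMPI_Send is the implementation's
     * real entry point, required by the MPI profiling interface. */
    #include <stdio.h>
    #include <mpi.h>

    static long sends = 0;    /* profile data gathered by the wrapper */

    int MPI_Send(void *buf, int count, MPI_Datatype dtype,
                 int dest, int tag, MPI_Comm comm)
    {
        sends++;
        return PMPI_Send(buf, count, dtype, dest, tag, comm);
    }

    int MPI_Finalize(void)
    {
        fprintf(stderr, "profiled %ld calls to MPI_Send\n", sends);
        return PMPI_Finalize();
    }
    ```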

8.  MPI has full asynchronous communication.
    Immediate send and receive operations can fully overlap computation.
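    For example (a minimal sketch for a 2-process job; work() stands
    in for any local computation), both transfers can proceed while
    the process computes:

    ```c
    /* Post both transfers, compute, then complete them. */
    #include <mpi.h>

    void work(void);    /* hypothetical local computation */

    void exchange(int rank, double *out, double *in, int n)
    {
        MPI_Request req[2];
        MPI_Status  stat[2];
        int peer = 1 - rank;            /* partner in a 2-process job */

        MPI_Irecv(in,  n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req[0]);
        MPI_Isend(out, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req[1]);

        work();                         /* compute while messages move */

        MPI_Waitall(2, req, stat);      /* complete both transfers */
    }
    ```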

7.  MPI groups are solid and efficient.
    Group membership is static.  There are no race conditions caused
    by processes independently entering and leaving a group.  New group
    formation is collective and group membership information is distributed,
    not centralized.
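
    A sketch of collective group formation (assuming at most 128
    processes, for brevity): every process in MPI_COMM_WORLD makes the
    same calls, and membership is fixed once the new communicator
    exists, so there is nothing to race on.

    ```c
    /* Build a communicator containing the even-ranked processes. */
    #include <mpi.h>

    MPI_Comm make_even_comm(void)
    {
        MPI_Group world, evens;
        MPI_Comm  evencomm;
        int nprocs, i, ranks[64];

        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        for (i = 0; 2 * i < nprocs; i++)
            ranks[i] = 2 * i;               /* even ranks only */

        MPI_Comm_group(MPI_COMM_WORLD, &world);
        MPI_Group_incl(world, i, ranks, &evens);
        MPI_Comm_create(MPI_COMM_WORLD, evens, &evencomm);  /* collective */
        MPI_Group_free(&world);
        MPI_Group_free(&evens);
        return evencomm;    /* MPI_COMM_NULL on odd-ranked processes */
    }
    ```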

6.  MPI efficiently manages message buffers.
    Messages are sent and received from user data structures, not from
    staging buffers within the communication library.  Buffering may, in some
    cases, be totally avoided.

5.  MPI synchronization protects the user from 3rd party software.
    All communication within a particular group of processes is marked with
    an extra synchronization variable, allocated by the system.  Independent
    software products within the same process do not have to worry about
    allocating message tags.
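
    A sketch of how a library (hypothetical names) insulates itself:
    duplicating the caller's communicator yields a fresh
    system-allocated context, so the library's tags can never collide
    with the application's, whatever values either side picks.

    ```c
    /* Library communication stays on a private duplicate. */
    #include <mpi.h>

    static MPI_Comm libcomm;    /* private to this hypothetical library */

    void lib_init(MPI_Comm user)
    {
        MPI_Comm_dup(user, &libcomm);   /* new context, same membership */
    }

    void lib_finalize(void)
    {
        MPI_Comm_free(&libcomm);
    }
    ```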

4.  MPI can efficiently program MPP and clusters.
    A virtual topology reflecting the communication pattern of the application
    can be associated with a group of processes.  An MPP implementation of
    MPI could use that information to match processes to processors in a
    way that optimizes communication paths.
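
    A sketch (assuming a 16-process job): attach a 2-D periodic grid
    to the group and ask for the neighbors along each dimension.
    Passing reorder = 1 lets an MPP implementation renumber ranks to
    match the grid to the physical interconnect.

    ```c
    /* 4x4 periodic grid topology with neighbor lookup. */
    #include <mpi.h>

    void grid_neighbors(int *left, int *right, int *up, int *down)
    {
        MPI_Comm grid;
        int dims[2]    = {4, 4};    /* assumes 16 processes */
        int periods[2] = {1, 1};    /* wrap around in both dimensions */

        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &grid);
        MPI_Cart_shift(grid, 0, 1, left, right);  /* neighbors, dim 0 */
        MPI_Cart_shift(grid, 1, 1, up,   down);   /* neighbors, dim 1 */
        MPI_Comm_free(&grid);
    }
    ```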

3.  MPI is totally portable.
    Recompile and run on any implementation.  With virtual topologies
    and efficient buffer management, for example, an application moving
    from a cluster to an MPP could even expect good performance.

2.  MPI is formally specified.
    Implementations have to live up to a published document of precise
    semantics.

1.  MPI is a standard.
    Its features and behaviour were arrived at by consensus in an open forum.
    It can change only by the same process.

-=-
Greg Burns				gdburns@tbag.osc.edu
Ohio Supercomputer Center		http://www.osc.edu/lam.html

