Newsgroups: comp.parallel.mpi
From: fap@acsu.buffalo.edu (Frank A Pellegrino)
Subject: MPI over ATM
Organization: UB
Date: 12 Sep 1995 18:58:03 GMT
Message-ID: <434l7r$od4@azure.acsu.buffalo.edu>


MPI has been ported to the ATM API provided by Fore Systems.

The main goal of the modification of MPI was to support Fore's ATM API
while also maintaining interoperability - the ability to use machines
that lack ATM hardware and are interconnected via Ethernet, FDDI,
etc.  This work is part of the "Affordable High Performance Computing"
effort being funded by NASA (NAG3-1548).

MPI first makes its default socket connections.  When a socket is
used for the first time after the initial connection, the connection
is renegotiated: if the two machines can establish an ATM connection
through Fore's ATM API, they switch to it.  In this way, each pair of
machines communicates over the best available path - ATM, Ethernet,
etc.  The P4 message passing library, included in the
Argonne/Mississippi State implementation of MPI (MPICH), was modified
to support the ATM API sockets and connection renegotiation.

As has been noted before, much of the advantage of high speed networks
is usually squandered in protocol and message passing library overhead
(see http://piranha.eng.buffalo.edu/ for papers etc).  There is
considerable overhead with both TCP/IP over ATM and the standard Fore
ATM API connection.  The Fore ATM API also does not provide guaranteed
reliable data transfer as TCP does, so some form of transport protocol
is still needed (just as PVM layers its own transport protocol over
UDP).

We have taken the Fore ATM driver and enhanced it, with the primary
motivation of supporting distributed computing.  We have streamlined
the Fore code, reduced buffer copies, added cell pacing support, and
added a transport layer.  The transport layer was added without any
additional buffer copies and has been optimized for ATM rather than
for the general environment TCP targets.  The transport protocol has
been pushed down the stack to the device driver level and avoids
involving the operating system as much as possible.  This gives the
API the reliability advantage of TCP/IP without its overhead.

The general availability of the MPI port will be announced in a future
posting, but we are looking for potential beta testers now.


Frank A. Pellegrino

****************************************************************************
Dept. of Electrical Engineering, State University of New York at Buffalo
E-mail : fap@eng.buffalo.edu            http://piranha.eng.buffalo.edu/
