Newsgroups: comp.parallel.mpi
From: James Painter <jamie@lumen.acl.lanl.gov>
Subject: Re: MPI vs native-CRAY
Organization: Los Alamos National Laboratory
Date: 21 Aug 1996 18:49:49 -0600
Message-ID: <w4chgpww8cy.fsf@lumen.acl.lanl.gov>


impellus@ames.ucsd.edu (Tom Impelluso) writes:
> can someone advise me how much slower (if, at all)
> a code will run using MPI as opposed to native message passing
> on the T3D.

"Native message passing" on the T3D means CRI PVM, and in my
experience the performance of EPCC/CRI MPI is comparable to or better
than that.

There is a more efficient low-level communications interface called
SHMEM, but it can't really be described as a message passing API.
SHMEM is a one-sided communications library which supports "put" and
"get" operations to store things in or fetch things from remote
memory.
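To give a flavor of the one-sided model, here is a minimal sketch in
the style of OpenSHMEM (the open descendant of Cray's SHMEM library);
the exact T3D-era Cray calls differed in detail, so take the names
below as illustrative rather than the CRI interface itself:

```c
/* Sketch of one-sided put/get, OpenSHMEM-style calls assumed. */
#include <stdio.h>
#include <shmem.h>

/* Symmetric data object: allocated at the same address on every PE,
 * so a remote PE can address it directly. */
static long buf[4];

int main(void)
{
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    long src[4] = { me, me + 1, me + 2, me + 3 };

    /* One-sided "put": PE 0 stores directly into PE 1's memory.
     * Note there is no matching receive call on PE 1. */
    if (me == 0 && npes > 1)
        shmem_long_put(buf, src, 4, 1);

    shmem_barrier_all();    /* make sure the put is globally visible */

    if (me == 1)
        printf("PE 1 got %ld %ld %ld %ld\n",
               buf[0], buf[1], buf[2], buf[3]);

    shmem_finalize();
    return 0;
}
```

The key difference from send/receive is that the target process takes
no part in the transfer, which is why SHMEM can be so much cheaper
than a full message passing layer.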

We've run a bunch of timing tests comparing PVM, EPCC/CRI MPI and
SHMEM on the T3D, as well as a home-grown message passing API built on
top of SHMEM.

There is a webized version of the paper at:
 http://sawww.epfl.ch/SIC/SA/publications/SCR95/7-95-4a.html

See the "Timings" section for the timing graphs.

There are more timing graphs at http://www.acl.lanl.gov/Viz/aclmpl_timings.ps
and a PostScript version of the paper at
http://www.acl.lanl.gov/Viz/mpl_paper.ps

--Jamie

