Newsgroups: comp.parallel.mpi
From: ravi@nereid.psc.edu (Ravishankar Subramanya)
Subject: Re: MPI vs native-CRAY
Organization: Pittsburgh Supercomputing Center
Date: 23 Aug 1996 18:19:31 GMT
Message-ID: <4vksnj$ds5@news.psc.edu>


In article <w4chgpww8cy.fsf@lumen.acl.lanl.gov>, James Painter <jamie@lumen.acl.lanl.gov> writes:
|> 
|> impellus@ames.ucsd.edu (Tom Impelluso) writes:
|> > can someone advise me how much slower (if, at all)
|> > a code will run using MPI as opposed to native message passing
|> > on the T3D.
|> 
|> "Native message passing" on the T3D means CRI PVM, and performance of
|> EPCC/CRI MPI is comparable or better in my experience.
|> 
|> There is a more efficient low level communications interface called
|> SHMEM, but it can't really be described as a message passing API.
|> SHMEM is a one sided communications library which supports "put" and
|> "get" operations to store things in or fetch things from remote
|> memory.
SHMEM also supports a number of other operations, including global
reductions, and takes little effort on the part of the user. It does
make the code less portable - but the speed-ups over MPICH and PVM
are significant.

EPCC's MPI (based on CHIMP) is built on the SHMEM layer and therefore
gives comparable speeds for many communication operations.

-Ravi

