Newsgroups: comp.parallel.mpi
From: jamie@lumen.acl.lanl.gov (James Painter)
Subject: Re: MPICH and CRI PVM Performance on the T3D.
Organization: Los Alamos National Laboratory
Date: 14 Oct 1995 00:51:55 GMT
Message-ID: <JAMIE.95Oct13185155@lumen.acl.lanl.gov>

spike@trinity.llnl.gov ( Richard J. Procassini ) writes:
> 	I've got some timing results from the Cray T3D MPP that seem
> to make no sense whatsoever.  I am comparing the performance of the
> non-blocking, unbuffered (??) sends and receives available within MPI
> and the buffered blocking sends and non-blocking receives in CRI's
> PVM.  Consider the following code fragment, which is used to perform a
> boundary exchange of data in an explicit, unstructured mesh code:
> ..

It depends a lot on your message sizes.  The non-blocking MPI send
calls seem to have a lot of overhead on the T3D, causing high latency
for short messages.  Only at really large message sizes (e.g., 1 MB)
do they approach shmem_put speeds.
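For what it's worth, the kind of exchange being timed looks roughly
like the sketch below (this is NOT Rich's actual code -- buffer names,
sizes, and the ring topology are made up for illustration).  The point
is that each MPI_Isend/MPI_Irecv pair pays the per-call overhead, so
at small message sizes latency dominates:

```c
/* Minimal sketch of a nonblocking boundary exchange with one
 * neighbor on each side.  Needs an MPI environment (mpicc/mpirun);
 * N, sendbuf, recvbuf are illustrative, not from the original code. */
#include <mpi.h>
#include <stdlib.h>

#define N 1024          /* boundary size in doubles; try 1 vs 131072 */

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank - 1 + size) % size;   /* ring neighbors */
    int right = (rank + 1) % size;

    double *sendbuf = malloc(N * sizeof(double));
    double *recvbuf = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++)
        sendbuf[i] = (double) rank;

    MPI_Request req[2];

    /* Post the receive first, then the nonblocking send.  The fixed
     * per-call cost of these two calls is what shows up as latency
     * when N is small; only at ~1 MB does bandwidth dominate. */
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[1]);
    MPI_Waitall(2, req, MPI_STATUSES_IGNORE);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```

Timing that loop over a range of N on your machine should make the
crossover point obvious.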

We have a lot of timing numbers comparing EPCC MPI, CRI PVM, CRI SHMEM,
and a home-grown message-passing library.  See the links describing
ACLMPL at http://www.acl.lanl.gov/Viz/.

--Jamie

