Newsgroups: comp.parallel.mpi,comp.parallel.pvm
From: "Richard Kaufmann" <richard@ilo.dec.com>
Subject: Re: Memory Channel: Message Passing Latency 5.8 usecs, ~60 MB/sec @ 4KB messages
Organization: Digital Equipment Corporation
Date: 26 Aug 1996 16:17:35 GMT
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Message-ID: <01bb9369$cdd729e0$9ec0b710@richard.ilo.dec.com>

I've been really pleased to see the activity around low-latency messaging. 
There are a few potential sources of confusion, and I hope this note will
help clear them up.

1. Paco Romero (romero@convex.com) rightly points out that there have been
faster PVMs and MPIs.  In fact, Digital's HPF, PVM and MPI have been at
five microseconds or under for more than a year now.  I just received the
latest HPF latency numbers for our AlphaServer 4100, and they're around
three microseconds.

However, this Digital HPF latency quote and Paco's numbers for the Convex
are for *shared memory* within an SMP, and not for inter-node
communication.

2. The Memory Channel latency numbers (5.8 usecs one-way latency for HPF,
6.9 usecs for MPI and 8.0 usecs for PVM) are for communicating between
separate SMP systems.

3. The Convex numbers weren't just for shared memory, they were for *local*
shared memory.  In e-mail I received from Paco, the Convex "non-local"
shared memory case ranges from 11 to 14 microseconds on the SPP1600.  The
SPP series is a NUMA machine; NUMA architectures can impose as much as a 5X
to 10X latency penalty for accessing non-local memory.
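For anyone trying to reproduce these comparisons: one-way latency figures
like the ones above are conventionally derived from a ping-pong test --
bounce a small message back and forth many times and take half the average
round-trip time.  Here's a minimal sketch of that measurement loop in
Python; the local socket pair is just a stand-in transport (it obviously
won't show Memory Channel-class numbers), and the function name is mine,
not from any of the packages discussed above:

```python
import socket
import threading
import time

def pingpong_latency(msg_size=8, iters=1000):
    """Estimate one-way latency as half the mean round-trip time."""
    a, b = socket.socketpair()      # stand-in transport, NOT Memory Channel
    payload = b"x" * msg_size

    def echo():                     # peer side: bounce each message back
        for _ in range(iters):
            data = b.recv(msg_size)
            b.sendall(data)

    t = threading.Thread(target=echo)
    t.start()
    start = time.perf_counter()
    for _ in range(iters):
        a.sendall(payload)          # ping
        a.recv(msg_size)            # pong
    elapsed = time.perf_counter() - start
    t.join()
    a.close()
    b.close()
    return elapsed / iters / 2.0    # seconds, one-way

print("%.1f usec one-way (local socketpair)" % (pingpong_latency() * 1e6))
```

The same loop structure applies whether the transport is shared memory
within an SMP or a real interconnect between nodes -- which is exactly why
it matters to say which one a quoted number was measured over.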

If anybody has some other hot numbers for PVM, MPI or HPF, please send them
to me.  I'm going to update the paper pretty soon, and would like the
latest and greatest.

Regards,

Richard Kaufmann
richard@ilo.dec.com

http://www.digital.com/info/hpc/ (for general HPC info from Digital)
http://www.digital.com/info/hpc/ref/refdoc.html (for the Hot Interconnects
paper)
http://tc-www.ilo.dec.com/~richard/home.html (personal home page)

p.s. MEMORY CHANNEL is a trademark of Encore Corporation.

