Newsgroups: comp.parallel.mpi
From: luis@gaia.electrum.kth.se (Luis Barriga)
Subject: Re: MPI for shared memory machines?
Organization: The Royal Institute of Technology, Stockholm, Sweden.
Date: 04 Aug 1994 21:32:42 GMT
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <LUIS.94Aug4233243@gaia.electrum.kth.se>

patrick@walt.CS.MsState.Edu (Patrick G. Bridges) replies:

   MPI is a message passing standard, and, as such, supports only message
   passing semantics. However, implementations are free to use whatever
   technique they want to implement message passing semantics. The
   reference implementation from Argonne and Mississippi State can sit on
   top of p4 (this is the preferred device for workstation clusters) and
   use p4's shared memory for message passing. This requires changing the
   options p4 is compiled with to define SYSV_IPC.

Well, it seems like overkill: MPI on top of p4, which sits on top of
SYSV IPC, which is itself a mess. I am also interested in high
performance, and what I look forward to is MPI/p4 on top of the
user-level threads that Solaris offers.

"   I don't know about the others, but Argonne/MSU implementation 
   can when configured with :
   configure -arch=solaris -device=ch_p4
"

It seems that an ad-hoc implementation that follows the standard would
be faster, at least until better implementations arrive.
--
-------
Luis Barriga				Tel:    +46-8-752 1379
Royal Institute of Technology		Fax:    +46-8-751 1793
Dept. of Teleinformatics                E-mail: luis@it.kth.se
S-164 40 Stockholm                      WWW:    http://www.it.kth.se/~luis
Sweden

