Newsgroups: comp.parallel.mpi
From: patrick@walt.CS.MsState.Edu (Patrick G. Bridges)
Subject: Re: MPI for shared memory machines?
Organization: Mississippi State University CS Dept.
Date: 05 Aug 1994 14:44:11 GMT
Message-ID: <PATRICK.94Aug5094411@walt.CS.MsState.Edu>

>>>>> "LB" == Luis Barriga <luis@gaia.electrum.kth.se> writes:
In article <LUIS.94Aug4233243@gaia.electrum.kth.se> luis@gaia.electrum.kth.se (Luis Barriga) writes:


    LB> Well, it seems like overkill. MPI on top of P4 which is on top
    LB> of SYSV which itself is a mess. 

I agree. That's why we don't usually do it. We just use MPI on top of
P4's message passing, which normally uses sockets. This performs quite
well on workstation clusters. On other machines, such as MPPs,
ANL/MSU's MPI implementation uses the native message passing
system (NX on the Paragon, EUI or EUIH on the SP1, CMMD on the
CM-5, the native library on the Meiko, and so on). This, of course,
yields even better performance.

    LB> I am also interested in high
    LB> performance, and I look forward to MPI/P4 on top of the
    LB> user-level threads that Solaris is offering.

You mean using the threads to run the different parts of your program
and using their shared memory to communicate?  This would seem
difficult to do, since all of the processes would have to run in the
same address space, causing all sorts of headaches (every global
variable in every "process" becomes shared, for a start). Now, having
each program be multi-threaded and using sockets to communicate would
be much more doable...

    LB> It seems that an ad-hoc implementation that follows the
    LB> standard would be faster until better times.  

I don't really follow your meaning...
--
*** Patrick G. Bridges  		patrick@CS.MsState.Edu ***
***      PGP 2.6 public key available via finger or server     ***
***             PGP 2.6 Public Key Fingerprint:		       ***
***      D6 09 C7 1F 4C 18 D5 18  7E 02 50 E6 B1 AB A5 2C      ***
***                #include <std/disclaimer.h>		       ***

