Newsgroups: comp.parallel.mpi
From: Bill Saphir <wcs@best.com>
Reply-To: wcs@best.com
Subject: Re: SGI's native MPI
Organization: Best Internet Communications
Date: Mon, 26 Aug 1996 21:06:05 -0700
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <3222742B.318A@best.com>

Sung-Eun Choi wrote:
> 
> I have a question about SGI's native implementation of MPI for the
> PowerChallenge.  If you have experience with this, please read on..
> 
> I've ported a "working" MPI application ("working" under the MPICH
> implementation) to the PowerChallenge.  In doing so, I rewrote the
> code to abide by the restrictions of the SGI implementation as stated
> in the man page.  Specifically, by default, SGI's MPI_Send() does not
> buffer shared-memory messages longer than 80 bytes.
> 
> They suggested two ways around the problem:
> 
>      1) Rewrite the code to use either MPI_Isend() or MPI_Bsend().
> 
>      2) Set MPI_SHMEM_BUFFER_THRESHOLD as needed.
> 
> I chose the first option, but it did not solve the problem.  I still
> need to set MPI_SHMEM_BUFFER_THRESHOLD (which incurs a performance
> hit).
> 
> Does anyone know if there are other restrictions that are not
> documented in the man pages?  Is there any on-line documentation other
> than the man pages?

Did you use Isend or Bsend for option 1? Unless you have
an unusual application, you probably want to use Isend().
There aren't any bugs that I know of with Isend() in SGI's
MPI. Do you know where (in which routine) your code
is blocking? It sounds like you might have a program bug.

Incidentally, the small amount of buffering done by SGI's 
MPI_Send() is within the letter (and by my reading, the spirit)
of the MPI standard. MPI applications shouldn't rely on *any* 
buffering. 
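For what it's worth, here is the classic pattern I'm talking about.
This is just a sketch of my guess at the failure mode, not Sung-Eun's
actual code: two ranks that each MPI_Send() a large message to the
other before receiving will deadlock on any implementation that
doesn't buffer -- which SGI's doesn't, above the threshold. Posting
the send with MPI_Isend() first removes the dependence on buffering.

```c
/* Assumes exactly 2 processes; run with something like
 *   mpirun -np 2 ./a.out
 */
#include <mpi.h>

#define N 4096   /* well over an 80-byte threshold */

int main(int argc, char **argv)
{
    int rank, peer;
    double sendbuf[N], recvbuf[N];
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;

    /* Unsafe version -- correct ONLY if the implementation buffers:
     *   MPI_Send(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
     *   MPI_Recv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &status);
     * Both ranks block in MPI_Send waiting for a matching receive
     * that neither one ever posts.
     */

    /* Safe version: the nonblocking send returns immediately, so
     * both ranks reach the receive and the exchange completes. */
    MPI_Isend(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req);
    MPI_Recv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &status);
    MPI_Wait(&req, &status);

    MPI_Finalize();
    return 0;
}
```

MPI_Sendrecv() is another portable way to write this exchange, but if
the communication pattern is at all irregular, Isend/Recv/Wait is the
one you want.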

Bill

