Newsgroups: comp.parallel.pvm,comp.parallel.mpi
From: mfergus@relay.nswc.navy.mil (Michael Ferguson)
Subject: Re: PVM send buffers
Organization: Naval Surface Warfare Center - Dahlgren Div.
Date: Tue, 26 Mar 1996 14:25:48 GMT
Message-ID: <mfergus.1178237988A@relay2.nswc.navy.mil>

In Article <4j6cva$qtk@newshost.lanl.gov>, rbarrett@ranger.lanl.gov (Richard
Barrett) wrote:
>
>MPI_Buffer_attach lets you designate the memory address for the send buffer.
>Lets you set up a memory management scheme for your application. 
>
>Our problem is that even though our code is portable, our target machine these
>days is a Cray T3D. Message passing is built on top of its shmem stuff, so
>sends are not safe (i.e. the send returns before all the data has been sent
>to the target process). So we're experimenting with MPI_Wait (blocks until the
>data has been sent) vs. the buffering scheme. And as usual, this will be 
>MPI implementation dependent as well as problem dependent.
>

Barrett's last remark above reminded me of a related problem I ran into when
trying to port a program written for the Intel Paragon to the Cray T3D. I did
not use MPI; instead, I tried the "shmem stuff" directly. I ran into a similar
problem with 'shmem_wait', which I presume 'MPI_Wait' uses underneath, so I am
very interested in the results of the 'MPI_Wait' experiment - did it work?

