Newsgroups: comp.parallel.mpi
From: jlu@cs.umr.edu (Eric Jui-Lin Lu)
Subject: Re: How are MPI_Send and MPI_Isend implemented in MPICH?
Organization: Computer Science Dept, Univ. of Missouri-Rolla
Date: 28 May 1995 04:45:15 GMT
Message-ID: <3q8v4r$ls1@hptemp1.cc.umr.edu>

In article <3prhe0$sr9@fido.asd.sgi.com>,
Eric Salo <salo@mrjones.engr.sgi.com> wrote:
>
>In article <3poft8$qbb@hptemp1.cc.umr.edu>, jlu@cs.umr.edu writes:
>> Hi *,
>> 
>> How are MPI_Send and MPI_Isend implemented in MPICH? Thanks!!
>
>Depends on which ADI you use. With the ch_p4 device, for example, the
>data is buffered. With the device that we are working on (and which will
>hopefully be out soon in 1.0.9) there is virtually no buffering.
>
>Eric Salo         Silicon Graphics Inc.             "Do you know what the
>(415)390-2998     2011 N. Shoreline Blvd, 7L-802     last Xon said, just
>salo@sgi.com      Mountain View, CA   94043-1389     before he died?"

Thanks for the info. My fault for not listing the details. Since I am
using ch_p4 on IRIX 5.3 and Solaris 2.3, as well as intelnx on the
iPSC/860, I'd like to know how both devices implement them.

According to what you said above and the MPI standard, is it true that

1. If a matching recv() has been posted, MPI_Send() delivers the data
   directly to the recv() and then returns, while MPI_Isend() returns
   immediately and the underlying send() delivers the data to the
   recv() in the background.
2. If no recv() has been posted, MPI_Send() buffers the data (locally,
   or at the remote end?) and then returns, while MPI_Isend() returns
   immediately and the underlying send() buffers the data (locally, or
   at the remote end?) in the background.

Please correct me (or comment). Thanks!!


  --Eric
-- 
***************************************---       Eric Jui-Lin Lu        ---*
* Obviousness is always the enemy of  *   \      jlu@cs.umr.edu        /   *
* correctness.  -- Bertrand Russell   *   / http://www.cs.umr.edu/~jlu \   *
***************************************---   Univ. of Missouri-Rolla    ---*

