Newsgroups: comp.parallel.mpi
From: gdburns@osc.edu (Greg Burns)
Subject: Re: Ordering of MPI messages in multi-threaded programs
Organization: Ohio Supercomputer Center
Date: 20 Aug 1996 06:57:46 -0400
Message-ID: <4vc5nb$bkl@tbag.osc.edu>

In article <3219826C.2213@inko.no> Anders Liverud <al@inko.no> writes:
>> > Given a multithreaded application, having two threads per process, where
>> > each thread pair uses its distinct tag, the following situation might
>> > occur:
>> >
>> > Rank 0 - Thread 0 : MPI_Send( ..., tag0, ...)
>> > Rank 0 - Thread 1 : MPI_Send( ..., tag1, ...)
>> > Rank 1 - Thread 0 : MPI_Recv( ..., tag1, ...)
>> > Rank 1 - Thread 1 : MPI_Recv( ..., tag0, ...)
>> >
>> > Clause 3.5, pg. 31, MPI Std. 12 June 1995, states that these calls are
>> > unordered. Is this program "unsafe"?
>> 
>> Eric Salo replies:
>>
>> Well, first you have to locate a thread-safe implementation of MPI!
>> Assuming that you somehow obtain one, I don't see any problem with the
>> above code. The messages are unordered, yes, but that would only be a
>> problem if they shared both the same communicator and the same tag.
>
>Anders replies:
>
>Assume the messages use the same communicator, but different tags, as
>shown above.  This would require buffering of the first MPI_Send.  Is
>an MPI implementation required to handle this, or is this an "unsafe"
>program?


Both sends are posted.  The blocking of one does not prevent the other
from making progress: the implementation knows about both, and each
will complete even if the implementation has no place to buffer the
data.  Matching is by (communicator, source, tag), so the receive for
tag1 matches only the tag1 send, regardless of which send arrived
first.  The example is not unsafe.

I'm trying to decide if enough users want/need a thread-safe MPI
so we can put this feature into LAM.  I can think of a few reasons
not to do it, one of which is the performance hit due to locking
that the implementation will surely incur.  I welcome any replies
(private) giving reasons to the contrary.

-=-
Greg Burns				gdburns@osc.edu
Ohio Supercomputer Center		http://www.osc.edu/lam.html

