Newsgroups: comp.parallel.mpi
From: lederman@romano.cs.wisc.edu (Steve Huss-Lederman)
Subject: Re: Ordering of MPI messages in multi-threaded programs
Organization: CS Department, University of Wisconsin
Date: 20 Aug 1996 15:34:07 GMT
Message-ID: <LEDERMAN.96Aug20103407@romano.cs.wisc.edu>

In article <32196635.3C6B@inko.no> Anders Liverud <al@inko.no> writes:

 > Given a multithreaded application, having two threads per process, where 
 > each thread pair uses its distinct tag, the following situation might 
 > occur:
 > 
 > Rank 0 - Thread 0 : MPI_Send( ..., tag0, ...)
 > Rank 0 - Thread 1 : MPI_Send( ..., tag1, ...)
 > Rank 1 - Thread 0 : MPI_Recv( ..., tag1, ...)
 > Rank 1 - Thread 1 : MPI_Recv( ..., tag0, ...)
 > 
 > Clause 3.5, pg. 31, MPI Std. 12 June 1995, states that these calls are 
 > unordered. Is this program "unsafe"?

I cannot speak for the whole Forum, but I suspect the above example
could hang.  In MPI-1, the interaction between MPI and threads is
undefined.  Thus, I think it is legal for the blocking MPI_Send in
rank 0, thread 0 to block all other threads on rank 0, and likewise
for the blocking MPI_Recv in rank 1, thread 0 to block all others on
rank 1.  In that case neither tag1 call ever executes, so the tag0
and tag1 operations can never match and the program deadlocks.  One
would hope that a good thread package would schedule another thread
if one blocked, but I don't think MPI-1 requires this.

MPI-2 is looking into the possibility of defining how MPI would work
IF threads exist in an MPI implementation.  This issue is tied up in
how MPI-2 will deal with handlers that are called when an event
happens - when they must get called, how soon, etc.  These issues
appear in the external interfaces chapter of the latest MPI-2 drafts
if anyone wants to see the current thinking.  I emphasize that the
Forum has not yet seriously addressed this issue, so the text is
very tentative.

Steve

