Newsgroups: comp.parallel.pvm
From: sits@cs.anu.edu.au (David Sitsky)
Subject: MPI limitations?
Keywords: MPI
Organization: Australian National University
Date: 22 Jun 1994 01:01:26 GMT
Message-ID: <2u82h6INNhpi@dubhe.anu.edu.au>

This is an extract from Greg Burns' "skeleton for a dynamically load balanced 
master/slave application".

This is the last bit of code from the master function, which sends a die
message to all of the slaves so the application can terminate properly.

/*
 * Tell all the slaves to exit.
 */
        for (rank = 1; rank < ntasks; ++rank) {
                MPI_Send(0, 0, MPI_INT, rank, DIETAG, MPI_COMM_WORLD);
        }
}


This is yet another candidate for an MPI_Mcast-like operation.  Rather
than looping with MPI_Send, the code could be much more efficient if
MPI provided such an operator.  The collective operation MPI_Bcast can't
be used, since the slaves don't know in advance when the master is going
to "initiate" the operation -- they sit blocked in point-to-point
receives, waiting for either work or the die message.

The above code could be better expressed and executed on machines which 
support efficient broadcast operations as something like:

/*
 * Tell all the slaves to exit.
 */
        MPI_Mcast(0, 0, MPI_INT, DIETAG, MPI_COMM_WORLD);
}

which takes the same parameters as MPI_Send minus the destination rank,
since the message is sent to every member of the communicator
MPI_COMM_WORLD except the sender.

Given that these kinds of termination/cleanup phases are fairly common in
parallel programs, and that many other situations call for a
multicast-style operation, is this a limitation of MPI?  Will MPI-II
address this problem?

Cheers,
David Sitsky


