Newsgroups: comp.parallel.mpi
From: raja@convex.com (Raja Daoud)
Subject: Re: Ordering of MPI messages in multi-threaded programs
Organization: Hewlett-Packard Co.; Convex Division
Date: 24 Aug 1996 00:43:20 -0500
Message-ID: <4vm4po$nvt@tbag.rsn.hp.com>


>  *  Algorithms consisting of multiple weakly-coupled tasks

>  *  Machine architectures consisting of clusters of processors, one
>     shared memory space per cluster.

Request for clarification: do these scenarios fit the following model?

	while (not_done) {

		do_MPI_communication();	/* one thread makes all MPI calls */

#pragma do parallel over i		/* then all threads compute */
		for (i in computation_domain) {
			compute(i);
		}
	}

or do they require all threads in the process to make MPI calls
concurrently?

--Raja

-=-
Raja Daoud					Hewlett-Packard Co.
raja@rsn.hp.com					Convex Division