Newsgroups: comp.parallel.pvm
From: cameron@epcc.ed.ac.uk (Kenneth Cameron)
Subject: Re: MPI_Bcast problems?
Keywords: MPI
Organization: Edinburgh Parallel Computing Centre
Date: Wed, 15 Jun 1994 10:59:29 GMT
Message-ID: <CrFqJ6.4no@dcs.ed.ac.uk>

In article <2tl2p4$rmc@msuinfo.cl.msu.edu>, nupairoj@cps.msu.edu (Natawut Nupairoj) writes:
> In article <2tjlb9INNhgv@dubhe.anu.edu.au>, sits@cs.anu.edu.au (David Sitsky)
> writes:
> [stuff deleted]
> |> 
> |> Although a non blocking MPI_Bcast operation may solve this problem, I'm just
> |> curious why the MPI standard doesn't include a function MPI_Mcast which
> |> has the same semantics as pvm_mcast (ie non-collective call but message appears
> |> as an ordinary message to the destination processes).
> |> 
> |> This seems to me like an important function that isn't present in MPI.  Is there
> |> some reason why it wasn't included?  Are there any workarounds?
> 
> MPI has a "group" concept which allows you to create your own communication
> domain.  Thus, to do multicast in MPI, you can just create a new group and then
> use MPI_Bcast.
> 
> Natawut.
> nupairoj@cps.msu.edu

Natawut,
The thing about MPI_Bcast is that it is one of MPI's collective operations, which
means the message can only be received by a matching MPI_Bcast call made by every
other process in the group. To broadcast a message within a group so that it can be
picked up by an ordinary receive, you have to use ordinary sends.

David,
As a workaround, how about something like:
	MPI_Comm_size(comm, &gsize);
	for (i = 0; i < gsize; i++)
		MPI_Send(buffer, count, datatype, i, tag, comm);

(In practice you probably want to skip the sender's own rank in the loop -
MPI makes no guarantee that a send to self won't deadlock.)
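Spelt out as a complete program, the fan-out might look like the sketch below.
Process 0 plays the root and everyone else posts an ordinary MPI_Recv; the
names ROOT and TAG, and the use of a single int as payload, are just my own
choices for illustration:

```c
#include <stdio.h>
#include <mpi.h>

#define ROOT 0      /* rank doing the "mcast" - my choice, not fixed by MPI */
#define TAG  99     /* arbitrary message tag */

int main(int argc, char **argv)
{
    int rank, gsize, i, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &gsize);

    if (rank == ROOT) {
        value = 42;
        /* Fan the message out with ordinary sends, skipping ourselves
         * so we never do a (possibly deadlocking) send to self. */
        for (i = 0; i < gsize; i++)
            if (i != ROOT)
                MPI_Send(&value, 1, MPI_INT, i, TAG, MPI_COMM_WORLD);
    } else {
        /* The message arrives as an ordinary point-to-point message,
         * so a plain receive picks it up - no collective call needed. */
        MPI_Recv(&value, 1, MPI_INT, ROOT, TAG, MPI_COMM_WORLD, &status);
        printf("process %d got %d\n", rank, value);
    }

    MPI_Finalize();
    return 0;
}
```

Unlike pvm_mcast this is gsize-1 point-to-point messages rather than anything
the library can optimise into a tree, but the receivers do see it as an
ordinary message, which was the semantics asked for.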

-- 
e||) Kenneth Cameron (kenneth@ed)     Edinburgh Parallel Computing Centre e||)
c||c Applications Scientist, KTG.       University of Edinburgh, Scotland c||c
"Do not write obscure code. When you ignore this rule, try to make clear ... "
                                           - From a coding standards document.

